entry_id: http://arxiv.org/abs/2306.04299v1
published: 2023-06-07 10:02:16
title: Timing Process Interventions with Causal Inference and Reinforcement Learning
authors: Hans Weytjens, Wouter Verbeke, Jochen De Weerdt
primary_category: cs.LG
categories: cs.LG, cs.AI
Timing Process Interventions with Causal Inference and Reinforcement Learning
Hans Weytjens (ORCID 0000-0003-4985-0367), Wouter Verbeke (ORCID 0000-0002-8438-0535), Jochen De Weerdt (ORCID 0000-0001-6151-0504)
Research Centre for Information Systems Engineering (LIRIS), Faculty of Economics and Business, KU Leuven, Leuven, Belgium
{hans.weytjens,wouter.verbeke,jochen.deweerdt}@kuleuven.be
July 31, 2023
The shift from the understanding and prediction of processes to their optimization offers great benefits to businesses and other organizations. Precisely timed process interventions are the cornerstones of effective optimization. Prescriptive process monitoring (PresPM) is the sub-field of process mining that concentrates on process optimization. The emerging PresPM literature identifies state-of-the-art methods, causal inference (CI) and reinforcement learning (RL), without presenting a quantitative comparison. Most experiments are carried out using historical data, causing problems with the accuracy of the methods' evaluations and preempting online RL. Our contribution consists of experiments on timed process interventions with synthetic data that renders genuine online RL and the comparison to CI possible, and allows for an accurate evaluation of the results. Our experiments reveal that RL's policies outperform those from CI and are more robust at the same time. Indeed, the RL policies approach perfect policies. Unlike CI, the unaltered online RL approach can be applied to other, more generic PresPM problems such as next best activity recommendations. Nonetheless, CI has its merits in settings where online learning is not an option.
§ INTRODUCTION
Moving from predicting the outcome of a running process to optimizing it with respect to a goal implies making decisions about actions that will change its course. In its most basic form, the optimization of a process assumes a correctly timed intervention (or sometimes non-intervention) in it. Examples include escalating a customer complaint to higher management echelons, maintaining a machine, calling a customer to speed up an administrative process or to maximize turnover, conducting an additional test to reduce a patient's length of stay in hospital, etc. Prescriptive Process Monitoring (PresPM) is a young subfield of Process Mining (PM) studying business process optimization methods. Optimization in the PresPM context concerns decisions an agent has to take to optimize the outcome of a running case given certain goals (metrics). It does not concern enhancing the underlying process itself, as practiced in PM. Our literature review of PresPM (see below) reveals that two methods, reinforcement learning (RL) <cit.> and causal inference (CI) <cit.>, emerge as pathways. However, a quantitative comparison is currently missing. Most PresPM research works with offline historical data, which creates two limitations: online RL is not possible, and experimental results prove difficult to quantify accurately for lack of counterfactuals.
This research gap defines our contribution. In our experiments, we introduce online RL to business processes and benchmark it against CI. Our use of synthetic data, rather than historical event logs as in earlier PresPM research, is not only instrumental in permitting both online RL and CI, but also enables deeper insights, a correct evaluation of the experiments' results, and the calculation of perfect policies as an absolute benchmark.
Solutions to timed process interventions can be seen as a gateway to solving the more generic problem of recommending the next best activities in a process. In timed interventions, the agent has one chance to make an intervention sometime during the process, whereas, in next best activity problems, the agent has to choose between all possible activities at every step in the process. Both problem types are structurally the same: the RL algorithms for the former can be transferred to the latter without modification. The relative simplicity of timed process interventions in terms of combinatorial possibilities (state space) results in lower data and computational requirements. It also permits easier insights into the characteristics of the models used and should lead to faster real-world adoption. Furthermore, a vast number of relevant applications for timed process interventions exist. For these reasons, our experiments focus on timed process interventions rather than next best activity recommendations.
This paper is structured as follows. Section <ref> introduces concepts pertaining to PresPM and refers to related work around them. We then move on to the experimental Section <ref> comparing RL to CI using two synthetic datasets. The insights gained from the literature review and experiments lead to a deeper discussion of the two PresPM methods in Section <ref>. We conclude this paper and suggest avenues for future work in Section <ref>.
§ BACKGROUND AND RELATED WORK
§.§ Preliminaries
The goal of PresPM <cit.> is to recommend actions for ongoing cases in order to optimize their outcomes as measured by a certain metric. Before the appearance of ML, most prescriptive problems relating to processes concerned industrial processes and were approached by operations research methods, requiring mathematical models <cit.> of the problems. Later, ML opened new, model-free opportunities, leading to a substantial body of research about its application in predictive maintenance and process control (regulating a system to keep certain parameters within a defined range). For overviews, the interested reader is referred to <cit.> and <cit.>. Within PresPM, a much more recent discipline, CI and RL emerged as two promising methods for process outcome optimization and will be the subject of this research.
The business processes studied in PresPM exhibit a substantial degree of variation, in practice often explained by the humans in the loop. We believe that the high degree of variation in these processes makes them generic, and hence, the results of this work also apply to more constrained or structured processes such as industrial processes.
In this work, we restrict our focus to the optimization of a single process in isolation. This is an important assumption given that, in practice, many processes affect and even interact with each other. Action recommendations may impact each other, e.g., when resources are limited. To the best of our knowledge, the very large majority of PresPM research assumes process independence. We also assume that decision points in processes are known, after either consulting experts and/or applying PM techniques to identify decision rules <cit.> and causal relationships (e.g., <cit.>, <cit.>, <cit.>, <cit.>).
§.§ Causal inference
By default, CI <cit.> works with offline, logged data. The field can be subdivided into two components. The first concerns the detection of causal relationships: “Which treatment(s) have an effect on the process' outcomes?”. The second CI component involves estimating the effect of treatments. We concentrate on the individual treatment effect (ITE) <cit.>, which is the difference between predicted outcomes of (possible) treatment(s) and non-treatment for a given sample. For example, when our model predicts that calling (treatment) customer x will increase revenue by 200, while x is expected to reduce sales by 100 if not called (non-treatment), then the ITE_x is 300. Note that the ITE is an expectation, not a hard-coded causality. Usually, a threshold (e.g., 50 in our example) is determined to arrive at a policy for selecting (non-) treatments. The main challenge is the absence of counterfactuals in the dataset. A counterfactual is the unobserved outcome of a case assuming another treatment than the one factually applied (not to be confused with negative or forbidden events or traces as in <cit.>). In the absence of randomized controlled trials (RCTs), realized by a policy of random interventions, selection bias will occur as the data-gathering policy leads to different distributions of treatment and non-treatment samples in the datasets. Combating selection bias is an important aspect of CI (e.g., <cit.>). Real-world CI applications include marketing (e.g., churn reduction: <cit.>, discounting, ...), education (e.g., <cit.>), recommender systems, etc. Most of these applications, however, are cross-sectional rather than longitudinal: There are no timing issues, let alone sequential treatments as seen in processes.
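To make the decision rule concrete, a minimal sketch of direct-CI scoring with a threshold is given below. The outcome model and field names are hypothetical; only the arithmetic (predicted treated outcome minus predicted untreated outcome, compared against a threshold) follows the text.

    import numpy as np

    def ite_policy(outcome_model, prefix_features, threshold=50.0):
        # Direct-CI decision rule sketch: predict the outcome of the same prefix
        # with treatment=1 and treatment=0, take the difference (the ITE), and
        # treat only when the ITE exceeds the threshold. `outcome_model` is a
        # hypothetical regressor whose last input feature is the treatment flag.
        y_treated = outcome_model.predict(np.append(prefix_features, 1.0))    # e.g. +200
        y_untreated = outcome_model.predict(np.append(prefix_features, 0.0))  # e.g. -100
        ite = y_treated - y_untreated                                         # e.g. 300
        return ("treat" if ite > threshold else "no-treat"), ite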
To the best of our knowledge, CI plays no role in the optimization of industrial processes. In line with the overview provided by <cit.>, we found no papers in the sparse PresPM literature published before 2020 claiming to use CI for process outcome optimization. <cit.> and <cit.> did apply a form of indirect CI with that aim, albeit without carrying the CI label. The indirect approach consists of first predicting the most likely (or distribution of) suffix(es) for every possible treatment given a certain prefix. In the second step, another model predicts outcomes for all these suffixes, which will then be used to choose a treatment. In the direct CI approach, the process outcomes for all possible treatments for a given prefix are directly predicted. Direct CI implementations can be found in <cit.> (without timing considerations) and <cit.> and <cit.> (including timing). With the exception of <cit.>, none of the PresPM papers addresses selection bias. <cit.>, in contrast, use a sequence-to-sequence recurrent NN that automatically builds a treatment-invariant representation of the prefixes to combat the selection bias in a medical treatment problem.
The lack of counterfactuals in the test set stemming from the use of offline data hinders the accurate evaluation of CI methods' results: For a given prefix, the action recommended by the CI model may be absent from the cases in the test set. Researchers cope with this problem by relying on a predictive model to estimate outcomes, a distance-minimizing algorithm to find the nearest case in the training or dataset, or a generative model that produces augmented data <cit.>.
§.§ Reinforcement learning
RL <cit.> is an important class of ML algorithms learning policies that guide an agent's behavior or sequence of actions in an environment in order to maximize an expected cumulative reward. Early successes in computer games drew much attention to RL, which has since then expanded not only into industrial processes but also into many other fields such as robotics (e.g., autonomous driving: <cit.>), healthcare <cit.>, engineering, finance, etc. RL comes in many flavors. We will discuss and use the widespread Q-learning variant. In processes, the most important reward is often the process outcome that becomes known at the conclusion (last event) of the case. Regardless, intermediate rewards could be easily included in RL should they occur. The cost of actions can be viewed as a negative reward. At its core, RL assumes an online environment that the agent can interact with. RL does not need an environment (→ process) model. Instead, real (or simulated) episodes (→ cases) are just executed and their rewards (→ outcomes) are observed. For every encountered state (→ prefix), a state-action value (Q) is learned for every possible action. Q represents the state-action value for the next state (→ prefix) plus the reward minus the cost of that action to get to that next state (a transition). For any given prefix, the state-action values can be interpreted similarly as the effects of the possible treatments learned by CI. The difference between the state-action value for a treatment and the one for the non-treatment corresponds to the ITE at that state (prefix). The state-action value of the last prefix of a (completed) process is its final outcome. Given the size of the state space (→ number of possible prefixes) in most processes, these state-action values cannot be stored in tabular form (Q-table). Instead, they are approximated by an NN. This is called deep reinforcement learning <cit.>. At every state (→ prefix), the policy will be to choose the action with the highest relative state-action value. Learning is achieved by playing out many processes and iteratively updating the Q-table NN after each (batch of) observed rewards (→ outcomes). In order to explore all areas of the state space and to prevent prematurely settling into a sub-optimal policy, a certain degree of exploration is introduced: The agent will sometimes overrule the policy and choose another action, especially at the beginning of the learning process. RL has found many applications in process outcome optimization, e.g., in robotics <cit.> and industrial process control <cit.>, but few researchers <cit.> apply RL to PresPM process optimization.
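For reference, the one-step Q-learning update sketched above can be written out in generic notation (not taken verbatim from the paper): for an observed transition from prefix s to prefix s' via action a with reward r and action cost c(a),

Q(s,a) ← Q(s,a) + η [ r - c(a) + γ max_a' Q(s',a') - Q(s,a) ],

with learning rate η and discount factor γ (γ = 1 is a natural choice when the only reward is the final process outcome). In deep Q-learning, this update becomes a gradient step of the NN towards the bracketed target.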
In practice, gathering data by acting in the real world is often too slow and too expensive. It can even be dangerous at the early stages of learning, when the NN is insufficiently trained and significant exploration happens. An entire spectrum of alternative data-gathering methods at different proximities to reality exists. In academia, synthetic data are instrumental in investigating and comparing methods. For instance, <cit.> work with synthetic data for industrial process control. Simulation models can be rooted in the laws of physics or even social sciences. <cit.> use simulation in industrial processes, <cit.> use simulation to train robots and investigate pathways to close the reality gap, the mismatch between reality and the simulation. In <cit.>, a Fogg behavior model is used for healthy habit formation for cancer patients. Digital twins <cit.> are extended simulations benefiting from imputed real-time data. <cit.> deploy a miniature factory in which their agent can act. In practice and academia alike, there is great interest in offline RL, i.e., working with data from existing datasets (supervised data). This can be achieved by mining models from the data. PM discovery techniques, for example, yield grid graphs of business processes as representations of the agent's environment in <cit.> and <cit.>. Research on offline RL <cit.> also suggests using predictive models trained on the dataset to guide the agent through its environment and estimate outcomes, somewhat similar to indirect CI. Alternatively, nearest-neighbors algorithms can force the agent to remain in the vicinity of the data-gathering policy. The RL agent can even be forced to remain within the boundaries of the dataset, which of course makes it harder to improve upon the original data-gathering policy. With the exception of purely synthetic data, these data-gathering strategies can all be used to derive a policy with which to initialize an online learning agent. This dual strategy greatly speeds up training, avoids expenses, and minimizes mistakes.
§.§ Problem complexity
From the analysis of the related work, we identify two main drivers of problem complexity: action width and action depth. These drivers facilitate understanding the problem at hand and will be helpful in positioning our experiments (Section <ref>) and the subsequent discussion of CI and RL (Section <ref>).
Action width relates to the number of different actions (set size) available to an agent: Actions can be binary (interventions e.g., “apply” or “don't apply”), continuous (e.g., a regulating valve) or multi-class (e.g., several options such as “visit”, “call” or “email” customer or “do nothing”). In this paper, we define multi-class actions as treatments. Interventions are thus a binary subclass of treatments. Asymptotically, the set of multi-class actions becomes the set of all possible activities, even including attributes, as in the next best activity prediction. The action depth is a measure of the longitudinal dimension and depends on how many consecutive actions can be taken within a process, and when. An action's timing can be predetermined (fixed or irrelevant) or an action can happen once at any time during the process' lifetime (one-off, as in timed process interventions). Sequences of (repeated) actions are another, more complicated, setting. Finally, actions can be continuous such as steering an autonomous vehicle to keep it in its lane.
§.§ Research gap
There exists no quantitative comparative analysis of CI and RL for process outcome optimization (nor for any other) problems. This is the main research gap we address in this paper. As explained in Subsection <ref>, the use of historical data for the test sets hampers the evaluation of CI methods for lack of counterfactuals. A similar issue appears in the RL literature, which exhibits a prevalence of non-real-life work. Here, simulations or models based on reality are used to train and test online models without considering the performance on the original problems, thus ignoring the reality gap (exception: <cit.>). We address this issue by making use of entirely artificial synthetic data in our experiments. This form of data allows us to accurately evaluate CI, to test online RL and eliminate the reality gap, and to share the same test set between both methods. Additionally, none of the aforementioned papers compared their results to the perfect-policy results needed to gain an intuition for the absolute performance of their methods. This can be explained by the majority of the discussed papers treating rather complex problems for which computing such a perfect policy is intractable, hence the need for techniques such as CI and RL. We opt for timed interventions, which have a narrow action width (binary) and a shallow action depth (at most one intervention per process), so that we can easily compute the results of a perfect policy. Section <ref> describes our contribution: an accurately evaluated comparison of CI, RL, and the perfect solution based on synthetic datasets.
§ EXPERIMENTAL COMPARISON OF CI AND RL
In the following three subsections, we describe our data generation, experimental setup, and results.
§.§ Data generation
We work with synthetic processes generating the environment and data for our experiments in order to compute counterfactuals that are not available in real-world data. Knowing the counterfactuals allows for accurate evaluations of the experiments. Moreover, given a sufficiently small state space, a perfect policy can be derived and used to judge the absolute performance of methods. The same synthetic generative model can create both the offline dataset for CI and the online environment required for online RL. We first describe the two processes and then motivate our choice.
§.§.§ Two synthetic processes
The process models as Petri nets and the key features of our two synthetic processes are shown in Figure <ref> and Table <ref> respectively. The first process is a sequence of three activities, either "A" or "B", each with an associated integer attribute. At one of the three events, a (free) intervention can be made. The outcome of the process is the sum of the attributes, where the attribute of the event at which the intervention took place is multiplied by 2 if activity "A" occurred at least once in the process, and by -2 otherwise. The second process consists of five events and includes both an AND and an XOR construct. Every case carries an integer case attribute known from the start. Event attributes are integers as well, and an intervention can be made once per case at any event. When an intervention is made (at a cost of 5), the attribute corresponding to the first of "D1" or "D2" to occur thereafter (if any) will be multiplied by 2 if the process passes through its "B" branch, and by -4 otherwise. The final outcome is the sum of the attributes times the case attribute.
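As an illustration, a minimal generator for the first process could look like the sketch below. The sampling distributions are assumptions (the paper does not specify them); only the outcome rule follows the description above.

    import random

    def generate_first_process(intervention_at=None):
        # First synthetic process (sketch): three events, each "A" or "B" with an
        # integer attribute. Intervening at event i multiplies that event's
        # attribute by 2 if an "A" occurs anywhere in the case, by -2 otherwise.
        # The uniform distributions below are illustrative assumptions.
        activities = [random.choice(["A", "B"]) for _ in range(3)]
        attributes = [random.randint(-10, 10) for _ in range(3)]
        factor = 2 if "A" in activities else -2
        outcome = sum(factor * x if i == intervention_at else x
                      for i, x in enumerate(attributes))
        return activities, attributes, outcome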
§.§.§ Motivation
These processes and interventions were designed to be simple for clear insights, yet representative of real-world processes by incorporating their main challenges. Since PresPM concerns actionable decisions, we can reduce sub-processes that do not contain any decision points (and are not a branch of a parallel structure with another branch containing such a decision point) into one event (e.g., the subprocess “R-V” collapses into event “A” in Fig. <ref>), thus significantly shrinking the process model. In our experiments, the interventions only change event attributes but in reality, they may alter the control-flow as well. That would not change the CI and RL algorithms. Moreover, when all control-flow variations starting from a given decision point (after event “W” in Fig. <ref>) merge together in one location/activity later in the process model (“Z”) without containing any further intermediate decision points, then they can be reduced to one event (“B”) as well. The value of this event's attribute will vary according to which decision was made and which control-flow variant was followed earlier.
Both processes have a strong stochastic component to reflect the uncertainty accompanying real-life processes. The values of the three activities and their attributes in the first process are sampled from probability distributions, whereas the activities in the second process are governed by the given structure, with the attributes and case variables sampled from probability distributions as well. A real-life decision-maker is not only confronted with stochasticity, but the information available to make decisions may also differ between cases. Our synthetic processes incorporate this aspect as well: as long as no "A" has appeared in the first process, or the second process has not yet passed through its "B" or "C" branch, it cannot be known for sure whether intervening will be beneficial or detrimental.
As in many real-world processes, the outcomes of both processes will only be known at their conclusion. Including intermediary rewards or penalties, however, would not significantly alter the CI or RL algorithms.
Our experiments investigate binary actions (interventions). This simplification allows for clearer insights without loss of generality. As direct CI is generally not suited to sequences of actions, we further simplified by opting for one-off actions (timed interventions) to permit a CI-RL comparison; the RL method, however, can be extended to sequential or continuous actions without modification. In combination with the use of synthetic data, the small state space resulting from a narrow action width and shallow action depth also makes calculating perfect policies practical.
§.§ Experimental setup
Even though increasingly performant next-event prediction algorithms exist (e.g., <cit.>), the indirect CI approach inevitably compounds the errors of two successive prediction models. The direct CI approach circumvents the next event/suffix prediction stage. The simplicity of working with one model favors the direct approach, and we used it in our experiments. We use one NN to predict process outcomes for CI; the intervention (Boolean) is part of its inputs, and the batch size is 1,024. RL is achieved with a standard Q-learning architecture with a 1,024-transition memory for stabilization (experience replay: <cit.>). Transition samples are retained in the memory according to the first-in-first-out (FIFO) principle. A penalty of 100 is applied for intervening more than once. An NN predicts Q for both possible actions ("intervention" and "non-intervention") at every encountered state (prefix). For a balanced comparison, the same NN architecture is used for both CI and RL. Our NNs have two LSTM and two dense layers, as displayed in Fig. <ref>.
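A minimal sketch of this shared architecture and the FIFO replay memory is given below; PyTorch is used only for illustration, and layer sizes and other hyperparameters not stated in the paper are assumptions.

    import torch
    import torch.nn as nn
    from collections import deque

    class QNet(nn.Module):
        # Sketch of the shared architecture: two LSTM layers over the padded prefix
        # sequence followed by two dense layers. For RL, the two outputs are Q-values
        # for "intervention" and "non-intervention"; for CI, a single-output variant
        # predicts the process outcome with the intervention flag as an extra input.
        def __init__(self, n_features, hidden=32, n_actions=2):
            super().__init__()
            self.lstm = nn.LSTM(n_features, hidden, num_layers=2, batch_first=True)
            self.head = nn.Sequential(nn.Linear(hidden, 32), nn.ReLU(),
                                      nn.Linear(32, n_actions))

        def forward(self, prefix_seq):            # shape: (batch, seq_len, n_features)
            out, _ = self.lstm(prefix_seq)
            return self.head(out[:, -1, :])       # Q(s, a) for both actions

    replay_memory = deque(maxlen=1024)            # FIFO experience replay, as in the text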
For the CI learning phase, an RCT dataset of 10,000 samples is generated. This largely exceeds both processes' state space size and should, therefore, offset CI's offline handicap. For RL, data are generated on the fly. The test set consists of 1,000 samples for which all counterfactuals are computed (feasible thanks to the relatively simple processes and the binary one-off action design). The data generated by the synthetic processes are preprocessed as follows: the activity levels are one-hot encoded; the outcomes, attributes, and case variables (second process) are standardized. For CI, the intervention decision (1 or 0) is concatenated with the other event features. For every case sample, we build a sequence (sequence length = total process length) for every prefix, using padding to complete the sequence for ongoing process instances. We thus arrive at a two-dimensional data structure that is fed into the models' input layer. For the second process, the case variable enters the models separately after the LSTM layers.
Every experiment is carried out five times, and learning is stopped using an early-stopping algorithm for both methods. A policy based on CI requires a threshold, which we set to the value that maximizes the ITE score on a 20% validation set. Uplift <cit.> is the metric used to evaluate the results. It is the difference between the process outcomes of implementing the policy and not intervening at all, cumulated over the complete test set. The experimental settings are summarized in Table <ref>.
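Since the synthetic test set contains all counterfactuals, the uplift can be computed exactly; a small sketch with hypothetical field names is:

    def uplift(test_cases, policy):
        # Uplift sketch: cumulative outcome under the learned policy minus the
        # cumulative outcome when never intervening, summed over the test set.
        # `case.outcome_if(k)` is a hypothetical accessor returning the (counter-)
        # factual outcome when intervening at event k (None = no intervention);
        # `policy(case)` returns the chosen intervention point or None.
        total = 0.0
        for case in test_cases:
            chosen = policy(case)
            total += case.outcome_if(chosen) - case.outcome_if(None)
        return total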
§.§ Results
We summarize our experimental results in Table <ref>. RL clearly outperforms CI for both processes: the mean scores are significantly higher. The standard deviations of the RL scores are much lower, making RL by far the more robust method. Most or all of this outperformance can be attributed to RL's innately superior ability to find the optimal policy (see Section <ref>). The fact that online RL permits exploring all parts of the state space plays virtually no role here, as the CI training sets in our experiments contain the complete state space as well. Were this not the case, the observed CI-RL divergence would certainly widen.
The precise knowledge of the (stochastic) synthetic generative processes enables computing perfect policies. This is done by drawing the complete state space in tree form and then calculating the best policy (intervene/don't intervene) from the leaves (512 or 720 for our processes) back to the prefixes of length one, always assuming no intervention happened before. Table <ref> shows that RL comes to within 3% of the perfect policy results for both processes (some stochasticity is normal). The CI policy constitutes a substantial improvement over the RCT data-gathering policy that originally created the dataset as well, albeit to a lesser extent than RL.
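A sketch of this backward induction is shown below; the generative environment interface (`is_complete`, `outcome`, `successors`) is hypothetical and stands in for the known synthetic process definitions.

    def perfect_value(prefix, intervened, env):
        # Backward-induction sketch over the explicit state-space tree of one of
        # the synthetic processes. `env` is a hypothetical generative model with
        # `is_complete(prefix)`, `outcome(prefix)` (the prefix encodes whether and
        # where an intervention happened), and `successors(prefix, intervene_now)`
        # returning (probability, next_prefix) pairs. Interventions are one-off.
        if env.is_complete(prefix):
            return env.outcome(prefix)
        wait = sum(p * perfect_value(nxt, intervened, env)
                   for p, nxt in env.successors(prefix, intervene_now=False))
        if intervened:
            return wait                       # no second intervention allowed
        act = sum(p * perfect_value(nxt, True, env)
                  for p, nxt in env.successors(prefix, intervene_now=True))
        return max(wait, act)                 # perfect policy: best of the two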
Having set both RL's memory and CI's batch size to 1,024, one optimization step of the NN involves the same number of samples for both methods. Since every RL transition (except for the first 1,023 ones) was followed by an NN optimization step, we can directly compare the number of RL transitions to the number of CI epochs. Table <ref> shows that RL's computational requirements are two orders of magnitude higher than those for CI.
§ DISCUSSION
In this section, we discuss the suitability of CI and RL for PresPM and show why RL outperformed CI in our experiments. We also address the issues of RL's online requirement, reward specification, and inefficiency.
§.§.§ Causal inference
Learning counterfactuals and treatment effects is at the core of CI. The sequential aspect of processes, however, poses a problem: The decision to not treat at a certain time in a running process does not preclude treatments later on in the process. For any given prefix in our experiments, direct CI relied on a predictive model to estimate the process outcomes for both intervention and non-intervention. This is problematic in the latter case: The predictive model cannot discern the optimal path from that prefix, and will instead consider the outcomes for all encountered treatments under the data-gathering policy that produced the relevant samples in the training set, as illustrated in the simplified example in Figure <ref>.
Direct CI, therefore, only operates safely on problems without any action depth (“fixed” or “irrelevant”), and will become increasingly suboptimal when moving to real processes with action depths “once” or “multiple”. The action width for CI realistically comprises “binary” and “multi-class” treatments. CI cannot handle permanently-running processes. Thresholds are sub-optimal compromises and products of optimization algorithms themselves. Dependencies between processes, e.g., when resources (space, manpower) are limited or processes interact with each other, cannot be incorporated in the CI framework. Because of these deficits, optimal policies are theoretically out of CI's reach, as confirmed in our experiments. Nevertheless, CI policy results are still better than those that the data-gathering policy yields.
As with all other predictive models used for prescriptive or decision-making purposes, feedback loops <cit.> risk deteriorating results: implementing the CI policy will progressively shift the real-life data distribution away from the original training data, decaying the models' predictive accuracy. Frequent updates of the CI models would help, but at the same time introduce new bias into the data (a new data-gathering policy). However, with a sufficient degree of randomness in the decisions taken (as in RCTs, and similar to exploration in RL), this iterative approach (in the limit, online CI) would neutralize the feedback loops.
§.§.§ Reinforcement learning
RL has many theoretical advantages over CI. It does not require a prediction model and can rely on observed outcomes. RL is entirely generic: Theoretically, it can deal with any action width or depth as well as with continuous processes. Next best activity prediction, which represents the ultimate action width and depth, requires no change to the RL algorithms we used for timed process interventions. RL models are very flexible: Constraints, rewards, and penalties can be added at liberty to avoid detrimental or unacceptable actions, pursue secondary goals, etc. With online RL, agents can freely interact with their environment, and dependencies between processes can be taken into account if the processes are treated concurrently by one model. Exploration in online RL theoretically visits the complete state-space (all possible prefixes). Given sufficient exploration, online RL policies will automatically adapt to a changing environment (concept drift). Proven theorems even show that online Q-learning algorithms converge given enough time. Both online and offline RL, however, are known to be inefficient, requiring many transitions to converge to the optimal policy, as demonstrated by our experiments.
The max operator over the Q-values (see Figure <ref>) explains RL's outperformance of direct CI with equal data access. For every prefix, the learned Q-values represent the expected outcomes for intervention and non-intervention respectively, assuming a (calculated) perfect policy thereafter, whereas the ITEs in CI represent the difference between expected outcomes, each of which depends on the sample distribution induced by the data-gathering policy and on the loss function. Note, however, that with an online CI approach (with real-time updating after every finished process observed) and allowing exploration, this data-gathering policy would converge to the optimal policy as well, thus practically obliterating the differences between CI and RL.
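Written out in generic notation (not taken verbatim from the paper), the contrast for a prefix s is

ITE(s) ≈ E_π_b[Y | s, a=1] - E_π_b[Y | s, a=0]   versus   Q*(s,1) - Q*(s,0),

where π_b is the data-gathering (behavior) policy that determines which outcomes Y are observed after s, while Q*(s,a) = E[r - c(a) + max_a' Q*(s',a')] assumes optimal behavior for the remainder of the case.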
§.§.§ Real-world implementation
Despite its power and versatility, RL suffers from some important drawbacks. Yet, many of these are not entirely unique to RL but apply to CI and PresPM in general as well. The first such drawback is the risk of committing errors during real-time implementation. This implementation risk, however, can be reduced to that of the data-gathering policy (the de facto policy in place upon which the CI dataset is based) by inserting constraints, which the RL algorithm can easily accommodate. Rules mined earlier with a process discovery algorithm can frame the agent's actions, as prescribed by <cit.>. Even human intuition can be inserted by allowing the human agent to overrule the RL algorithm's proposed action. In other words, implementing RL should be no riskier than either the original, existing policy or implementing CI. The latter two policies occasionally make or propose costly mistakes too. If necessary, a two-stage offline-online approach can further reduce the risk: offline RL based on simulations or predictive models can serve as an initialization to an online RL agent that then continues to learn by acting in the real world, thereby closing the reality gap.
A similar argument can be made for the related challenge of reward specification. The desired outcome for a process to be optimized will not always be one-dimensional: the primary goal may be to reduce throughput time, however without compromising employees' well-being and product quality. Moreover, such goals may shift over time or may need adjustment in the face of concept drift. Again, this challenge is not unique to RL and exists regardless of the solution method, if any. When possible, these goals will be consolidated into one metric for use by both CI and RL. If not, RL can be extended to include constraints on undesired actions and/or rewards/penalties that promote secondary goals. As before, the human agent can also overrule the RL algorithm's suggestions.
RL is inefficient: it is data-hungry and slow to converge. Our experiments were based on relatively short and simple processes. Longer and more complicated processes (greater action width/depth) will have an exponentially larger state space, suggesting that RL will no longer be a viable option where CI could still be. Yet, in deep RL, the Q-table is replaced by an NN, which to some extent removes the need to visit the complete state space, as unseen state-action (prefix-action) pairs can be interpolated. Working examples of this are video games with very large state spaces and autonomous driving with a near-infinite state space. The more similar regions the state space contains, the better this will work. Additionally, limiting the number of actions to the most relevant ones with causal discovery techniques (the first CI component in <cit.>) may be a worthwhile investment before starting with RL (and CI as well). PM has an arsenal of causal discovery techniques that can be used to this end <cit.>.
Online RL implies working with event streams rather than event logs. Streaming is an active field of PM research. <cit.> identifies different types of incomplete cases in the observation window. This is not an issue for online RL, as it always starts from a case's beginning and updates itself after observing the reward for every new event it encounters (transition) until the case is complete.
The process independence assumption underlying both methods warrants caution when generalizing the results from our experiments. The larger the dependencies between processes and the larger the share of processes being optimized, the higher the risk of mutual process interference jeopardizing the expected results.
§ CONCLUSIONS AND FUTURE WORK
We conducted experiments on timed process interventions with synthetic data that render genuine online RL and the comparison to CI possible and allow for an accurate evaluation of the results. We showed how the theoretical problems burdening CI can be overcome by online RL, contingent upon the strong assumption of real-time implementation of the learned policies in the real world. In our experiments, online RL produced better and more robust policies than CI. In fact, RL nearly reached the theoretically optimal solution, which can be inferred because of the use of synthetic data. The used RL methods can also be applied without any modification to similar problems with greater action width and depth (next best activity prediction in the limit). When computational effort and/or the real-time implementation requirement preclude online RL, CI may be a viable alternative in scenarios where the dataset covers a large and evenly distributed share of the state space and action depth is limited.
With this work, we contributed to the nascent field of PresPM. We chose a simplified setting to gain some important insights. Reaching PresPM maturity will depend on exploring other, perhaps more sophisticated, approaches in ever more realistic settings. Further extensions of this work are, therefore, plentiful. First, an initial investigation of the merits of loss attenuation <cit.>, uncertainty <cit.>, and future individual intervention effects <cit.> revealed promising insights but should be corroborated. Future work could also shed light on the conditions under which RL remains efficient enough on realistic problems with sequences of multiple possible actions (greater action width and depth). Further complications could include the introduction of outcome noise, uncertain inputs, and concept drift. Since the rewards of processes often only happen (or become known) at their conclusion, MC learning (as in <cit.>) could be a faster alternative to the classical Q-learning we used. The FIFO principle for the online RL transition samples memory could be replaced by more sophisticated sampling techniques such as those described in <cit.> for PredPM, or in <cit.> for experience replay in RL. Leveraging uncertainty estimates could be another option to improve sampling. RL does adapt to concept drift, but only very slowly. As a consequence, RL is not suited to deal with disruptions (e.g., caused by a pandemic). Digital twins for processes or organizations have been proposed as a solution <cit.> and are an avenue for future research. Instead of including the complete state space in the data for CI, as we did, it could be investigated to what extent CI would fall further behind online RL when the dataset only covers part of the state space (and contains selection bias caused by the data-gathering policy). For applications where online RL is not an option, more research on offline RL is recommended. Lifting the assumption of process independence would move the problem setting even closer to reality and would pose additional challenges: process independence is a requirement to satisfy the stable unit treatment value assumption (SUTVA) <cit.> in CI. The combinatorial explosion caused by interdependent processes is challenging for RL as well and possibly demands additional heuristics (e.g., <cit.>). In the domain of CI, adaptations to the standard algorithms could lead to more capabilities in terms of action depth (possibly with a discounting mechanism as used in RL). Indirect CI's theoretical ability to handle sequences of actions could be weighed against the accuracy loss due to the compounding of two predictive models. Combating selection bias in processes (as in <cit.> for an environment without exogenous actors) beckons more research as well. Causal Reinforcement Learning <cit.> enriches RL with the first component of CI (causal relationship detection) by means of causal graphs. It requires either a priori causal graphs (which are rarely available in PresPM) or deriving them from the observational data under a set of assumptions (e.g., <cit.> and <cit.> for business processes). In our discussion about action width and depth (Subsection <ref>), we did not elaborate on how the decision points and the set of possible actions available to the agents at those points are determined. Next to human expertise, both PM and other methods should be reviewed from a PresPM perspective.
entry_id: http://arxiv.org/abs/2306.08771v1
published: 2023-06-14 22:38:08
title: Interplay between numerical-relativity and black hole perturbation theory in the intermediate-mass-ratio regime
authors: Tousif Islam
primary_category: gr-qc
categories: gr-qc
We investigate the interplay between numerical relativity (NR) and point-particle black hole perturbation theory (ppBHPT) for quasi-circular non-spinning binary black holes in the intermediate mass ratio regime: 7 ≤ q ≤ 128 (where q:=m_1/m_2 is the mass ratio of the binary with m_1 and m_2 being the mass of the primary and secondary black hole respectively).
Initially, we conduct a comprehensive comparison between the dominant (ℓ,m) = (2,2) mode of the gravitational radiation obtained from state-of-the-art NR simulations, ppBHPT waveforms, and waveforms generated from the recently developed NR-informed ppBHPT surrogate model BHPTNRSur1dq1e4. This surrogate model employs a simple but non-trivial rescaling technique known as the α-β scaling to effectively match ppBHPT waveforms to NR in the comparable mass ratio regime.
Subsequently, we analyze the amplitude and frequency differences between NR and ppBHPT waveforms to investigate the non-linearities, beyond adiabatic evolution, that are present during the merger stage of the binary evolution and propose fitting functions to describe these differences in terms of both the mass ratio and the symmetric mass ratio. Finally, we assess the performance of the α-β scaling technique in the intermediate mass ratio regime.
Interplay between numerical-relativity and black hole perturbation theory in the intermediate-mass-ratio regime
Tousif Islam
July 31, 2023
§ INTRODUCTION
The detection and characterization of gravitational wave (GW) signals from binary black hole (BBH) mergers require computationally efficient yet accurate multi-modal waveform models. The development of such models relies heavily on accurate numerical simulations of BBH mergers. The most accurate way to simulate a BBH merger is by solving the Einstein equations using numerical relativity (NR). Over the past two decades, NR pipelines have been refined for BBH systems with comparable masses (1 ≤ q ≤ 10) <cit.>. The availability of a substantial number of NR simulations in the comparable mass ratio regime has facilitated the development of computationally efficient and accurate approximate models, such as reduced-order surrogate models based on NR data <cit.>, or semi-analytical models calibrated against NR simulations <cit.>.
On the other hand, extreme mass ratio binaries (i.e. q →∞) can, in principle, be modelled accurately with point particle black hole perturbation theory (ppBHPT) where the smaller black hole is treated as a point particle orbiting the larger black hole in a curved space-time background. Substantial progress has been made over the past two decades in simulating BBH mergers accurately in this regime <cit.>.
However, it is the intermediate mass ratio regime (10 ≤ q ≤ 100) that still presents significant challenges for performing accurate simulations of BBH mergers. NR simulations for binaries in this mass ratio range become exceedingly computationally expensive for a variety of reasons. On the other hand, as the binary becomes less asymmetric, the assumptions of the ppBHPT framework begin to break down. Therefore, the intermediate mass ratio regime provides a unique opportunity to compare and contrast results obtained from NR and ppBHPT framework. In particular, Refs. <cit.> studied this regime to gain insights into the limitations and accuracy of both approaches as well as to further the understanding about the dynamics of the binary.
Recently, a significant milestone has been reached with the development of the BHPTNRSur1dq1e4 surrogate model <cit.>. This model, based on the ppBHPT framework, accurately predicts waveforms for comparable- to large-mass-ratio binaries. Through a simple but non-trivial calibration process, the ppBHPT waveforms are rescaled to achieve remarkable agreement with NR data in the comparable mass ratio regime.
In a parallel effort, Ref. <cit.> has developed a fully relativistic second-order self-force model, which also demonstrates excellent agreement with NR in the comparable mass ratio regime.
Additionally, recent advancements in NR techniques have pushed the boundaries of BBH simulations, enabling the simulations of BBH mergers with mass ratios up to q=128 for various spin configurations <cit.>. These new NR simulations provide valuable data that can be compared with results obtained from perturbative techniques such as the ppBHPT framework (including the surrogate model) and the second-order self-force model.
Building upon these recent advances, in this paper, we provide a detailed comparison between state-of-the-art NR simulations and perturbative results in the intermediate mass ratio regime.
We begin by providing an executive summary of the waveform data obtained from NR and point particle black hole perturbation theory (ppBHPT) in Section <ref>. In Section <ref>, we conduct a comprehensive comparison of the dominant (ℓ,m) = (2,2) mode of the waveforms. We examine the phenomenology of the amplitudes and frequencies of different modes in Section <ref> and discuss the differences in peak times of various spherical harmonic modes of the gravitational radiation in Section <ref>.
To understand the non-linearities during the merger stage, we analyze the amplitude differences between NR and ppBHPT waveforms and propose fitting functions to describe these differences in Section <ref>. Additionally, we evaluate the effectiveness of the α-β scaling technique in the intermediate mass ratio regime. We provide similar fits for the frequency differences in Section <ref>.
Finally, in Section <ref>, we discuss the implications and lessons learned for both NR and perturbative techniques.
§ GRAVITATIONAL WAVEFORMS IN THE INTERMEDIATE MASS RATIO REGIME
Gravitational radiation from the merger of a binary black hole is typically written as a superposition of -2 spin-weighted spherical harmonic modes with indices (ℓ,m):
h(t,θ,ϕ;λ) = ∑_ℓ=2^∞∑_m=-ℓ^ℓ h^ℓ m(t;λ) _-2Y_ℓ m(θ,ϕ) ,
where λ is the set of intrinsic parameters (such as the masses and spins of the binary) describing the system, θ is the polar angle, and ϕ is the azimuthal angle. In this paper, h(t,θ,ϕ;λ) is obtained from both NR simulations and different flavors of perturbation theory frameworks.
Numerical relativity data :
We utilize the latest NR simulations of high mass ratio binaries performed by the RIT group <cit.>. These simulations encompass mass ratios up to q = 128 and spins ranging from -0.85 to 0.85. The NR waveforms obtained from these simulations include modes up to ℓ=6. However, due to numerical noise, we restrict our analysis to modes up to ℓ=4 only. Additionally, for the current study, we focus exclusively on non-spinning cases.
Perturbation theory waveforms :
We generate ppBHPT waveforms using the BHPTNRSur1dq1e4 model <cit.>, a recently developed surrogate waveform model that combines numerical relativity (NR) information with perturbation theory. This model can be accessed through the <cit.> or the <cit.> package from the <cit.>.
The model is trained on waveform data generated by the ppBHPT framework for non-spinning binaries with mass ratios ranging from q=2.5 to q=10^4. The full inspiral-merger-ringdown (IMR) ppBHPT waveform training data is computed using a time-domain Teukolsky equation solver, which has been extensively described in the literature <cit.>. The model includes a total of 50 spherical harmonic modes up to ℓ=10.
The model calibrates ppBHPT waveforms to NR data in the comparable mass ratio regime (2.5 ≤ q ≤ 10) up to ℓ=5 employing a simple but non-trivial scaling called the α-β scaling <cit.>:
h^ℓ,m_full, α_ℓ, β(t; q) ∼ α_ℓ h^ℓ,m_pp(tβ; q),
where α_ℓ and β are determined by minimizing the L_2-norm between the NR and rescaled ppBHPT waveforms. After this α-β calibration step, the ppBHPT waveforms exhibit remarkable agreement with the NR waveforms (with an error of ∼ 10^-3 for the (2,2) mode). For instance, when compared to recent SXS and RIT NR simulations with mass ratios ranging from q=15 to q=32, the dominant quadrupolar mode of BHPTNRSur1dq1e4 agrees with NR with errors smaller than ≈ 10^-3.
Using <cit.>, we then generate both ppBHPT and rescaled ppBHPT waveforms for any mass ratio within the training range of the model.
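As an illustration, applying the α-β scaling of Eq. (<ref>) to a tabulated ppBHPT mode can be sketched as follows; this is a simple interpolation-based sketch in which α_ℓ and β are taken as given and edge handling is ignored.

    import numpy as np

    def alpha_beta_rescale(t, h_pp, alpha_l, beta):
        # Sketch of the α-β scaling: the complex ppBHPT mode h_pp, sampled on the
        # time grid t (in units of M), is evaluated at t*β and scaled by α_ℓ.
        # Real and imaginary parts are interpolated separately.
        h_re = np.interp(t * beta, t, h_pp.real)
        h_im = np.interp(t * beta, t, h_pp.imag)
        return alpha_l * (h_re + 1j * h_im)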
§ COMPARISON BETWEEN NR AND PERTURBATION WAVEFORMS
Currently available high mass ratio NR simulations are of varying lengths, often spanning only 1500M (where M is the total mass of the binary). This limited duration frequently poses a challenge when conducting a detailed comparison with existing waveform models. Additionally, many of the high mass ratio simulations exhibit residual eccentricity, further complicating waveform-level comparisons. Nonetheless, in Ref. <cit.>, an interesting comparison is presented between RIT NR data and the waveform model for mass ratios q=[15,32]. While a comprehensive comparison of the full inspiral-merger-ringdown waveform is challenging due to the residual eccentricity in these simulations, they can still be utilized to comprehend and compare waveform phenomenology during the merger-ringdown stage, where the binary significantly circularizes. Hence, this paper primarily focuses on comparing the phenomenology of the NR data with the waveforms obtained from perturbation theory models.
§.§ Comparison of (ℓ,m)=(2,2) mode waveforms
To begin, we decompose each spherical harmonics mode h^ℓ m(t) into its amplitude A^ℓ m(t) and phase ϕ^ℓ m components, represented as h^ℓ m(t) = A^ℓ m(t) e^iϕ^ℓ m.
For simplicity, we first focus on comparing the dominant (ℓ,m)=(2,2) mode during the final ∼ 1000M of the binary's evolution (see Fig. <ref>). To facilitate this comparison, we align the multi-modal NR data (shown as solid black lines; labelled as `RIT-NR'), ppBHPT waveforms (shown as solid yellow lines; labelled as `BHPT'), and rescaled ppBHPT waveforms (represented by dashed red lines; labelled as `BHPTNRSur1dq1e4') on the same time grid t=[-1000,100]M, where t=0M corresponds to the peak of the (ℓ,m)=(2,2) mode amplitude. Additionally, we adjust the phases such that the orbital phase is zero at the beginning of the waveforms i.e. at t=-1000M.
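The alignment described here can be sketched as below; the grid spacing is an assumption, and the phase is simply rotated to zero at the start of the grid.

    import numpy as np

    def align_22_mode(t, h22, t_grid=None):
        # Alignment sketch: shift time so the peak of |h22| sits at t=0, interpolate
        # the complex mode onto a common grid spanning [-1000, 100]M, and rotate the
        # phase so that it vanishes at the start of the grid.
        if t_grid is None:
            t_grid = np.arange(-1000.0, 100.0, 0.5)
        t_shifted = t - t[np.argmax(np.abs(h22))]
        h = (np.interp(t_grid, t_shifted, h22.real)
             + 1j * np.interp(t_grid, t_shifted, h22.imag))
        return t_grid, h * np.exp(-1j * np.angle(h[0]))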
We observe that the rescaled ppBHPT waveforms exhibit a close match to the NR data for mass ratios ranging from q=7 to q=32, while the ppBHPT waveforms display differences in both amplitude and phase evolution when compared to NR data (top four rows of Fig. <ref>). However, for mass ratios q≥64, the NR data shows notable eccentricities, resulting in significant dephasing between the ppBHPT waveforms and NR, as well as between the rescaled ppBHPT waveforms and NR (bottom three rows of Fig. <ref>).
Furthermore, it is important to mention that the ppBHPT and rescaled ppBHPT waveforms become increasingly similar to each other for mass ratios q≥64. This suggests that the higher-order corrections to the linear ppBHPT results are relatively small in this regime.
In order to analyze the discrepancies between these waveforms, we calculate the relative differences in amplitude Δ A_22/A_22^ NR and the absolute differences in phase Δϕ_22 for both ppBHPT and rescaled ppBHPT waveforms compared to the NR data. Figure <ref> illustrates the errors in amplitudes and phases during the late inspiral-merger-ringdown phase of the waveforms.
For mass ratios in the range of q=7 to q=16, it is clear that the differences in both amplitudes and phases between the rescaled ppBHPT waveforms and the NR waveforms are significantly smaller than those observed between the ppBHPT waveforms and NR. This suggests that the linear ppBHPT waveforms are insufficient in accurately matching the NR waveforms within this mass ratio range.
However, as we move towards higher mass ratios (i.e. q≥32), the differences in Δ A_22/A_22^ NR and Δϕ_22 between the ppBHPT and rescaled ppBHPT waveforms diminish gradually. This indicates that the linear description of the binary evolution becomes increasingly accurate as the mass ratio increases.
For mass ratios q≥64, both Δ A_22/A_22^ NR and Δϕ_22 exhibit distinct features that strongly suggest the presence of residual eccentricities in the NR simulations.
§.§ Comparison of the amplitudes and frequencies of different modes
We now examine the amplitudes and instantaneous frequencies of three representative modes [(ℓ,m)]=[(2,2),(3,3),(4,4)] for mass ratios ranging from q=7 to q=128 (see Fig. <ref>). For any given mode, the instantaneous frequency ω_ℓ,m is given by the time derivative of the phase:
ω_ℓ,m = dϕ_ℓ,m/dt.
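Numerically, the amplitude, phase, and instantaneous frequency of a mode can be extracted with a short sketch like the following (the phase is unwrapped first, and the derivative is taken by finite differences):

    import numpy as np

    def amp_phase_freq(t, h_lm):
        # Decompose one complex spherical-harmonic mode into amplitude, unwrapped
        # phase, and instantaneous frequency ω = dϕ/dt (numerical derivative).
        amp = np.abs(h_lm)
        phase = np.unwrap(np.angle(h_lm))
        omega = np.gradient(phase, t)
        return amp, phase, omega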
To mitigate the impact of residual eccentricities in the comparisons, we focus on the merger-ringdown stage of the binary (-100M ≤ t ≤ 100M), where circularization is expected to be nearly complete.
For mass ratios 7 ≤ q ≤ 32, noticeable differences are observed between ppBHPT and NR amplitudes, while the rescaled ppBHPT amplitudes closely match the NR values across all mass ratios. Moreover, as anticipated, the differences in amplitudes between ppBHPT and NR (and rescaled ppBHPT) decrease as the mass ratio increases. For q≥ 64, ppBHPT and rescaled ppBHPT produce nearly identical amplitudes.
Interestingly, the frequencies of the individual modes computed from the ppBHPT waveforms and NR exhibit remarkable agreement for all mass ratios. It is important to note that, due to numerical noise in the NR data, the frequencies display unphysical oscillations after the merger, particularly for mass ratios q≥15.
§.§ Comparison of the peak times
Next, we determine the times t^peak_ℓ,m corresponding to the maximum amplitude A^peak_ℓ,m for each spherical harmonic mode. We then calculate the relative time of the peaks with respect to the dominant (2,2) mode as:
δ t^peak_ℓ,m = t^peak_ℓ,m - t^peak_2,2,
where t_2,2^ peak is the time at which the (2,2) mode amplitude reaches its maximum. We show the relative peak times δ t^ peak_ℓ,m in the NR data for a set of three representative modes [(ℓ,m)]=[(2,1),(3,3),(4,4)] along with the relative peak times for the same modes in the ppBHPT and rescaled ppBHPT waveforms in Fig. <ref>.
For comparison, we include the relative peak times of these modes from one of the state-of-the-art effective-one-body models for aligned-spin binaries. This model includes four higher-order modes in addition to the dominant quadrupolar mode of radiation, (ℓ,m)=[(2,±1),(3,±3),(4,±4),(5,±5)], and is calibrated to a set of 141 NR waveforms with mass ratios q≤10 and spins χ_1,2≤ 0.99.
Interestingly, the relative peak times δ t^ peak_ℓ,m within these waveforms exhibit significant inconsistencies with each other for almost all mass ratio values.
The inconsistencies in the relative peak times δ t^ peak_ℓ,m indicate that there is still room for improvement in accurately predicting the timing of different modes during the merger-ringdown phases of binary black hole systems. Further developments in waveform modeling techniques and more comprehensive calibration against NR simulations may help reduce the discrepancies.
We further notice that the differences in peak times between the ppBHPT and rescaled ppBHPT waveforms are very small. This can be attributed to the dominant influence of the inspiral phase in the α-β calibration procedure. Accurate modelling of the peak times in the rescaled ppBHPT waveforms (i.e., in BHPTNRSur1dq1e4) may require further tuning in the merger-ringdown part, as done in Ref. <cit.>.
§ INTERPLAY BETWEEN NR AND PERTURBATION THEORY
To gain a deeper understanding of the interaction between the NR and ppBHPT waveforms, we now examine their disparities in terms of amplitude and frequencies (as illustrated in Figure <ref>) across different mass ratios. It should be noted that the rescaled ppBHPT waveforms are not utilized in this particular study.
§.§ Amplitude differences
We first investigate the differences between NR and ppBHPT in amplitude across various mass ratios. Specifically, we replicate and expand upon the analysis presented in Refs. <cit.>. Following the methodology outlined in Refs. <cit.>, we define the amplitude differences as:
δ A_ℓ,m = | A_ℓ,m^BHPT - A_ℓ,m^NR |,
where A_ℓ,m^BHPT represents the amplitude of the ppBHPT waveform.
We observe that the amplitude differences for the q=10 and q=15 cases near the merger exhibit the following behavior (Fig. <ref>):
δ A_22^q=10 ∼ 1.92 × δ A_22^q=15
∼ 1.44^1.92 × δ A_22^q=15,
where 1.44 is the ratio of the symmetric mass ratios ν. This approximate scaling differs slightly from the one reported in Ref. <cit.>, which suggested δ A_22^q=10 ∼ 1.44^2.3 × δ A_22^q=15. Nevertheless, both results indicate the presence of nonlinear effects (beyond adiabatic evolution) in the amplitude differences between the NR and ppBHPT waveforms, as these differences scale nonlinearly with the symmetric mass ratio ν.
Likewise, we find that the amplitude differences for the q=10 and q=32 cases near the merger can be characterized as follows:
δ A_22^q=10 ∼ 7.7 ×δ A_22^q=32
∼ 2.81^1.98×δ A_22^q=32,
where 2.81 is the ratio of the symmetric mass ratios ν.
Similarly, the amplitude differences for the q=15 and q=32 cases near the merger obey:
δ A_22^q=15 ∼ 3.97 ×δ A_22^q=32
∼ 1.99^1.96×δ A_22^q=32,
where 1.99 is the ratio of the symmetric mass ratios ν.
Next, we perform fitting for the amplitude differences δ A_ℓ,m of three representative modes (ℓ,m)=[(2,2),(3,3),(4,4)] at their respective peaks as a function of ν (Fig. <ref>). The obtained relations are as follows:
δ A_2,2∼ 6.07 ×ν^3.06
δ A_3,3∼ 1.53 ×ν^2.90
δ A_4,4∼ 0.43 ×ν^2.84.
Next, we repeat the fitting in terms of 1/q (Fig. <ref>) and find:
δ A_2,2∼ 1.86 / q^2.81
δ A_3,3∼ 0.51 / q^2.66
δ A_4,4∼ 0.15 / q^2.61.
These fits not only provide a simple scaling for the differences in maximum amplitudes between ppBHPT and NR waveforms, but also serve as further confirmation of the presence of non-linearity during the merger stage of the binary evolution. Additionally, we observe that the non-linearity is more pronounced in the (2,2) mode compared to higher order modes.
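As an illustration of how such scalings can be extracted, the sketch below fits a power law δA = a·ν^p (and, equivalently, a/q^p) with scipy; the input values are synthetic placeholders generated from the quoted (2,2) fit, not the measured differences.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(x, a, p):
    """Generic power law a * x**p used for both the nu and 1/q fits."""
    return a * x ** p

# Placeholder data: delta A_22 values generated from the quoted fit 6.07 * nu^3.06.
q = np.array([7.0, 10.0, 15.0, 32.0, 64.0, 128.0])
nu = q / (1.0 + q) ** 2                 # symmetric mass ratio
dA22 = 6.07 * nu ** 3.06                # synthetic, for illustration only

(a_nu, p_nu), _ = curve_fit(power_law, nu, dA22, p0=(1.0, 3.0))
print(f"delta A_22 ~ {a_nu:.2f} * nu^{p_nu:.2f}")

(a_q, p_q), _ = curve_fit(power_law, 1.0 / q, dA22, p0=(1.0, 3.0))
print(f"delta A_22 ~ {a_q:.2f} / q^{p_q:.2f}")
```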
Finally, we calculate A_NR/A_BHPT, the ratio of the NR and ppBHPT amplitudes, for all mass ratios. This ratio is expected to correspond roughly to the α parameter in Eq. (<ref>) after multiplying by the transformation factor 1/(1+1/q) between the mass scales m_1 and M. In Figure <ref>, we present both the amplitude ratio A_NR/A_BHPT and the α values obtained from the model. We observe that as the mass ratio increases, the agreement between these two quantities improves, suggesting that the α-β scaling works reasonably well even beyond the comparable mass ratio regime where it was originally constructed. The differences observed for q≤ 15 can be attributed to numerical noise and the presence of residual eccentricities in the NR data.
§.§ Frequency differences
Following the methodology described in Section <ref> regarding the amplitudes, we define the frequency differences as:
δω_ℓ,m = |ω_ℓ,m^BHPT - ω_ℓ,m^NR|,
where ω_ℓ,m^BHPT and ω_ℓ,m^NR represent the instantaneous frequencies of the ppBHPT and NR waveforms, respectively.
We calculate δω_ℓ,m at the merger, indicated by the maximum amplitude in the (2,2) mode, for the (2,2), (3,3) and (4,4) modes for mass ratios q=[7,15,32,64,128] (Fig. <ref>). Subsequently, we conduct a fitting analysis for the frequency differences δω_2,2 in terms of 1/q and obtain the following relationship (Fig. <ref>):
δω_2,2∼ 0.047 / q^0.73.
Next, we repeat the fitting in terms of ν and find:
δω_2,2∼ 0.063 ν^0.78.
It is important to acknowledge that due to numerical noise present in the NR data, as observed in Figure <ref>, it becomes increasingly difficult to obtain accurate estimates of the instantaneous frequencies from NR for mass ratios q≥16. Therefore, we refrain from attempting to fit the frequency differences for the (3,3) and (4,4) modes in this scenario.
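For reference, the instantaneous frequencies entering δω_ℓ,m can be computed from the phase of the complex modes as sketched below; the uniform time grid and the convention h_ℓ,m(t) = A(t) e^{-iφ(t)} are assumptions made for illustration.

```python
import numpy as np

def instantaneous_frequency(t, h):
    """omega(t) = |d phi/dt| of a complex mode h(t) on a uniform time grid."""
    phase = np.unwrap(np.angle(h))       # continuous phase
    return np.abs(np.gradient(phase, t))

def delta_omega_at_merger(t, h_bhpt, h_nr, h22_nr):
    """|omega_BHPT - omega_NR| at the merger, defined by the (2,2) NR amplitude peak."""
    i_merger = np.argmax(np.abs(h22_nr))
    w_bhpt = instantaneous_frequency(t, h_bhpt)
    w_nr = instantaneous_frequency(t, h_nr)
    return abs(w_bhpt[i_merger] - w_nr[i_merger])
```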
§ DISCUSSIONS & CONCLUSION
In this work, we have conducted a detailed comparison between state-of-art NR simulations and perturbative results in the intermediate mass ratio regime. In particular, we use both ppBHPT waveforms and rescaled ppBHPT waveforms from the surrogate model.
We first provide a comprehensive comparison of the dominant (ℓ,m)=(2,2) mode of the gravitational radiation obtained from NR and ppBHPT techniques. We observe that the rescaled ppBHPT waveforms exhibit a close match to the NR data for mass ratios ranging from q=7 to q=32, while the ppBHPT waveforms display differences in both amplitude and phase evolution when compared to NR data. For mass ratios q≥32, residual eccentricities and numerical noise in the NR data make such comparisons challenging (Section <ref>; Fig. <ref> and Fig. <ref>). We further observe that as the mass ratio increases, the differences between NR data and ppBHPT results reduce (Section <ref>; Fig. <ref>).
Furthermore, the excellent match between NR amplitudes and scaled ppBHPT amplitudes indicates the effectiveness of the α-β scaling in the intermediate mass ratio regime (Section <ref>; Fig. <ref>).
However, the differences in peak times of different modes between NR, ppBHPT and highlight the intricacies of the merger stage, revealing insights into the non-linear dynamics of the binary evolution (Section <ref>; Fig. <ref>).
Next, we examine the disparities between NR and ppBHPT waveforms in terms of amplitude and frequencies to gain a comprehensive understanding of the intricate relationship between these two frameworks. We analyze the amplitude differences δ A_ℓ,m between NR and ppBHPT waveforms for different modes to investigate the non-linearities present during the merger stage of the binary evolution and propose fitting functions to describe these amplitude differences in terms of both q and ν. The proposed fitting functions for amplitude differences between NR and ppBHPT waveforms offer a valuable tool for understanding and quantifying these non-linearities (Section <ref>; Figs. <ref>, <ref>, <ref>). Finally, we provide similar fits for the frequency differences in the (2,2) mode in Section <ref>.
This study highlights the potential of ppBHPT and surrogate models, such as , in efficiently and accurately predicting waveforms in the intermediate mass ratio regime. It opens up new opportunities for exploring the non-linearities during the merger stage of the binary evolution and for developing reliable modeling strategies to accurately determine the peak times of each mode. Our findings underscore the importance of improving calibration methods for ppBHPT-based surrogate models and enhancing eccentricity reduction algorithms in NR simulations. These advancements will contribute to the development of more accurate and efficient waveform models, enabling better detection and characterization of GW signals in the intermediate mass ratio regime.
T.I. would like to thank Gaurav Khanna and Scott Field for helpful discussion. T.I. is supported by NSF Grants No. PHY-1806665 and DMS-1912716. This work is performed on CARNiE at the Center for Scientific Computing and Visualization Research (CSCVR) of UMassD, which is supported by the ONR/DURIP Grant No. N00014181255, the UMass-URI UNITY supercomputer supported by the Massachusetts Green High Performance Computing Center (MGHPCC) and ORNL SUMMIT under allocation AST166.
|
http://arxiv.org/abs/2306.03575v1
|
20230606105027
|
Quantifying physical insights cooperatively with exhaustive search for Bayesian spectroscopy of X-ray photoelectron spectra
|
[
"Hiroyuki Kumazoe",
"Kazunori Iwamitsu",
"Masaki Imamura",
"Kazutoshi Takahashi",
"Yoh-ichi Mototake",
"Masato Okada",
"Ichiro Akai"
] |
cond-mat.mtrl-sci
|
[
"cond-mat.mtrl-sci"
] |
§ INTRODUCTION
X-ray core-level photoemission spectroscopy (XPS) is a popular and
powerful tool for investigating the elemental composition of
materials <cit.>. Especially,
due to the short escape depth of photoelectrons excited by soft
X-rays, XPS has been applied to various surface and interface
analysis such as film thickness <cit.>,
chemical states at surfaces or
interfaces <cit.>,
and atomic distortion at the interface <cit.>.
The elemental information measured by XPS is extracted by line-shape
analysis of the obtained spectrum. Although the regression analysis
of XPS spectra is non-linear, the least-squares method has been
used to minimize the fitting error until now. With the least-squares
method, it is difficult to incorporate known physical-property
information, and the accuracy of the estimates cannot be evaluated.
In addition, statistical guarantees for the solution cannot be
obtained, because the obtained solution depends on the initial
search values. In this paper, to solve these problems, we apply
Bayesian spectroscopy to analyze the XPS spectra of one monolayer (1ML), two
monolayers (2ML), and quasi-freestanding 2ML (qfs-2ML) graphene layers
grown
on SiC substrates and attempt to extract changes in the
chemical states of the graphene layers by chemical modification at the
interface.
High-quality, large-scale graphene can be formed by thermal decomposition
of SiC at elevated temperatures. It is known that a buffer layer is
formed between graphene and the SiC substrate and has a structure
equivalent to graphene; the buffer layer adheres strongly to
the SiC substrate, while dangling bonds remain on the SiC substrate due
to the lattice mismatch <cit.>. Riedl et al.
proposed that graphene C 1s spectra have four peaks: in addition to SiC and
graphene (Gr), two additional components S1 and
S2 <cit.>.
S1 comes from the C atoms bound to one Si atom on the surface of SiC(0001)
and to three C atoms in the buffer layer. S2 comes from the remaining
sp^2-bonded C atoms in the buffer layer.
The carbon layer on this buffer layer
exhibits the properties of graphene. To modify the bonding
at the interface, various atoms, such as
H <cit.>,
O <cit.>,
Ge <cit.>,
Si <cit.>,
Au <cit.>
and Bi <cit.> have been intercalated between
the buffer layer and SiC. When atoms are intercalated beneath the buffer
layer, dangling bonds of Si are terminated by intercalated atoms to break
covalent bonds between the buffer layer and SiC. Since the charge transfer
into the graphene layer is also modified by the intercalation, the charge
neutrality level can be controlled around the Dirac point artificially.
However, the transport properties are often degraded after the interface
modification. To enhance the transport property of graphene, precise
control and characterization of the chemical states should be performed at
the intercalated interface. The bonding at the modified interface would
differ depending on the intercalated atoms and on the treatment conditions.
In such cases, the reliable structure model for the intercalated interface
would often be lacking. Thus, an alternative method is strongly required
to decompose core-level photoemission spectra even in the case where a
reliable structure model or a presumable number of components is not
known. However, we have been forced to analyze the lineshape with the
number of components and the constraint parameters assumed to validate a
plausible structure and the experimental setting. Thus, there might be a
concern of containing subjective and empirical arbitrariness in the fitting
results in the XPS spectra. In addition, if the line-shape analysis was
treated as a black-box tool due to its complexity, that would yield
incorrect results. Therefore, a reproducible and reliable approach without
arbitrariness is required for XPS spectral analysis.
Recently, the result of Bayesian spectral deconvolution for core-level XPS
has been reported to realize automatic analysis of core-level XPS spectra
by incorporating the effective Hamiltonian into the stochastic model of
spectral deconvolution <cit.>. 3d core-level XPS
spectra of La_2O_3 and CeO_2 were well
reproduced and it was confirmed that the effective Hamiltonians selected by
model selection were in good agreement with the results obtained from a
conventional study. Furthermore, the uncertainty of its estimated values,
which are difficult to obtain with the conventional analysis method, and
the reason why the effective Hamiltonian selected were also revealed by
spectral deconvolution based on Bayesian
inference <cit.>.
However, the difficulty in performing such Bayesian inference is
in designing
the prior probability distribution. Since prior probability
distributions can restrict the range of parameters, if the constrain
condition for their parameters is known in advance, it can be incorporated
into the prior probability distribution. Otherwise, a distribution
that does not affect parameter estimation, such as a wide uniform distribution,
is used as the prior probability distribution. In that case, the
posterior probability distribution may exhibit multimodality due to exchange
among spectral components, which results in poor parameter estimation
accuracy.
However, high precision spectral analysis can be achieved even
when scientists have no prior knowledge of the data by the following
scenario.
They first analyze the spectral data without assuming prior knowledge.
By reviewing the results of this analysis, they quantify physical constraints that were previously unnoticed or unquantifiable.
Utilizing such knowledge, for example, by constraining the range of regression parameters, they can achieve the analysis of spectral data with the desired accuracy.
The purpose of this study is to propose a framework for incorporating this natural flow of spectral data analysis conducted by scientists into Bayesian spectroscopy <cit.>.
In this article, core-level spectra in both pristine and oxygen-intercalated
graphenes grown on SiC(0001) have been analyzed by Bayesian
spectroscopy
with constraints on the values of the spectral parameters based on knowledge
of the physical properties.
§ SAMPLES
Graphene layers were grown on n-doped 6H-SiC(0001) using
the face-to-face method <cit.>,
where two SiC substrates were placed one on top of the other with
a gap of 20 µm using Ta foils. After sufficient outgassing
at approximately 800 °C and annealing at
1200 °C to provide a well-ordered Si-terminated surface,
samples were annealed at 1350 °C and
1400 °C to obtain 1ML and 2ML graphene, respectively.
Qfs-2ML was obtained by annealing the 1ML sample for 10 min at
550 °C in air. All measurements were performed on the
beamline BL13 at the SAGA Light
Source <cit.>.
The core-level and valence-band photoemission spectra were measured using
a photon energy of 680 and 40 eV, respectively. The Fermi energy and
the energy resolution were confirmed by measurements for the Fermi
level of the gold reference. The overall energy resolutions were
estimated to be 0.69 and 0.04 eV for core-level and valence band
measurements, respectively.
Figure <ref> shows the XPS spectra measured
for 1ML, 2ML, and qfs-2ML graphene samples.
Vertical dashed lines
indicate the energy positions reported in a previous
work <cit.> for the 1ML and 2ML samples, which
are shown in red for the SiC substrate, blue for graphene (Gr),
green for S1 and magenta for S2, respectively. Although the peak
positions are shifted about 0.2 eV comparing the vertical dashed
lines with the peak positions observed in Fig. <ref>,
this energy shift is considered to be due to differences in
measurement conditions such as temperature and SiC doping
concentration because the binding energy scale was calibrated with
respect to the Fermi level of the gold reference.
In the 1ML sample depicted in Fig. <ref>(a), the SiC
substrate (283.70 eV <cit.>) gives the strongest
peak at 283.8 eV, and the peak structures of the graphene
(284.67 eV <cit.>) and of the two components
S1 (285.04 eV <cit.>) and S2
(285.53 eV <cit.>) for buffer layer atoms
are observed
as broad peaks from 284.5 to 286.0 eV without separation. In the 2ML
sample shown in Fig. <ref>(b), the SiC substrate
(283.66 eV <cit.>) gives a second intense peak at
283.3 eV, which is almost the same as that of the 1ML sample. However, the
spectral structure in the high-binding-energy region changes markedly,
giving a dominant peak at 284.8 eV. This peak is considered
to be chiefly attributed to graphene (284.56 eV <cit.>),
but it also includes the buffer-layer components
S1 (285.01 eV <cit.>) and S2
(285.50 eV <cit.>), because it has a shoulder structure
on the high-energy side.
The buffer layer at the interface
between SiC and graphenes consists of a carbon layer
in a graphene-like honeycomb arrangement that bonds covalently to the
Si-terminated substrate partially. For the 1ML and 2ML samples, not all
Si atoms can bond to carbon atoms due to the different lattice constants
between SiC and graphene and due to the 30^∘ rotation angle of
the carbon layer relative to the SiC substrate. The covalent bonds break
the hexagonal network of π orbitals but preserve the
σ-bonds <cit.>,
and it is not known whether the Si dangling bonds remain in the
qfs-2ML sample after annealing. The XPS spectrum in the qfs-2ML sample is similar to
that in the 2ML sample, as seen in Fig. <ref>, in which they
have two main peaks and a shoulder in the higher energy
peak. Thus, it is considered that the two main peaks at 283.1 and
284.0 eV come from the SiC substrate and graphene. However, in the
qfs-2ML sample, it is controversial whether both buffer-layer components
(S1 and S2) are present in the shoulder of the 284.0 eV peak. Furthermore, one can
find that the peak positions for the qfs-2ML sample are shifted to the
lower binding energy side by about 0.7 eV in Fig. <ref>,
although the energy axis was calibrated, and the physical reason for
this shift remains elusive.
§ FRAMEWORK
Figure <ref> is the schematic diagram of our proposed
framework. To realize the spectral decomposition with high
precision, we performed the following two-step analysis. First,
we performed an exhaustive search using uniform distributions
for the prior probability of parameters
θ_K on the respective spectral components of
Gr, S1, S2, and SiC in order to explore θ-space.
We obtained posterior distributions of θ_K
which exhibit that the actual
θ-space might be occupied by parameters.
Here, we design the prior probability distribution on
the basis of the posterior distributions obtained by the
exhaustive search for the analysis in the next step.
When prior information is available, we can incorporate it into
prior probability distributions.
Next, we analyzed the target data, D using
the designed prior probability distribution for estimating
θ_K.
In Fig. <ref>, the XPS spectrum D contains three peaks
p_i (i = 1, 2, 3) <cit.>.
However, when a uniform distribution is applied as the prior probability of
the energy E, the width of the red component becomes large and the
obtained solution is not physically reasonable for an XPS spectrum,
as shown in the left of Fig. <ref>. Precise estimation of the binding
energy E is also inhibited, because the posterior distribution of the
red component obtained by the exhaustive search becomes bimodal,
with one mode distributed near the blue component, as shown
in Fig. <ref>. Here, we instead design the
prior probability distribution of E.
When we have constraint conditions for E, for example that each
energy difference is greater than ΔE (> 0),
we can incorporate the physical property that the red
component is not located around the blue component.
Thus, we can configure the prior probability as demonstrated in
Fig. <ref> and obtain reasonable solutions
with high precision.
§.§ Bayesian spectroscopy
Bayesian spectroscopy is a spectral decomposition analysis method
that incorporates a Bayesian inference framework. Let
D = { (x_i, y_i) | i = 1, ⋯, N }
be a data set of an XPS spectrum and f_K(x_i; θ_K)
be a phenomenological model function to describe D,
where K is a subscript for model identification. Based on D,
Bayesian spectroscopy evaluates the posterior probability distributions
of the material-specific parameters θ_K in the model
function to be estimated.
From Bayes' theorem <cit.>, the
posterior probability distribution P(θ_K|D,b,K)
is given by Eq. (<ref>).
P(θ_K|D,b,K) = P(D|θ_K,b,K) P(θ_K|b,K) / P(D|b,K),
where b is the quasi-inverse temperature <cit.>
defined as an inverse variance b = σ_noise^-2 with a
standard deviation σ_noise of the superimposed noise in
y={y_i|i=1,⋯,N}. When the noises in y
are distributed independently in i according to a normal distribution
with zero mean and variance b^-1, the likelihood term
P(D|θ_K,b,K) is given by
P(D|θ_K, b) = (b/2π)^N/2 exp[ - b N ℰ_K(θ_K) ]
with an error function ℰ_K(θ_K)
defined in Eq. (<ref>).
ℰ_K(θ_K) = (1/2N) ∑_i=1^N [ y_i - f_K(x_i; θ_K) ]^2 .
A Bayesian partition function
Z(b,K) <cit.> is obtained by
marginalizing the numerator of Eq. (<ref>)
over θ_K:
Z(b,K) ≡ P(D|b,K) = (b/2π)^N/2 ∫ exp[ - b N ℰ_K(θ_K) ] P(θ_K|b,K) dθ_K ,
and a Bayesian free energy <cit.>
(BFE) is defined as F(b,K)=-lnZ(b,K). By minimizing
F(b,K), the estimation of the noise intensity
σ̂_noise superimposed on the measured
data D and the model selection of the most
appropriate function
f_K̂(x_i;θ_K̂) to explain
D can be achieved simultaneously by
Eq. (<ref>).
{b̂, K̂} = argmin_b,K F(b, K) ,
σ̂_noise = b̂^-1/2 .
The posterior probability distribution of material-specific
parameters θ_K̂ is sampled using a
replica exchange Monte Carlo (RXMC) <cit.>
method according to Eq. (<ref>).
P(θ_K̂|D,b̂,K̂) ∝ exp[ - b̂ N ℰ_K̂(θ_K̂) ] P(θ_K̂|b̂,K̂) ,
and the maximum a posteriori probability (MAP) estimate in
Eq. (<ref>) is used for the optimal parameters
θ̂_K̂ to explain the measured data
D.
θ̂_K̂ = argmax_θ_K̂ P(θ_K̂|D,b̂,K̂) .
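As a minimal illustration of these quantities, the error function and the unnormalized log-posterior at a fixed quasi-inverse temperature b can be evaluated as follows; the model function and the prior are placeholders to be supplied by the user.

```python
import numpy as np

def error_function(theta, x, y, model):
    """E_K(theta) = (1/2N) * sum_i [y_i - f_K(x_i; theta)]^2."""
    residual = y - model(x, theta)
    return 0.5 * np.mean(residual ** 2)

def log_posterior(theta, x, y, model, b, log_prior):
    """Unnormalized log P(theta | D, b, K) = -b*N*E_K(theta) + log P(theta | b, K).

    `model(x, theta)` and `log_prior(theta)` are user-supplied callables (assumed);
    the normalization (N/2) log(b/2pi) is dropped since it is constant in theta.
    """
    N = len(y)
    return -b * N * error_function(theta, x, y, model) + log_prior(theta)

# The estimated noise level follows from the optimal quasi-inverse temperature:
# sigma_noise_hat = b_hat ** -0.5
```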
In spectral decomposition, when there is no prior knowledge of
material-specific parameters, its prior probability
P(θ_K|b,K) in Eq. (<ref>)
should not be restricted. However, we have to search the
high-dimensional parameter space extensively, and
rejections <cit.> of candidate
parameters prepared in Monte Carlo steps and
exchange <cit.> of spectral components
during sampling become frequent, making sampling convergence
difficult. On the other hand, when one wants to decompose spectra
of specific materials and quantitatively evaluate changes in
physical properties associated with changes in the material
interface, as is the case in this paper, we can make positive
efforts to incorporate the knowledge of material properties into
the prior probabilities in Bayesian spectroscopy.
§.§ Phenomenological model for the XPS spectrum
A phenomenological model f_K(x_i;θ_K) in
Eq. (<ref>) is used for the spectral
decomposition, which is a sum of peaks with a pseudo-Voigt
function <cit.> and the Shirley
background signal <cit.>:
f_K(x_i; θ_K) = ∑_k=1^K p(x_i; θ_k^peak) + (h/C) ∫_-∞^x_i ∑_k=1^K p(ξ; θ_k^peak) dξ ,
where K is the number of peaks in the XPS spectrum and
θ_K = { θ_1^peak, ⋯, θ_K^peak, h } .
p(x; θ^peak) is a pseudo-Voigt
function in Eq. (<ref>), which is a
linear combination of Lorentzian L(x) and Gaussian G(x)
shapes, with their intensity A, binding energy E,
spectral width w at full width at half maximum (FWHM)
and a mixing ratio η.
p(x; A, E, w, η) = η · L(x; A, E, w) + (1 - η) · G(x; A, E, w) ,
L(x; A, E, w) = A (2/π) w / [ 4(x - E)^2 + w^2 ] ,
G(x; A, E, w) = A √(4 ln 2 / (π w^2)) exp[ - 4 ln 2 ( (x - E)/w )^2 ] .
C in Eq. (<ref>) is a normalization
constant for the Shirley background given by
∑_k=1^K A_k and h is the height of the background
signal in x_i →∞ where the intensity of the peaks
must be zero and only the background signal remains.
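A direct transcription of this model (pseudo-Voigt peaks plus a Shirley-type background built from their cumulative integral) might look as follows; the parameter packing and the ascending, uniformly spaced energy grid are assumptions of this sketch.

```python
import numpy as np

def pseudo_voigt(x, A, E, w, eta):
    """Pseudo-Voigt peak: eta * Lorentzian + (1 - eta) * Gaussian, area A, FWHM w."""
    lorentz = A * (2.0 / np.pi) * w / (4.0 * (x - E) ** 2 + w ** 2)
    gauss = (A * np.sqrt(4.0 * np.log(2.0) / (np.pi * w ** 2))
             * np.exp(-4.0 * np.log(2.0) * ((x - E) / w) ** 2))
    return eta * lorentz + (1.0 - eta) * gauss

def model_spectrum(x, peaks, h):
    """f_K(x): sum of pseudo-Voigt peaks plus a Shirley-type background of height h.

    x     : binding-energy grid, assumed ascending and uniformly spaced.
    peaks : list of (A, E, w, eta) tuples, one per spectral component (assumed packing).
    """
    total_peaks = sum(pseudo_voigt(x, *p) for p in peaks)
    C = sum(p[0] for p in peaks)              # normalization: sum of peak areas
    dx = x[1] - x[0]
    background = (h / C) * np.cumsum(total_peaks) * dx   # tends to h at the high-x end
    return total_peaks + background
```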
§.§ Computational details
For RXMC sampling, we prepared 100 replicas with quasi-inverse
temperatures b_ℓ, b_1 = 0 and a geometric sequence
b_ℓ for 2≤ℓ≤100 with b_2=10^-4 and
b_100=10. In all analyses in this paper, the b̂
obtained in Eq. (<ref>) are 0.75 – 1.53,
which falls within this b_ℓ range. On the other hand,
b_2 should be chosen so that the state exchange of the set
of parameters θ_K between the replica at
b_1 (=0) is guaranteed. In this study, we set a
sufficiently small b_2 and confirmed that the average
exchange ratios between these replicas are more than 90 %
in all analyses.
RXMC sampling was carried out in 1,000,000 steps after a
sufficient burn-in phase of 600,000 steps. We used the
auto-tuning algorithm <cit.>
for the step widths to achieve the mean acceptance ratio of
70 %.
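The replica setup can be mimicked with a geometric quasi-inverse-temperature ladder and the standard exchange criterion between adjacent replicas; the sketch below omits the within-replica proposals and the auto-tuning of step widths.

```python
import numpy as np

rng = np.random.default_rng(0)

# b_1 = 0 plus a geometric sequence from 1e-4 to 10 for the remaining 99 replicas.
L = 100
b = np.concatenate(([0.0], np.geomspace(1.0e-4, 10.0, L - 1)))

def exchange_sweep(b, energies, states):
    """One sweep of exchange moves between adjacent replicas.

    energies[l] : N * E_K(theta) of the state currently held by replica l.
    states[l]   : the corresponding parameter set (any object).
    """
    for l in range(len(b) - 1):
        # Acceptance probability min(1, exp[(b_{l+1} - b_l)(E_{l+1} - E_l)]).
        delta = (b[l + 1] - b[l]) * (energies[l + 1] - energies[l])
        if delta >= 0.0 or rng.random() < np.exp(delta):
            energies[l], energies[l + 1] = energies[l + 1], energies[l]
            states[l], states[l + 1] = states[l + 1], states[l]
    return energies, states
```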
§ INCORPORATION OF PHYSICAL PROPERTIES INTO PRIOR PROBABILITY
In addition to model selection <cit.>,
the advantage of Bayesian spectroscopy is that
appropriate knowledge of physical properties can be
incorporated into the prior probability
P(θ_K|b,K) in Eq. (<ref>).
In XPS, broad peaks sometimes arise from multiple
components. Therefore, to perform a well-founded physical
analysis with high precision, the restriction in the
prior probability P(θ_K|b,K) for
θ_K is especially effective.
§.§ Prior probability for binding energy
To associate each spectral component with each physical
origin while suppressing component exchange during RXMC
sampling, we set different prior probabilities for the
binding energies E_k to distinguish the respective
components (k=1:SiC, 2:Gr, 3:S1, 4:S2).
To accomplish this task by merging the results of the
exhaustive search and the knowledge of the previous
study, we divide the posterior probability distributions
of E_k obtained by the exhaustive search into
four monomodal ones with reference to the previous
study <cit.> and evaluate the means
μ_k and standard deviations σ_k of the
respective monomodal posterior probability distributions.
For the prior probability of E_k, we use normal
distributions 𝒩(E_k;m_k,s_k) in
Table <ref>, where m_k and s_k are
their mean and standard deviation and are determined as
m_k=μ_k and s_k=5σ_k. Although prior
probabilities are used here with standard deviations
larger than those evaluated in the exhaustive search,
this setting avoids imposing excessive restrictions and
allows the search for the parameter space of E_k
explored in the exhaustive search.
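The prior design step itself is a small computation: split the exhaustive-search posterior samples of E_k into monomodal parts and widen their standard deviations by a factor of five. A sketch with placeholder samples follows.

```python
import numpy as np

rng = np.random.default_rng(0)

def design_energy_priors(posterior_samples, widen=5.0):
    """Normal priors N(m_k, s_k) with m_k = mu_k and s_k = widen * sigma_k.

    posterior_samples : dict mapping a component name to a 1-D array of E_k samples,
                        already split into monomodal parts (assumed).
    """
    return {name: (float(np.mean(s)), widen * float(np.std(s)))
            for name, s in posterior_samples.items()}

# Placeholder samples, for illustration only (not the actual exhaustive-search output).
samples = {"SiC": rng.normal(283.8, 0.02, 5000),
           "Gr":  rng.normal(284.7, 0.07, 5000),
           "S1":  rng.normal(285.0, 0.05, 5000),
           "S2":  rng.normal(285.5, 0.06, 5000)}
print(design_energy_priors(samples))
```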
For 1ML and 2ML samples, we use the same prior
probabilities in the respective components of SiC, Gr,
and S1 as seen in Table <ref> since the
binding energies of these components are approximately
equal in 1ML and 2ML samples, as confirmed in
Figs. <ref>(a) and (b). In the case of the
2ML sample in Fig. <ref>(b), the S2
component appears as a shoulder structure associated
with the strong components S1 and Gr. So, although
there is a previous study <cit.>
showing that the binding energy of the S2 component
does not differ significantly between the 1ML and 2ML
samples, in the 2ML sample, the prior probability of the
binding energy for the S2 component is designed as
follows: we consider the difference ΔE
(ΔE=E_S2-E_S1) in the binding
energy of the S2 component from the S1 component and
introduce a prior probability of a normal distribution
for ΔE as shown in Table <ref>, where
the mean and standard deviation are
ΔE_1ML of the 1ML sample and 0.07 eV,
respectively.
In the case of the qfs-2ML sample, although the entire
XPS spectrum in Fig. <ref>(c) shifts to the
lower energy side than those of the 1ML and 2ML samples,
we can determine the prior probabilities of the binding
energies for the SiC and Gr components as seen in
Table <ref> based on the exhaustive search
in the same manner. However, the posterior probability
distributions of the binding energies for S1 and S2
obtained in the exhaustive search are broad,
and whether the S1 component remains in the qfs-2ML
sample is controversial. Therefore, we prepare a prior
probability of the normal distribution for S1 and S2
that has the same mean value as the S1 component in
the 1ML and 2ML samples as shown in Table <ref>,
and a large standard deviation of 0.20 eV is used to
detect the energy shift of the components S1 and S2 in
the qfs-2ML sample.
§.§ Prior probabilities of other parameters
In the exhaustive search, the posterior distributions
of the spectral width w are distributed in the range
of less than about 4 eV and had a mode at about 0.9 eV.
Thus, we use the same gamma distribution;
𝒢(x;α,β) = β^α x^(α-1) e^(-βx) / Γ(α)
for the prior probability distribution of the nonnegative
w in all components of all data, where Γ(α)
is a gamma function, α=2.138 and
β=1.265 eV^-1, respectively. Therefore, the
mode value of 𝒢(x;α,β) is 0.9 eV and
the range of two standard deviations from the mean covers
4 eV. To decompose the shoulder structure in
the XPS spectrum of the 2ML sample [see
Fig. <ref>(b)] into two components, the binding
energy of S2 was parameterized by ΔE; in addition,
we assume that S1 and S2 have the same spectral width
(w_S1 = w_S2) in the 2ML sample,
since these two buffer components are expected to have
similar lineshapes.
For other parameters in Eq. (<ref>),
we set A>0, 0≤η≤1, h>0 as prior probabilities
using uniform distributions.
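The quoted gamma hyperparameters can be recovered from the two design conditions (mode at 0.9 eV; mean plus two standard deviations covering 4 eV); the sketch below solves these conditions numerically and is meant only to illustrate the reasoning.

```python
import numpy as np
from scipy.optimize import fsolve
from scipy.stats import gamma

def conditions(params, mode=0.9, coverage=4.0):
    """Design conditions for G(x; alpha, beta) on the FWHM w:
    (alpha - 1)/beta = mode  and  alpha/beta + 2*sqrt(alpha)/beta = coverage."""
    alpha, beta = params
    return [(alpha - 1.0) / beta - mode,
            alpha / beta + 2.0 * np.sqrt(alpha) / beta - coverage]

alpha, beta = fsolve(conditions, x0=(2.0, 1.0))
print(f"alpha = {alpha:.3f}, beta = {beta:.3f} eV^-1")   # close to 2.138 and 1.265

# Evaluate the resulting prior density on a width grid if needed.
w = np.linspace(0.0, 5.0, 501)
prior_pdf = gamma.pdf(w, a=alpha, scale=1.0 / beta)
```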
§ RESULTS
§.§ 1ML and 2ML samples
Table <ref> summarizes the results
of Bayesian spectroscopy for the 1ML sample,
where θ̂ and σ_θ
are the MAP estimates and the standard deviations
of P(θ|D, b̂) as measures
of the precision of the estimation.
The colored and black curves in
Fig. <ref>(a) are the spectral
components decomposed and a regression spectrum
by Eq. (<ref>), and Bayesian
spectroscopy can successfully decompose the XPS
spectrum into four components and the background
signal with high reproducibility. The root mean
square deviation (RMSD) of the regression spectrum
is 0.83 in the intensity scale of the XPS signal,
and it is consistent with the noise intensity
σ̂_noise=0.86
(=b̂^-1/2) estimated by the optimal
quasi-inverse temperature b̂=1.36 in
Eq. (<ref>).
Figure <ref>(b) shows the prior and
posterior probability distributions of the binding
energy E_k, in which the light and dark colors mean
the prior and posterior ones, and the ordinate is
on a logarithmic scale. Although the posterior
probability distribution of Gr is as broad as its
prior probability, the posterior probability
distributions become narrower in the components SiC,
S1, and S2, indicating that E_k can be
estimated with high precision. The probability
distributions for S1 and S2 are particularly
noteworthy in Fig. <ref>(b). Although
the prior probability distributions of S1 and S2
shown in light green and light magenta have overlapping
hems, the posterior probability distributions shown
in dark green and dark magenta are unimodal and noticeably
narrower with no overlap, allowing us to decompose
the hump structure into two distinguishable components
with statistical assurance.
Table <ref> also summarizes the results
of Bayesian spectroscopy in the 2ML sample.
The binding energy of S2 in the 2ML sample is
parameterized by the difference ΔE from
the binding energy of S1. The MAP estimate of
ΔE is 0.426 eV and the standard deviation
of its posterior probability distribution is
0.056 eV (<0.07 eV). The binding energy
of S2 shown in Table <ref> is the result
obtained by sampling of E_S1+ΔE.
Figure <ref>(a) also shows the results of
Bayesian spectroscopy in the 2ML sample, and the
regression spectrum indicated by a black curve can
reproduce the measured one well, and the shoulder
structure is explained by the components S1 and S2.
In contrast to the prior probabilities of the
binding energies of SiC, Gr, and S1 shown in light
colors in Fig. <ref>(b), their posterior
probability distributions shown in dark colors are
sharp and can be estimated with high precision.
The prior probability distribution of
E_S2 is not shown in
Fig. <ref>(b); however, since E_S2 is
parameterized by E_S1 and ΔE,
and assuming these two parameters are independent of each
other, the standard deviation of its implied prior
probability is 0.338 eV (=√(0.33^2+0.07^2)),
which is comparable to that (0.33 eV) of the prior
probability of E_S1. As shown in dark
magenta in Fig. <ref>(b), the standard
deviation of the posterior probability distribution
for E_S2 is sufficiently smaller than
that of its prior. Such highly precise
estimation can be achieved by incorporating the
findings of the 1ML analysis as the energy
difference between S1 and S2.
§.§ Qfs-2ML sample
To estimate whether both the S1 and S2 components
are included in the shoulder structure in
Fig. <ref>(c) for the qfs-2ML sample, we
prepare models with and without the S2 component and
perform model selection. The BFEs of the model
with and without S2 are 357.9 and 364.4, respectively.
As a result, according to Eq. (<ref>),
Bayesian spectroscopy chooses the former model that
includes the S2 component. The difference in these
BFEs is Bayesian statistically apparent, and Bayesian
spectroscopy argues that S2 is required for the XPS
spectrum of the qfs-2ML sample even after accounting
for the superimposed noise intensity
σ̂_noise=1.15 (b̂=0.754).
Table <ref> summarizes the results
of Bayesian spectroscopy in the qfs-2ML sample.
The color and black curves in Fig. <ref>(a)
are the decomposed spectral components and a regression
spectrum for the qfs-2ML sample, respectively. The
black curve shows high reproducibility with the
measured one, and the estimated b̂ is
consistent with the RMSD of the regression.
Figure <ref>(b) shows the same decomposition on an enlarged ordinate
scale. Although we can confirm the weak S2 component
shown in magenta, it is found that the contribution of S2
is quite small. Bayesian spectroscopy searches for a
globally optimal solution that reproduces the entire
data using a model function, and it is possible to
extract even components whose peak intensity is lower
than the noise intensity <cit.>,
such as the S2 component in this case.
However, we have to evaluate the integrated intensity
A_k to discuss the presence of the buffer layer S2,
and Figure <ref> shows the posterior probability
distributions of A_S1 and A_S2 for
components S1 and S2, in which the MAP estimates
Â_S1 and Â_S2 are
indicated by vertical lines. Bayesian spectroscopy, in fact,
selects the model that includes the S2 component and gives
a non-zero MAP estimate (Â_S2 = 0.5) as
shown in Table <ref>. However, the posterior
probability of A_S2 is distributed near zero
within the non-negative value constraint, and its standard
deviation is as large as σ_A_S2 = 4.3
(>Â_S2). The results of this analysis
indicate that
the annealing procedure in air terminates the dangling bonds of Si atoms under the buffer layer with oxygen atoms,
causing the S2 component to disappear.
It also means that there are almost no dangling bonds
between the SiC substrate and graphene in the qfs-2ML
sample.
It is reported that an ideal oxidation procedure can form a Si_2O_5 adlayer without dangling bonds on the Si-face of the SiC substrate <cit.>. All of the dangling bonds and the covalent bonding between the buffer layer and the SiC substrate would be terminated if our annealing procedure fully oxidized the SiC substrate.
Â_S2 in Table <ref>
is approximately 1/24th of Â_S1 of the S1
component, and is considered to arise from a small amount of the
buffer layer that remained after annealing.
§ DISCUSSION
According to the previous
study <cit.>, the S1
component results from the C atoms in the buffer layer that are bound to Si atoms of the SiC surface, which account for approximately one third of the total C atoms in the buffer layer,
and the S2 component results from the remaining C atoms in the buffer layer.
In a
previous study <cit.>, 0.31 has been
reported for the integrated intensity of the S1 component
relative to the sum of the S1 and S2 components in both
1ML and 2ML samples. We obtain results consistent
with these previous
studies <cit.>.
We evaluate the posterior probability distributions of
the ratio A_S1/(A_S1+A_S2)
from the sampling histories of A_S1 and
A_S2, and obtain MAP estimates of 0.339 for the
1ML sample and 0.315 for the 2ML sample, respectively.
The standard deviations of their posterior probability
distributions are 0.032 and 0.020, respectively, and the
values of previous studies are included within these
ranges.
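Computing the ratio posterior from the sampling histories is a pointwise transformation of the A_S1 and A_S2 chains; a sketch with placeholder chains is shown below.

```python
import numpy as np

rng = np.random.default_rng(1)

def ratio_posterior(a_s1, a_s2, bins=200):
    """MAP estimate (from a histogram) and standard deviation of A_S1/(A_S1 + A_S2)."""
    r = a_s1 / (a_s1 + a_s2)
    hist, edges = np.histogram(r, bins=bins, density=True)
    i = np.argmax(hist)
    return 0.5 * (edges[i] + edges[i + 1]), float(np.std(r))

# Placeholder RXMC chains, for illustration only.
a1 = rng.normal(10.0, 0.8, 100_000).clip(min=1e-6)
a2 = rng.normal(20.0, 1.5, 100_000).clip(min=1e-6)
print(ratio_posterior(a1, a2))   # roughly (0.33, 0.02) for these synthetic chains
```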
We can estimate the effective length of the covalent bond,
which gives S1, between C and Si based on this
intensity ratio , assuming that the
intensities of the XPS signal of the components S1 and S2
are equivalent and using a structural model. A portion of
the structural model at the interface of the graphene and
SiC(0001) substrate is shown in the inset of
Fig. <ref>. We consider a rectangular area of
34.045×58.971 Å^2 on the SiC surface
tiled with (√(3)×√(3)) R30^∘ unit,
in which the unit cell of graphene (brown honeycomb)
rotates with 30^∘, and 242 Si atoms and 768 C atoms
are interfaced in this rectangular area. The distance between
the SiC substrate and graphene is
2.3 Å <cit.>.
When the effective length of the covalent bond formation is extended to increase the number of covalent bonds with Si within the effective length, the ratio of the covalent bonds to all C atoms in the buffer layer increases as indicated by the blue curve in Fig. <ref>.
The horizontal lines and
their error bars are the MAP estimates
for the intensity ratio of the S1 components to the sum of S1
and S2 components and the standard deviations of the posterior
probability distributions of . Taking into
account the accuracy of the estimation of ,
the measured S1 intensity ratios can be
understood by covalently bonding to Si at distances less than
2.37 Å, shown in a light blue area in Fig. <ref>.
This is also consistent with a previous
study <cit.>.
The primary advantage of Bayesian spectroscopy is that it
provides estimates through statistical sampling in the
parameter space. However, when multiple spectral components
are expected to be contained in the tail part of a strong
spectral structure and in a hump structure, as analyzed in
this paper, the results of simple statistical sampling are
deceptive.
Our proposed method illustrated in Fig. <ref>
solves this problem and makes it possible to estimate
the material-specific parameters with high accuracy.
Consequently, the
standard deviations of the posterior probability distribution
for the binding energy, shown in
Tables <ref>–<ref> as measures of the
accuracy of the MAP estimates, are on the
order of 100 meV even for the S2 component in the most severe
case of the qfs-2ML sample, and are less than several tens of meV
for the others, demonstrating an extremely accurate estimation of the
binding energies.
§ CONCLUSION
Using Bayesian spectroscopy, we have investigated the XPS
spectra of graphene samples
in the C 1s level. To perform the highly precise spectral
decomposition of the XPS, we first performed an exhaustive search
to explore the parameter space and then performed
spectral decomposition by Bayesian spectroscopy using designed
prior probabilities that incorporate the information based
on physical properties and the insights gained from the
posterior probability distributions evaluated in the
exhaustive search. We have succeeded in decomposing the
XPS spectra of the 1ML, 2ML, and qfs-2ML samples into the
graphene, SiC, and buffer-layer components, and the binding
energies have been estimated with high precision, of the order of meV.
From the estimated intensities, we obtained the intensity ratio
of the buffer-layer components S1 and S2 together with its standard
deviation, which is consistent with previous studies.
We performed model selection to determine the number of
components in the XPS spectrum of the qfs-2ML sample.
The four-peak model is selected; however, the contribution
of S2 is quite small, which is probably due
to the heterogeneity of the qfs-2ML graphene sample.
These results demonstrate that the appropriate design of
the prior probability distributions based on the
information of physical properties and the insights
gained from the exhaustive search is effective to
perform the spectral decomposition with high precision.
§ ACKNOWLEDGEMENTS
This study was supported by JST, CREST (Grant Nos. JPMJCR1861 and JPMJCR1761), Japan; and NEDO (Grant No. JPNP22100843-0), Japan.
§ AUTHOR CONTRIBUTIONS STATEMENT
All authors contributed to the study conception and design. M.I. and K.T. conducted the experiments, and H.K., K.I. and I.A. analyzed the results. H.K. wrote the first draft of the manuscript, and K.T., Y.M., M.O. and I.A. supervised the project. All authors read and approved the final manuscript.
§ DATA AVAILABILITY STATEMENT
The data that support the findings of this study are available from the corresponding author upon reasonable request.
§ ADDITIONAL INFORMATION
§ COMPETING INTERESTS
The authors declare no competing interests.
|
http://arxiv.org/abs/2306.10122v1
|
20230616181423
|
Multi-Label Meta Weighting for Long-Tailed Dynamic Scene Graph Generation
|
[
"Shuo Chen",
"Yingjun Du",
"Pascal Mettes",
"Cees G. M. Snoek"
] |
cs.CV
|
[
"cs.CV",
"I.2.10"
] |
Multi-Label Meta Weighting for Long-Tailed Dynamic Scene Graph Generation
University of Amsterdam
This paper investigates the problem of scene graph generation in videos with the aim of capturing semantic relations between subjects and objects in the form of ⟨subject, predicate, object⟩ triplets.
Recognizing the predicate between subject and object pairs is imbalanced and multi-label in nature, ranging from ubiquitous interactions such as spatial relationships (in front of) to rare interactions such as twisting. In widely-used benchmarks such as Action Genome and VidOR, the imbalance ratio between the most and least frequent predicates reaches 3,218 and 3,408, respectively, surpassing even benchmarks specifically designed for long-tailed recognition. Due to the long-tailed distributions and label
co-occurrences, recent state-of-the-art methods predominantly focus on the most frequently occurring predicate classes, ignoring those in the long tail.
In this paper, we analyze the limitations of current approaches for scene graph generation in videos and identify a one-to-one correspondence between predicate frequency and recall performance. To make the step towards unbiased scene graph generation in videos, we introduce a multi-label meta-learning framework to deal with the biased predicate distribution.
Our meta-learning framework learns a meta-weight network for each training sample over all possible label losses. We evaluate our approach on the Action Genome and VidOR benchmarks by building upon two current state-of-the-art methods for each benchmark. The experiments demonstrate that the multi-label meta-weight network improves the performance for predicates in the long tail without compromising performance for head classes, resulting in better overall performance and favorable generalizability.
Code: <https://github.com/shanshuo/ML-MWN>.
[500]Computing methodologies Scene understanding
[500]Computing methodologies Activity recognition and understanding
Shuo Chen, Yingjun Du, Pascal Mettes, and Cees G. M. Snoek
§ INTRODUCTION
Scene graph generation in videos focuses on detecting and recognizing relationships between pairs of subjects and objects.
The resulting dynamic scene graph is a directed graph whose nodes are objects with their relationships as edges in a video.
Extracting such graphs from videos constitutes a highly challenging research problem <cit.>, with broad applicability in multimedia and computer vision. Effectively capturing such structural-semantic information boosts downstream tasks such as captioning <cit.>, video retrieval <cit.>, visual question answering <cit.>, and numerous other visual-language tasks.
Current methods place a heavy emphasis on recognizing subject-to-object relationship categories.
A leading approach to date involves extracting multi-modal features for relation instances, followed by either pooling the multi-modal features <cit.> or learning a feature representation <cit.> to feed into the predicate classifier network.
Despite the strong focus on relation recognition, existing methods overlook the extremely long-tailed distribution of predicate classes.
Figure <ref> displays the recall per predicate class from STTran <cit.> and its corresponding occurrences on the Action Genome dataset.
This trend is even more pronounced on the VidOR dataset.
Figure <ref> illustrates the occurrence distribution vs. Recall@50 from Social Fabric <cit.> for the video relation detection task on the VidOR dataset, where a few head predicates dominate all other classes. This phenomenon has not been actively investigated, as the evaluation metrics do not penalize lower scores for predicates in the long tail.
In light of these observations, this paper advocates for the development of scene graph generation methods in videos that effectively handle both common and rare predicates.
We introduce a meta-learning framework to address the long-tailed dynamic scene graph generation problem.
Drawing inspiration from the concept of meta weighting <cit.>, we propose a Multi-Label Meta Weight Network (ML-MWN) to learn meta weights across both examples and classes explicitly.
These meta weights are, in turn, used to steer the downstream loss to optimize the parameters of the predicate classifier.
We adopt a meta-learning framework to optimize the ML-MWN parameters, where we compute each instance's per-class loss in a training batch and obtain a loss matrix.
The loss matrix is fed into our ML-MWN, which outputs a weight matrix, with each row representing the weight vector for an instance's loss vector.
We sample a meta-validation batch and use an unbiased meta-loss to guide the training of ML-MWN.
We adopt the inverse frequency binary cross-entropy loss as the meta-loss.
Finally, we integrate our framework with existing methods to guide the predicate classification.
To evaluate our meta-learning framework, we employ two recent state-of-the-art methods <cit.>, one for the scene graph generation task on the Action Genome dataset and one for video relation detection on the VidOR dataset.
We empirically demonstrate that our approach enhances predicate predictions for these recent methods across various evaluation metrics.
Furthermore, we show that our framework improves the performance of long-tailed predicates without hampering the performance of more common classes.
Our approach is generic and works on top of any scene graph generation method, ensuring broad applicability.
We make the code available on <https://github.com/shanshuo/ML-MWN>.
In summary, our contributions are three-fold:
1. We investigate the long-tail issue in dynamic scene graph generation and analyze the limitations of existing methods.
2. We introduce a multi-label meta-learning framework to address the biased predicate class distribution.
3. We propose a Multi-Label Meta Weight Network (ML-MWN) to explicitly learn a weighting function, which demonstrates strong generalization on two benchmarks when plugged into two existing approaches.
§ RELATED WORKS
Dynamic scene graph generation.
Scene graph generation was first pioneered in <cit.> for image retrieval, and the task quickly gained further traction, as seen in <cit.>.
Recently, a number of papers have identified the long-tailed distribution in image scene graphs and focused on generating unbiased scene graphs <cit.>. We seek to bring the same problem to light in the video domain.
Ji <cit.> firstly extended scene graph generation to videos and introduced the Action Genome dataset.
A wide range of works have since proposed solutions to the problem <cit.>.
Recently, Li <cit.> proposed an anticipatory pre-training paradigm based on Transformer to model the temporal correlation of visual relationships.
Similarly, the VidOR dataset collected by Shang <cit.> is another popular benchmark.
Leading approaches generate proposals <cit.> for individual objects on short video snippets, encode the proposals, predict a relation, and associate the relations over the entire video <cit.>. Liu <cit.> generate the proposals in a sliding-window manner.
More recently, Gao <cit.> proposed a classification-then-grounding framework, which can avoid the high influence of proposal quality on performance.
Chen <cit.> performed a series of analyses on video relation detection.
In this paper, we use STTran <cit.> and Social Fabric <cit.> to capture the relation feature and insert our multi-label meta-weight network on top.
Cong <cit.> proposed a spatial-temporal Transformer to capture the spatial context and temporal dependencies for a dynamic scene graph.
Moreover, Chen <cit.> proposed an encoding that represents a pair of object tubelets as a composition of interaction primitives.
Both approaches provide competitive results and form a fruitful testbed for our meta-learning framework.
Multi-label long-tailed classification.
Multi-label long-tailed recognition is a challenging problem that deals with sampling differences and biased label co-occurrences <cit.>.
A few works have studied this topic, with most solutions based on new loss formulations.
Specifically, Wu <cit.> proposed a distribution-based loss for multi-label long-tailed image recognition.
More recently, Tian <cit.> proposed a hard-class mining loss for the semantic segmentation task by dynamically weighting the loss for each class based on instantaneous recall performance.
Inspired by these loss-based works, we utilize inverse frequency cross-entropy loss during our meta-learning process.
Meta learning for sample weighting.
Ren <cit.> pioneered the adoption of a meta learning framework to re-weight samples for imbalanced datasets.
Based on <cit.>, Shu <cit.> utilize an MLP to explicitly learn the weighting function.
Recently, Bohdal <cit.> presented EvoGrad to compute gradients more efficiently by preventing the computation of second-order derivatives in <cit.>.
However, these methods are targeted for multi-class single-label classification.
Therefore, we present the multi-label meta weight net for predicate classification, with an MLP that outputs a weight for each class loss.
§ MULTI-LABEL META WEIGHT NETWORK
Dynamic scene graph generation <cit.> takes a video as the input and generates directed graphs whose objects of interest are represented as nodes, and their relationships are represented as edges.
Each relationship edge, along with its connected two object nodes, form a ⟨subject, predicate, object⟩ semantic triplet.
These directed graphs are structural representations of the video's semantic information.
Highly related to dynamic scene graph generation, video relation detection <cit.> also outputs ⟨subject, predicate, object⟩ triplets, aiming to classify and detect the relationship between object tubelets occurring within a video.
Due to the high similarity between the two tasks, we consider them both in the experiments.
For brevity, we use the term dynamic scene graph generation to denote both tasks throughout this paper.
Action Genome <cit.> and VidOR <cit.> are two popular benchmark datasets for dynamic scene graph generation.
However, both datasets suffer from a long-tailed distribution in predicate occurrences, as shown in Figure <ref>.
The evaluation metrics forgo the class-wise differences and count all classes during inference, resulting in a trained predicate classifier with a strong bias toward head classes such as in_front_of and next_to.
Although these predicate classes are often spatial-oriented and object-agnostic, tail classes like carrying, twisting, and driving are of more interest to us.
In addition to the long-tailed distribution, predicate classification faces another challenge.
Since multiple relationships can occur between a subject-object pair simultaneously, predicate classification is a multi-label classification problem.
The co-occurrence of labels leads to head-class predicate labels frequently appearing alongside tail-class predicate labels, further exacerbating the imbalance problem.
In this paper, we propose a meta-learning framework that addresses on the long-tailed multi-label predicate classification task.
We introduce a Multi-Label Meta Weight Net (ML-MWN) to learn a weight vector for each training instance's multi-label loss.
The gradient of the sum of weighted loss is then calculated to optimize the classifier network's parameters during backward propagation.
Our model-agnostic approach can be incorporated into existing dynamic scene graph generation methods.
In particular, the framework includes two stages:
(1) Relation feature extraction, where we use existing dynamic scene graph generation methods to obtain the feature representation of the relation instances, and (2) multi-label meta-weighting learning.
We adopt a meta-learning framework to re-weight each instance's multi-label loss and propose learning an explicit weighting function that maps from training loss to weight vector.
We learn a weight vector for each training instance to re-weight its multi-label loss, i.e., the multi-label binary cross-entropy loss.
We achieve this by using an MLP, which takes the multi-label training loss as input and outputs the weight vector.
We sample a meta-validation set to guide the training of MLP.
Ideally, the meta-validation set should be clean and free from the long-tailed issue, as in <cit.>.
However, we cannot sample such a clean meta-validation set due to the label-occurrence issue.
To deal with the issue, we adopt the inverse frequency binary cross-entropy loss on meta-validation set.
In the following sections, we describe the ML-MWN and the meta-learning framework in detail.
§.§ Learning weights for multi-label losses
Let x_i denote the feature representation of i-th relation instance from the training set 𝒟 and y_i ∈ℝ^C represent the corresponding multi-label one-hot vector, where 𝒟 = { x_i, y_i }_i=1^N.
The multi-label predicate classifier network is represented by f_θ with θ as its parameters.
To enhance the robustness of training in the presence of long-tailed multi-label training instances, we impose weights
w_i, c
on the i-th instance's c-th class loss l_i, c.
Instead of pre-specifying the weights based on class size <cit.>, we learn an explicit weighting function directly from the data.
Specifically, we propose the ML-MWN (Multi-Label Meta Weight Net) denoted by g_ϕ, with ϕ as its parameters, to obtain the weighting vector for each relation instance's multi-label loss.
We use the loss from f_θ as the input.
A small meta-validation set 𝒟 = {x_j, y_j}_j=1^M, where M is the number of meta-validation instances and M ≪ N, is sampled to guide the training of ML-MWN.
The meta-validation set does not overlap with the training set.
The weighted losses are then calculated to guarantee that the learned multi-label predicate classifier is unbiased toward dominant classes.
During training, the optimal classifier parameter θ^* can be extracted by minimizing the training loss:
L^train(θ) = (1/n)(1/C) ∑_i=1^n∑_c=1^C w_i,c · l_i,c ,
where n is the number of training instances in a batch, and C is the number of classes.
During inference, we only use the optimal classifier network f_θ^* for evaluation.
§.§ The meta-learning process
We adopt a meta-learning framework to update the classifier and ML-MWN.
The meta-validation set represents the unbiased relation instances following a balanced predicate class distribution.
Due to the multi-label classification label-occurrence issue <cit.>, we employ an inverse frequency BCE loss on the meta-validation set to simulate a balanced label distribution.
As illustrated in Figure <ref>, the process comprises three main steps to optimize θ and ϕ within a batch.
Suppose we are at t-th iteration during training.
First, for a batch of n training instances with corresponding feature representations and multi-labels {x_i, y_i}, 1 ≤ i ≤ n, we feed x_i into the classifier and obtain ŷ_i = f_θ^t(x_i) ∈ℝ^C.
The unweighted BCE training loss is calculated as
l_i,c(θ^t) = - [ y_i,c · log( ŷ_i,c(θ^t) ) + (1 - y_i,c) · log( 1 - ŷ_i,c(θ^t) ) ].
Then l_i, c is fed into the ML-MWN to obtain the weight ŵ_i, c = g_ϕ^t( l_i, c (θ^t) ).
After calculating the weighted loss as ŵ_i, c· l_i, c, we update θ^t:
θ̂^t = θ^t - α (1/n)(1/C) ∑_i=1^n ∑_c=1^C g'_ϕ^t(l_i,c(θ^t)) ∇_θ l_i,c(θ) |_θ^t,
where α is the step size. We call the updated θ̂^t the pseudo classifier parameters since they are not used for the next batch.
In the second step, we update the ML-MWN parameters based on the meta-validation loss.
We feed the meta-validation relation instance into the pseudo classifier and obtain ŷ_j = f_θ̂^t(x_j) ∈ℝ^C.
Let M_c denote the total number of relation instances belonging to predicate class c ∈{1, …, C}.
The frequency of a predicate class is calculated as freq(c) = M_c / M.
By using inverse frequency weighting, the meta-validation loss is re-balanced to mimic a balanced predicate label distribution.
We then update the ML-MWN parameters ϕ on the meta-validation data:
ϕ^t+1 = ϕ^t - β (1/M) ∑_j=1^M ∑_c=1^C (1/freq(c)) ∇_ϕ l_j,c(θ̂^t) |_ϕ^t
= ϕ^t - β (1/M) ∑_j=1^M ∑_c=1^C (M/M_c) ∇_ϕ l_j,c(θ̂^t) |_ϕ^t ,
where β is the step size.
Lastly, the updated ϕ^t+1 is employed to output the new weights w_i, c.
The new weighted losses are used to improve the parameters θ of the classifier network:
θ^t+1 = θ^t - α (1/n)(1/C) ∑_i=1^n ∑_c=1^C g'_ϕ^t+1(l_i,c(θ^t)) ∇_θ l_i,c(θ) |_θ^t.
The ultimate goal is to guide the classifier network to achieve a balanced performance on the unbiased meta-validation set.
The sequence of steps is shown in Algorithm <ref>.
By alternating between standard and meta-learning, we can learn unbiased dynamic scene graphs by specifically increasing the focus on those examples and predicate classes that do not often occur in a dataset.
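A simplified PyTorch sketch of one such training iteration is given below. It is not the released implementation: the linear classifier over precomputed relation features, the sigmoid output of the weight net, and all dimensions and step sizes are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

C, D = 25, 512                      # number of predicate classes / feature dim (assumed)
alpha, beta = 1e-2, 1e-2            # step sizes (assumed)

theta = {"W": (0.01 * torch.randn(D, C)).requires_grad_(),
         "b": torch.zeros(C, requires_grad=True)}
mwn = torch.nn.Sequential(torch.nn.Linear(1, 100), torch.nn.ReLU(),
                          torch.nn.Linear(100, 1), torch.nn.Sigmoid())
opt_phi = torch.optim.SGD(mwn.parameters(), lr=beta, momentum=0.9, weight_decay=0.01)

def per_class_loss(params, x, y):
    """Element-wise BCE losses l_{i,c}, shape (batch, C); y is a multi-hot float tensor."""
    logits = x @ params["W"] + params["b"]
    return F.binary_cross_entropy_with_logits(logits, y, reduction="none")

def meta_iteration(x_tr, y_tr, x_val, y_val, inv_freq):
    # Step 1: pseudo-update of theta with the weighted training loss.
    l_tr = per_class_loss(theta, x_tr, y_tr)
    w = mwn(l_tr.detach().reshape(-1, 1)).reshape_as(l_tr)     # weights depend on phi
    grads = torch.autograd.grad((w * l_tr).mean(),
                                [theta["W"], theta["b"]], create_graph=True)
    theta_hat = {"W": theta["W"] - alpha * grads[0],
                 "b": theta["b"] - alpha * grads[1]}

    # Step 2: update phi with the inverse-frequency BCE loss on the meta-validation batch.
    meta_loss = (per_class_loss(theta_hat, x_val, y_val) * inv_freq).mean()
    opt_phi.zero_grad()
    meta_loss.backward()
    opt_phi.step()

    # Step 3: re-weight the training loss with the updated phi and update theta.
    l_tr = per_class_loss(theta, x_tr, y_tr)
    with torch.no_grad():
        w_new = mwn(l_tr.detach().reshape(-1, 1)).reshape_as(l_tr)
    g = torch.autograd.grad((w_new * l_tr).mean(), [theta["W"], theta["b"]])
    with torch.no_grad():
        theta["W"] -= alpha * g[0]
        theta["b"] -= alpha * g[1]

# inv_freq is a (C,) tensor holding 1/freq(c) computed from the meta-validation labels.
```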
§ EXPERIMENTS
§.§ Datasets
§.§.§ Action Genome
<cit.> is a dataset which provides frame-level scene graph labels.
It contains 234,253 annotated frames with 476,229 bounding boxes of 35 object classes (without person) and 1,715,568 instances of 25 relationship classes.
For the 25 relationships, there are three different types: (1) attention relationships indicating if a person is looking at an object or not, (2) spatial relationships describing where objects are relative to one another, and (3) contact relationships denoting the different ways the person is contacting an object.
In AG, there are 135,484 subject-object pairs.
Each pair is labeled with multiple spatial relationships (⟨phone-in front of-person⟩ and ⟨phone-on the side of-person⟩) or contact relationships (⟨person-eating-food⟩ and ⟨person-holding-food⟩).
There are three strategies to generate a scene graph with the inferred relation distribution <cit.>:
(a) with constraint allows each subject-object pair to have one predicate at most.
(b) semi constraint allows a subject-object pair has multiple predicates. The predicate is regarded as positive only if the corresponding confidence is higher than the threshold (0.9 in the experiments).
(c) no constraint allows a subject-object pair to have multiple relationships guesses without constraint.
Evaluation metrics.
We have three tasks for evaluation following <cit.>:
(1) predicate classification (PREDCLS): with the subject and object's ground truth labels and bounding boxes, only predict predicate labels of the subject-object pair.
(2) scene graph classification (SGCLS): with the subject and object's ground truth bounding boxes given, predict the subject, object's label and their corresponding predicate.
(3) scene graph detection (SGDET): detect the subject and object's bounding boxes and predict the subject, object, and predicate's labels.
The object detection is regarded as positive if the IoU between the predicted and ground-truth box is at least 0.5.
Since traditional metrics Recall@K (R@K) are not able to reflect the impact of long-tailed data, we use the mean Recall@K (mR@K), which evaluates the R@K (K = [10, 20, 50] of each relationship class and averages them. We follow the same selection of K as <cit.>.
Implementation details.
We randomly sample 10% samples from the training set as the meta-validation set.
In line with <cit.>, we adopt the Faster-RCNN <cit.> based on the ResNet101 <cit.> as the object detection backbone.
The Faster-RCNN model is trained on AG and provided by Cong <cit.>.
We use an AdamW <cit.> optimizer with an initial learning rate 1e^-4 and batch size 1 to train our relation feature model STTran part.
We train ML-MWN using SGD with a momentum of 0.9, weight decay of 0.01, and an initial learning rate of 0.01.
We train for 10 epochs.
Other hyperparameter settings are identical to Cong <cit.>.
If not specified, the ML-MWN is an MLP of 1-100-1.
§.§.§ VidOR
<cit.> is a dataset that includes 10,000 user-generated videos selected from YFCC-100M <cit.>, totaling approximately 84 hours of footage.
It contains 80 object categories and 50 predicate categories.
Besides providing annotated relation triplets, the dataset also provides bounding boxes of objects.
VidOR is split into a training set with 7,000 videos, a validation set with 835 videos, and a testing set with 2,165 videos.
Since the ground truth of the test set is unavailable, we follow <cit.> and use the training set for training and the validation set for testing.
We report the analysis of method performance on the VidOR validation set.
Evaluation metrics.
We use the relation detection task for evaluation.
The output requires a ⟨subject, predicate, object⟩ triplet prediction, along with the subject and object boxes.
We adopt mR@K (K = [50, 100]) as the evaluation metric.
We disregard the mAP used in Chen <cit.> because we are more concerned with covering ground truth relationships belonging to tail classes during predictions.
Calculating mR@K.
For annotated video I_v, its G_v ground truth relationship triplets contain G_v, c ground truth triplets with relationship class c.
With C relationship classes, the model successfully predicts T_v, c^K triplets.
In the V videos of validation/test dataset, for relationship c, there are V_c videos containing at least one ground truth triplet with this relationship.
The R@K of relationship c can be calculated:
R@K_c=1/V_c∑_v=1,G_v, c≠0^V_cT_v, c^K/G_v, c
Then we can calculate
mR@K=1/C∑_c=1^CR@K_c.
Implementation details.
We randomly sample 10% samples from the training set as the meta-validation set.
Our experiments are conducted using 1 NVIDIA V100 GPU.
We adopt the same training strategy of Chen <cit.> for the relation feature extraction model.
First, we detect all objects in each video frame using Faster R-CNN <cit.> with a ResNet-101 <cit.> backbone trained on MS-COCO <cit.>.
The detected bounding boxes are linked with the Deep SORT tracker <cit.> to obtain individual object tubelets.
Then, each tubelet is paired with any other tubelet to generate the tubelet pairs.
We extract spatial location features <cit.>, language features, I3D features, and location mask features for each pair.
Then the multi-modal features are used as the representation of the relation instance.
For the classifier and ML-MWN, we use an SGD optimizer with an initial learning rate of 0.01 and train 10 epochs.
§.§ Multi-label meta weighting on top of the state-of-the-art
Video scene graph generation.
First, we investigate the effect of incorporating our meta-learning approach on top of existing state-of-the-art methods for scene graph generation in videos and video relation detection.
We build upon the recent STTran approach of Cong <cit.> for video scene graph generation.
We compare STTran as is and as a baseline that uses conventional meta-learning without considering the multi-label nature of scene graphs, namely MW-Net <cit.>.
Table <ref> shows the results for the with constraints setting.
Across the PredCLS, SGCLS, and SGDET tasks, incorporating our meta-learning approach improves the results.
For PredCLS, our proposed STTran + ML-MWN enhances mR@10 by 5.27, compared to the STTran baseline.
On mean recall @ 50, we improve the scores by 4.98, from 39.66 to 44.64.
On SGDET, the mean recall @ 50 increases from 22.89 to 28.52.
The MW-Net baseline already improves the STTran results, emphasizing the overall potential of meta-learning to address the long-tailed nature of scene graphs.
However, our proposed multi-label meta-learning framework performs best across all tasks and recall thresholds.
This improvement is a direct result of increasing the weight of classes in the long tail when optimizing the classifier network.
The results are consistent for the semi constraint and no constraint settings, as shown in Table <ref> and Table <ref>.
In Table <ref>, the mean recall is higher than in the with constraint setting since more predicted results are involved.
For the SGCLS task, our framework achieves 50.60% on mR@20, which is 6.33% better than STTran and 3.49% better than STTran + MW-Net.
Our framework outperforms all metrics in the no constraint setting.
In particular, for SGDET, our method reaches 27.59% at mR@10, 5.95% better than STTran, and 3.35% higher than STTran + MV-Net. We conclude that our meta learning framework is effective for video scene graph generation and can be adopted by any existing work.
In Table <ref>, the mean recall is the highest among the three settings. Unlimited predictions contribute to enhanced recall performance.
Under this setting, STTran + ML-MWN still achieves the best on all metrics across all tasks.
The results prove our method's generality on various tasks with different settings.
Video relation detection. For video relation detection, we begin with the recent Social Fabric approach by Chen <cit.>.
Table <ref> demonstrates the effect of incorporating our proposed meta learning framework for relation detection.
The Social Fabric baseline, which is the current state-of-the-art in this setting, struggles to achieve good results for relation detection using mean recall as metrics.
This underlines the problem's difficulty.
This holds similarly for the baseline by Sun etal <cit.>.
When incorporating MW-Net <cit.>, the results noticeably improve and further enhance with multi-label meta weighting.
For mR@50, adding our meta-learning on top of Social Fabric boosts the results from 2.37 to 6.35.
We conclude that multi-label meta-learning is crucial in video relation detection to achieve meaningful relation detection recalls across all classes.
§.§ Analyses, ablations, and qualitative examples
Predicate-level analysis. We present the class-wise R@10 of the predicate classification task on Action Genome in Figure <ref>.
Observing Figure <ref>, we see that our method surpasses STTran <cit.> in all predicate categories.
The improvement is much more significant for tail classes with limited training samples compared to head classes.
The superior performance demonstrates that the meta-validation set effectively guides the classifier to balance the tail classes without compromising the performance of head predicate classes.
Ablating the MLP architecture.
We conduct an ablation study on the MLP architecture for the PredCLS task on Action Genome.
Table <ref> shows the results for six structures with varying depths and widths.
We find that maximum width and depth are not necessary, with the best results achieved by the 1-100-1 variant, which we use as default in all experiments.
Qualitative examples.
We provide the qualitative results in Figure <ref> and Figure <ref>.
In Figure <ref>, we compare our method with STTran <cit.> on the Action Genome dataset.
Our method demonstrates better recognition of tail predicates in Action Genome.
In the top row, STTran incorrectly classifies the tail class beneath as the head class in front of, and sit on as touch.
In the bottom row, STTran misses drink from amongst others, while our method classifies them all correctly.
In Figure <ref>, we compare our method with Social Fabric <cit.> on the VidOR dataset.
Social Fabric fails to detect the tail class lean_on in all frames, while our method successfully predicts it.
§ CONCLUSION
Predicate recognition plays a crucial role in contemporary dynamic scene graph generation methods, but the long-tailed and multi-label nature of the predicate distribution is commonly ignored.
We observe that rare predicates on popular benchmarks are inadequately recovered or even disregarded by recent methods.
To move toward unbiased scene graph generation in videos, we propose a multi-label meta-learning framework that learns to weight samples and classes to optimize any predicate classifier effectively.
Our approach is versatile and can be incorporated into any existing methods.
Experiments demonstrate the potential of our multi-label meta-learning framework, with superior overall performance and an improved focus on rare predicates.
We believe our method could be extended to other multi-label long-tailed recognition tasks and may offer inspiration for future research.
ACM-Reference-Format
|
http://arxiv.org/abs/2306.02073v2
|
20230603103446
|
Verifying C++ Dynamic Binding
|
[
"Niels Mommen",
"Bart Jacobs"
] |
cs.PL
|
[
"cs.PL"
] |
We propose an approach for modular verification of programs written in an object-oriented language where, like in , the same virtual method call is bound to different methods at different points during the construction or destruction of an object. Our separation logic combines Parkinson and Bierman's abstract predicate families with essentially explicitly tracking each subobject's vtable pointer. Our logic supports polymorphic destruction. Virtual inheritance is not yet supported. We formalised our approach and implemented it in our VeriFast tool for semi-automated modular formal verification of programs.
Ebola transmission dynamics: will future Ebola outbreaks become cyclic?
[
July 31, 2023
=======================================================================
§ INTRODUCTION
Despite the rise of safer alternatives like Rust, is still an extremely widely-used language, often for code that is safety- or security-critical <cit.>. Modular formal verification can be a powerful tool for gaining assurance that programs satisfy critical safety or security requirements; however, so far no modular formal verification approaches have been proposed for programs. There has been much work on modular verification of C programs, and on modular verification of object-oriented languages, including languages with multiple inheritance. However, these are not directly applicable to , in large part due to its peculiar semantics of dynamic binding during object construction and destruction. In this paper, we propose what we believe to be the first Hoare logic <cit.> for an object-oriented language that reflects 's semantics of dynamic binding in the presence of constructors and destructors. Our separation logic <cit.> combines Parkinson and Bierman's abstract predicate families <cit.> with essentially explicitly tracking each subobject's vtable pointer. Our logic also supports polymorphic destruction (applying the 𝐝𝐞𝐥𝐞𝐭𝐞 operator to an expression whose static type is a supertype of its dynamic type). Virtual inheritance, however, is not yet supported.
The remainder of this paper is structured as follows. In <ref> we introduce the syntax and operational semantics of the minimal -like language that we will use to present our approach. In <ref> we introduce our separation logic. In <ref> we illustrate an example annotated with a proof outline of our program logic. We end with a discussion of related work (<ref>) and a conclusion (<ref>).
§ A MINIMAL -LIKE LANGUAGE
The syntax of our minimal object-oriented programming language is shown in Fig. <ref>. We assume infinite disjoint sets 𝒞 of class names, ℳ of method names, ℱ of field names, and 𝒳 of variable names, ranged over by symbols C, m, f, and x, respectively. We assume ∈𝒳. For now, we also assume a set 𝒜 of assertions, ranged over by P and Q. We will define the syntax of assertions in <ref>.
A program consists of a sequence of class definitions, followed by a command that gets executed when the program starts.
For the remainder of the formal treatment, we fix a program prog. Whenever we use a class class as a proposition, we mean class ∈ prog.
For all C, we define the set bases(C) as the set of all direct base classes of C:
C : C {⋯}⇒ bases(C) = {C}
An object pointer o ∈𝒪 is either an allocation pointer of the form (id : C*) where id ∈ℕ is an allocation identifier, or a subobject pointer of the form o_C where o is an object pointer:
o ∈𝒪 (id : C*) | o_C
We use notation oC to denote the object pointed to by o has static type C:
oC⇔ (∃ id. o = (id: C*)) ∨ (∃ o'. o = o'_C)
Notice that for simplicity, the values of our language are only the object pointers and the value. Furthermore, fields and other variables are untyped and hold scalar values only. That is, objects never appear on the stack or as (non-base) subobjects of other objects.
We define a heap, ranged over by h, as a finite set of resources. Resources, ranged over by α, are defined as follows:
α(id) |(o) | o f ↦ v |oC
where (id) means that an object with allocation identifier id has been allocated, (o) means that the object pointed to by o (always an allocation pointer) has been fully constructed and is not yet being destructed. Resource o f ↦ v means that field f of the object pointed to by o has value v, and oC means that the dynamic type of the object pointed to by o (always a leaf object, whose class has no bases)[This corresponds to the fact that in , objects that have polymorphic base subobjects can reuse the (first) polymorphic base subobject's vtable pointer. Note: in this paper, for simplicity we do not consider non-polymorphic classes, i.e. classes that do not declare or inherit any virtual members.] is C.
We define dtype(o, C) as the set of all resources of its leaf base objects, or its own resource when it does not have any base objects, given that oC':
dtype(o, C) def={[ {oC} bases(C') = ∅; 1 ≤ i ≤ n⋃ dtype(o_C_i, C) bases(C') = C_1… C_n ].
We say an object pointed to by o has dynamic type C in a heap h if and only if dtype(o, C) ⊆ h. Notice that a non-leaf object has dynamic type C if and only if all of its bases have dynamic type C. As we will see, dynamically dispatched calls on an object o are dispatched to the dynamic type of o. If an object o has no dynamic type in our language, dynamically dispatched calls get stuck. As we will also see, an object o has no dynamic type while its bases are being constructed or destructed, nor while unrelated (i.e. neither enclosed nor enclosing) subobjects of the allocation are being constructed or destructed. It has a dynamic type only while its own constructor's or destructor's body, or the body of an enclosing object's constructor or destructor is executing, and between the point where its enclosing allocation is fully constructed and the point where it starts being destructed.
We use o ↓ C (o downcast to C) to denote the pointer to the enclosing object of class C of the object pointed to by o:
oC
o ↓C = o
o ↓C = o'
o_C' ↓C = o'
We use h, e ⇓ h', v to denote that when evaluated in heap h, expression e evaluates to value v and post-heap h'. Similarly, we use h, c ⇓ h' and h, o C(e) ⇓ h' and h, oC() ⇓ h' to denote that command c, constructor call o C(e), and destructor call oC(), when executed in heap h, terminate with post-heap h', respectively. These judgments are defined by mutual induction; we show selected rules in Fig. <ref>. (The complete set of rules can be found in the appendix.)
Notice, first of all, that a statically dispatched call e C::m(e) gets stuck if class C does not declare a method m, even if some base does declare such a method: in our minimal language, classes do not inherit methods from their bases. The same holds for dynamically dispatched calls.[Of course, a program that does rely on method inheritance can be trivially translated into our minimal language by inserting overrides that simply delegate to the appropriate base. Importantly, however, those overrides will have to be verified as part of the correctness proof (see <ref>); their correctness does not hold automatically.]
Evaluation of C(e) picks an unused allocation identifier id and produces (i.e. adds to the heap) (id) to mark it as used, then executes the constructor call, and finally produces (o) to mark o as fully constructed.
Executing a constructor call o C(e) is somewhat involved. If C has no bases, the argument expressions are evaluated, the fields are produced, oC is produced, and the constructor body is executed. Considered together with ODynamicDispatch, this means that dynamically dispatched calls on in the constructor body are dispatched to class C itself, even if C is not the most derived class of the allocation.
Now consider the case where C does have bases. Executing constructor call o C(e) evaluates the argument expressions and then executes each base class' constructor on the corresponding base subobject. After executing the constructor for base C_i, dtype(o_C_i, C_i) is consumed (i.e. removed from the heap); after all base subobjects have been initialized, dtype(o, C) is produced. This means that, during execution of the body of the constructor of class C, dynamically dispatched calls on o or on any base subobject of o are dispatched to class C. After an allocation of class C is fully constructed, and until it starts being destructed, its dynamic type (and that of all of its subobjects) is C.
Execution of a destructor call o C() performs the exact reverse process: it executes the destructor body, consumes dtype(o, C) and the fields, and destructs the base subobjects. Before destructing the subobject for base C_i, dtype(O_C_i, C_i) is produced, so that during execution of the body of the destructor of an object o of class C, dynamically dispatched calls on o are dispatched to class C. After destruction of an allocation completes, only the resource remains, to ensure that no future allocation is assigned the same identifier.[This reflects the fact that pointers in become invalid permanently after the allocation they point to is deallocated, even if some future allocation happens to reuse the same address.]
Deleting an object gets stuck unless its enclosing allocation is fully constructed and is not yet being destructed, as indicated by the presence of the resource. Since this resource always holds an allocation pointer, it is always the entire allocation that is destroyed, even if the argument to is a pointer to a subobject.
We use judgments h, e and h, c and h, o C(e) and h, oC() to denote that an expression, command, constructor call, or destructor call diverges (i.e. runs forever without terminating or getting stuck), respectively. These judgments' definitions can be derived mechanically <cit.> from the definitions of the termination judgments and are therefore elided.
§ A PROGRAM LOGIC FOR DYNAMIC BINDING
A class definition in our language includes a list of abstract predicates. A predicate declaration in a class defines its entry for the corresponding predicate family, i.e., a class defines its own definition for the abstract predicate, which can be overridden by derived classes. As we will see, predicate assertions involve a class index to refer to the definition of the predicate declared in that class.
We use a context Γ, which is a sequence of class definitions.
§.§ Assertions
Predicate definitions, method specifications, constructor specifications, and destructor specifications consist of assertions, ranged over by P and Q:
[ P, Q || P ∧ Q | P ∨ Q | P ∗ Q |∃ x. P; |ε f ↦ε|ε p_ε(ε) |(ε, ε) |εε; ν v | C; ε x |ν ]
where P ∗ Q is the separating conjunction of assertions P and Q, which informally means that assertion P and Q must be satisfied in disjoint portions of the heap. Assertion ε p_ε'(ε”) is a predicate assertion p with class index ε' on the target object pointed to by ε.
We show the semantics of the most interesting assertions:
[ I,h o p_C(ν) ⇔ ∃ o'. o ↓ C = o' (h, o', p, C, ν) ∈ I; I,h (o, C) ⇔ ∃ o'. o ↓ C = o' (o') ∈ h; I,h oC ⇔ dtype(o, C) ⊆ h; I,h o f ↦ v ⇔ o f ↦ v ∈ h ]
where I,h P means that assertion P is satisfied, given heap h and interpretation of predicates I. An interpretation of predicates is the least fixpoint of the program's predicate definitions considered together.
We define the assertion weakening relation Γ⊢ P ⇒_a Q by induction, where every judgment P ⇒_a Q should be read as Γ⊢ P ⇒_a Q:
[ADyntype]
oC
bases(C) = C_1 …C_n
n > 0
oC' ⇔_a o_C_1C' ∗…∗o_C_nC'
[AFrame]
P ⇒_a P'
P ∗Q ⇒_a P' ∗Q
[ATrans]
P ⇒_a P'
P' ⇒_a P”
P ⇒_a P”
[AMoveCted]
oC
C' ∈bases(C)
C' ≠C”
(o, C”) ⇔_a (o_C', C”)
[AImply]
∀I,h. I,h P ⇒I,h P'
P ⇒_a P'
[AMovePred]
oC
C' ∈bases(C)
C' ≠C”
o p_C”(ν) ⇔_a o_C' p_C”(ν)
[APredDef]
oC
C ⋯{ ⋯ p(x) = P ⋯} ∈Γ
o p_C(ν) ⇔_a P[o/,ν/x]
Weakening rule APredDef allows to switch between a predicate assertion and the definition of the predicate corresponding to the class index. The class index must be a class name declared in the program.
AMovePred and AMoveCted allow to transfer predicate and assertions between base and derived objects. It is not possible to transfer such an assertion to an object whose dynamic type is a subtype of the predicate index and allocation class, respectively.
Weakening rule ADyntype states that the dynamic type assertion of a non-leaf object can be exchanged for all dynamic type assertions of its direct base objects. This means that the dynamic type of a base object can be retrieved if the dynamic type of its direct derived object is known. The other way around, it is possible to derive the dynamic type of a derived object if the dynamic type of all its direct base classes is known.
§.§ Expression and command verification
The verification rules for the most interesting expressions and commands are listed in Fig. <ref>, together with the verification rules for constructor and destructor invocations. These rules are related to object allocation and deallocation, and static and dynamic dispatching. (The complete set of verification rules can be found in the appendix).
In method and destructor specifications, we use special variable θ to refer to the class of the target object of the call. This variable is assumed to be equal to the containing class during verification of the method or destructor. This is sound, because we require that a class overrides all methods of all its direct base classes, as we will later see. Hence when a call is dynamically dispatched, it will always be bound to the method declared in the class corresponding with the dynamic type of the target object.
Variable θ is substituted with the dynamic type of the target object and the static type of the target object during verification of dynamically dispatched calls and statically dispatched calls, respectively. This mechanism allows to use the specification for the method or destructor in the class corresponding to the static type of the method or destructor target.
§.§ Constructor verification
The verification rule for constructors follows OConstruct from our operational semantics: the direct base constructor invocations are verified in order of inheritance, prior to initializing the fields of the object and verifying the command in the constructor's body. Virtual calls are always dispatched to the (sub)object under construction.
∀oC, v.
P[v/x] = P_0
{ P_0 } o_C_1 C_1(e_1[o/,v/x]) { P_1 ∗o_C_1C_1 }
…
{ P_n-1 } o_C_n C_n(e_n[o/,v/x]) { P_n ∗o_C_nC_n }
{ P_n ∗o f ↦ ∗oC } c[o/,v/x] { Q[o/, v/x] }
Γ⊢C(x) P Q : C_1(e_1) …C_n(e_n) { c } correct in C
§.§ Behavioral subtyping
We follow Parkinson and Bierman's approach <cit.> to check whether specifications of overriding methods satisfy behavioral subtyping. A specification { P_D }{ Q_D } of an overriding method in derived class D implies a specification { P_B }{ Q_B } of a method in base class B, if for all commands c, values v and object pointers oB with a well-defined downcast o' = o ↓ D that satisfy { P_D[S_D] } c { Q_D[S_D] }, it holds that { P_B[S_B] } c { Q_B[S_B] } is also satisfied, with S_B = o/,D/θ,v/x and S_D = o'/,D/θ,v/x. This holds when a proof tree exists using the structural rules of Hoare and Separation logic, with leaves Γ⊢{ P_D[S_D] }{ Q_D[S_D] } and root Γ⊢{ P_B[S_B] }{ Q_B[S_B] }:
*
Γ⊢{ P_D[S_D] }{ Q_D[S_D] }
*
⋮
Γ⊢{ P_B[S_B] }{ Q_B[S_B] }
We use notation Γ⊢{ P_D }{ Q_D }DB{ P_B }{ Q_B } to denote that such a proof exists.
§.§ Method verification
The verification rule for correctly overriding a method checks that (1) the specification for method m in derived class C satisfies behavioral subtyping for base class C' which also declares m, and (2) recursively checks this condition for all direct base classes of C'. We use methods(C) to denote all methods declared in class C.
C ⋯{ ⋯ m(x) P Q ⋯} ∈Γ
C' ⋯{ ⋯ m(x) P' Q' ⋯} ∈Γ
Γ⊢{ P }{ Q } CC' { P' }{ Q' }
[ ∀C” ∈bases(C'). m ∈methods(C”) ⇒; Γ⊢override of m in C” correct in C ]
Γ⊢override of m in C' correct in C
Method m in class C is correct if (1) the override check for all base classes of C that declare m succeeds and (2) the method body satisfies its specification given that the target class type is C.
[ ∀C' ∈bases(C). m ∈methods(C') ⇒; Γ⊢override of m in C' correct in C ]
∀oC, v.
{ P[o/, C/θ, v/x] } c[o/, v/x] { Q[o/, C/θ, v/x] }
Γ⊢m(x) P Q { c } correct in C
§.§ Destructor verification
The verification rule for correctly overriding a destructor is similar to the verification rule for correctly overriding a method. The difference is that it recursively checks the rule for all bases because every class must declare a destructor in our language.
The verification rule for destructors again resembles the operational semantics and follows the reverse process of its corresponding constructor. The command of the body is first verified, followed by the removal of the object's fields and verification of the direct base destructor invocations in reverse order of inheritance. Virtual member invocations are dispatched to the (sub)object under destruction.
∀C' ∈bases(C).
Γ⊢override of destructor in C' correct in C
∀oC.
bases(C) = C_1 …C_n
P_0 = Q
{ P[o/, C/θ] } c[o/] { P_n ∗o f ↦ ∗oC }
{ P_n ∗o_C_nC_n } o_C_n C_n() { P_n-1 }
…
{ P_1 ∗o_C_1C_1 } o_C_1 C_1() { P_0 }
Γ⊢C() P Q { c } correct in C
§.§ Program verification
Verification of a class succeeds if verification for its constructor, destructor, and methods succeeds. We additionally require that a derived class overrides all methods declared in its base classes. This requirement renders our assumption sound that the dynamic type of the target object during verification of a destructor or method is the class type of the enclosing class it is declared in.
A program is correct if verification of all its classes succeeds, and its main command is verifiable given an empty heap.
prog = class c
⊢class correct
⊢{ } c { }
⊢program correct
Given that the program is correct, the main command, when executed in the empty heap, does not get stuck (i.e. it either terminates or diverges):
⊢program correct prog = class c ⇒∅, c ⇓_∅, c
§ EXAMPLE PROOF OUTLINE
This section shows an example in our formal language, annotated with its proof outline. It illustrates a node class which inherits from both a target class and source class . A target and source can have a source and target, respectively. A node is initially its own target and source.
The example illustrates dynamic dispatch during construction and shows that our program logic is applicable in the presence of multiple inheritance. The main command shows how our proof system can handle polymorphic deletion of objects. The proof outline for is symmetric to the one shown in , and is therefore omitted. Empty bodies implicitly contain a command.
[style=proof]code/node.cpp
The proof that the specification of implies the specification of , can be constructed as follows:
*[Right=AMovePred]
*[Right=APredDef]
*[Right=APredDef]
{ this sdynN() ∗this SokN() }
{ }
{
thiss sdynS() ∗thisT tdynT()
∗ thisS SokS() ∗thisT TokT()
}
{ }
{ this tdynN() ∗this TokN() }
{ }
{ thisT tdynN() ∗thisT TokN() }
{ }
The behavioral suptyping proofs for the specifications of and , and the proof that the specification of implies the specification of , can be established trivially using assertion weakening rule AMovePred.
§ RELATED WORK
Parkinson and Bierman's work <cit.> introduces abstract predicate families. Their proof system allows a derived class to extend a base class, restrict the behavior of its base class, and alter the behavior of the base class while preserving behavioral subtyping. Method specifications consist of a dynamic and static specification, used for dynamically and statically dispatched calls, respectively. We derive these specifications from the same specification, using special variable θ. Their proof system only accounts for single inheritance without the presence of virtual destructors.
<cit.> define operational semantics for a subset of , including construction and destruction in the presence of multiple inheritance and virtual methods that are dynamically dispatched. Their semantics encode the evolution of an object's dynamic type during construction and destruction. However, they only consider stack-allocated objects. This means that the concrete dynamic type of an object is always statically known at the point of its destruction.
<cit.> extend the work of Parkinson and Bierman to a separation logic for object-oriented programs with multiple inheritance and virtual methods calls that are dynamically dispatched. They only consider virtual inheritance, which means that an object cannot have two base subobjects of the same class type. Furthermore, their logic does not support destructors, so polymorphic deletion is not considered. In their proof system, the dynamic type of an object is fixed after allocation, whereas we model the evolution of the dynamic type of an object during its construction and destruction.
BRiCk <cit.>, built upon the separation logic of Iris <cit.>, is a program logic for . The Frama-Clang plugin of Frama-C <cit.> enables analysis of programs, supporting the ACSL specification language. Both tools support dynamic dispatching and model the evolution of an object's dynamic type through its construction and destruction. However, at the time of writing, no literature on these tools’ approaches has appeared.
§ CONCLUSION
In this paper we proposed a separation logic for modular verification of programs where virtual method calls are bound to different methods at different points during the construction and destruction of objects. Additionally, we support polymorphic destruction where the static type of an object is a supertype of its dynamic type.
We defined the operational semantics of our language related to allocation and deallocation, construction and destruction, and method dispatching, and listed the corresponding proof rules for verification.
Next, we illustrated an example program annotated with a proof outline, to support our verification approach. This example indicates that our separation logic can be used to verify dynamic binding in the presence of multiple inheritance. To our knowledge, we are the first to define a Hoare logic which reflects 's semantics of dynamic binding in the presence of constructors an destructors.
We implemented our approach <cit.> as part of our effort to extend our VeriFast tool for semi-automated modular formal verification of C and Java programs with support for . The implementation in VeriFast additionally supports bases that are non-polymorphic. One limitation is that our current operational semantics and separation logic does not consider virtual inheritance.
ACM-Reference-Format
§ OPERATIONAL SEMANTICS
The operational semantics of expressions, commands, and constructor and destructor invocations are defined by mutual induction:
[OLookup]
h, e ⇓h' ⊎o f ↦v , o
h, e f ⇓h' ⊎o f ↦v , v
[ODeleteNull]
h, e ⇓h',
h, (e) ⇓h'
[OVal]
h, v ⇓h, v
[OUpdate]
h, e ⇓h', o
h', e' ⇓h” ⊎o f ↦v , v'
h, e f e' ⇓h” ⊎o f ↦v'
[OLet]
h, e ⇓h', v
h', c[v/x] ⇓h”
h, x e c ⇓h”
[OSeq]
h, c ⇓h'
h', c' ⇓h”
h, c; c' ⇓h”
[OSkip]
h, ⇓h
[OUpcast]
h, e ⇓h', o
oC
C' ∈bases(C)
h, (C'*) e ⇓h', o_C'
[OStaticDispatch]
C ⋯{ ⋯ m(x) { c } ⋯}
h, e ⇓h', o
oC
h', e ⇓h”, v
h”, c[o/,v/x] ⇓h”'
h, e C::m(e) ⇓h”'
[ODynamicDispatch]
C ⋯{ ⋯ m(x) { c } ⋯}
h, e ⇓h', o
h', e ⇓h”, v
dtype(o, C) ⊆h”
o' = o ↓C
h”, c[o'/,v/x] ⇓h”'
h, e m(e) ⇓h”'
[OConstruct]
C : C_1 …C_n { f ; ⋯C(x) : C_1(e_1) …C_n(e_n) { c } ⋯}
h, e ⇓h_0, v
h_0, o_C_1 C_1(e_1[o/,v/x]) ⇓h_1 ⊎dtype(o_C_1, C_1)
⋮
h_n-1, o_C_n C_n(e_n[o/,v/x]) ⇓h_n ⊎dtype(o_C_n, C_n)
h_n ⊎o f ↦ ⊎dtype(o, C), c[o/,v/x] ⇓h'
h, o C(e) ⇓h'
[ONew]
o = (id: C*)
(id) ∉h
h ⊎(id) , o C(e) ⇓h'
h, C(e) ⇓h' ⊎(o, C) , o
[ODestruct]
C : C_1 …C_n { f ; ⋯ C() { c } ⋯}
h, c[o/] ⇓h_n ⊎dtype(o, C) ⊎o f ↦v
h_n ⊎dtype(o_C_n, C_n), o_C_n C_n() ⇓h_n-1
⋮
h_1 ⊎dtype(o_C_1, C_1), o_C_1 C_1() ⇓h_0
h, o C() ⇓h_0
[ODelete]
o' = o ↓C
h, e ⇓h' ⊎(o', C) , o
h', o' C() ⇓h”
h, (e) ⇓h”
§ ASSERTION SEMANTICS
The semantics of assertions are defined as follows:
[ I,h b ⇔ b =; I,h P ∗ Q ⇔ [ ∃ h_1,h_2. h = h_1 ⊎ h_2 ∧; I,h_1 P ∧ I,h_2 Q ]; I,h P ∧ Q ⇔ I,h P ∧ I,h Q; I,h P ∨ Q ⇔ I,h P ∨ I,h Q; I,h ∃ x. P ⇔ ∃ν. I,h P[ν/x]; I,h o p(C,ν) ⇔ ∃ o'. o ↓ C = o' (h, o', p, C, ν) ∈ I; I,h (o, C) ⇔ ∃ o'. o ↓ C = o' o'C∈ h; I,h oC ⇔ dtype(o, C) ⊆ h; I,h o f ↦ v ⇔ o f ↦ v ∈ h ]
where I,h P means that assertion P is satisfied, given heap h and interpretation of predicates I. Cases not listed are false.
§ PROOF RULES
We define evaluation contexts for expressions and commands as follows:
K_e ∙| K_e f | C(v K_e e) | (C*) K_e
K_c ∙| K_e | K_e f e | o f K_e | K_e C::m(e)
| o C::m(v K_e e) | K_e m(e) | o m(v K_e e)
We use the notation K[e] to denote the context K with expression e substituted for the hole ∙.
[HFrame]
{P} c {Q}
{P ∗R} c {Q ∗R}
[HConseq]
P ⇒_a P'
{ P' } c { Q' }
Q' ⇒_a Q
{ P } c { Q }
[HNull]
{ } { = }
[HPointer]
{ } o { = o }
[HLookup]
{ o f ↦v } o f { o f ↦v ∧ = v }
[HUpdate]
{ o f ↦ } o f v { o f ↦v }
[HLet]
{ P } e { Q }
∀v. { Q[v/] } c[v/x] { R }
{ P } x e c { R }
[HSeq]
{ P } c { Q }
{ Q } c' { R }
{ P } c; c' { R }
[HSkip]
{ P } { P }
[HContext]
{ P } e { Q }
∀v. { Q[v/] } K[v] { R }
{ P } K[e] { R }
[HConsContext]
{ P } e { Q }
∀v. { Q[v/] } o C(v v e) { R }
{ P } o C(v e e) { R }
[HConstruct]
C ⋯{ ⋯C(x) P Q ⋯} ∈Γ
{ P[v/x]) } oC(v) { Q[v/x, o/] }
[HNew]
C ⋯{ ⋯C(x) P Q ⋯} ∈Γ
{ P[v/x] } C(v) { Q[v/x, /] ∗(, C) }
[HDestruct]
C ⋯{ ⋯ C() P Q ⋯} ∈Γ
{ P[o/, C/θ] } o C() { Q }
[HDeleteNull]
{} () {}
[HDelete]
C ⋯{ ⋯ C() P Q ⋯} ∈Γ
oC
{ (o, C') ∗P[o/, C'/θ] } (o) { Q }
[HStaticDispatch]
C ⋯{ ⋯ m(x) P Q ⋯} ∈Γ
oC
{ P[o/,C/θ,v/x]} o C::m(v) { Q[o/, C/θ, v/x] }
[HDynamicDispatch]
C ⋯{ ⋯ m(x) P Q ⋯} ∈Γ
oC
{ oC' ∧P[o/, C'/θ, v/x] } o m(v) { Q[o/, C'/θ, v/x] }
[HExists]
∀v. { P[v/x] } c { Q }
{ ∃x. P } c { Q }
[HUpcast]
oC
C ∈bases(C')
{ P[o_C/] } (C*) o { P }
§.§ Destructor override check
C ⋯{ ⋯ C() P Q ⋯} ∈Γ
C' ⋯{ ⋯ C'() P' Q' ⋯} ∈Γ
Γ⊢{ P }{ Q } CC' { P' }{ Q' }
[ ∀C” ∈bases(C'). Γ⊢override of destructor in C” correct in C ]
Γ⊢override of destructor in C' correct in C
§.§ Class verification
class = C ⋯{ ⋯ctor dtor meth }
Γ⊢ctor correct in C
Γ⊢dtor correct in C
Γ⊢meth correct in C
Γ⊢class correct
§ SOUNDNESS
Due to the fact that our assertion language does not allow predicate assertions in negative positions (i.e. under negation or on the left-hand side of implication), we have the following property:
The semantics of assertions is monotonic in the predicate interpretation I:
I ⊆ I' I, h P ⇒ I', h P
By induction on the structure of P.
We define a function F on predicate interpretations as follows:
F(I) = {
(h, o, p, C, v) | [ C ⋯ { ⋯ p(x) = P; ⋯ }; I, h P[o/, v/x] ]}
We define the program's predicate interpretation I_program by
I_program = ⋂{I | F(I) ⊆ I}.
By the Knaster-Tarski theorem, I_program is a fixpoint of F: F(I_program) = I_program.[It is in fact the least fixpoint.] We use notation h P to mean I_program, h P.
P ⇒_a Q h P ⇒ h Q
By induction on the derivation of P ⇒_a Q.
We define semantic counterparts of the correctness judgments of our proof system as follows:
[ {P} e {Q}⇔; (∀ h, h_f. h P ⇒[ h ⊎ h_f, e; [ ∃ h', v. h ⊎ h_f, e ⇓ h' ⊎ h_f, v; h' Q[v/] ] ]); ; {P} c {Q}⇔; (∀ h, h_f. h P ⇒[ h ⊎ h_f, c; ∃ h'. h ⊎ h_f, c ⇓ h' ⊎ h_f h' Q ]); ; {P} o C(e) {Q}⇔; (∀ h, h_f. h P ⇒[ h ⊎ h_f, o C(e); ∃ h'. h ⊎ h_f, o C(e) ⇓ h' ⊎ h_f h' Q ]); ; {P} oC() {Q}⇔; (∀ h, h_f. h P ⇒[ h ⊎ h_f, oC(); ∃ h'. h ⊎ h_f, oC() ⇓ h' ⊎ h_f h' Q ]) ]
Soundness of HContext
If {P} e {Q} and ∀ v. {Q[v/]} K[v] {R} then
{P} K[e] {R}.
By induction on the structure of K.
The program is correct:
⊢program correct
[ ∀ h, h_f, P, Q. h P ⇒; (∀ e. {P} e {Q}; (∄ h', v. h⊎ h_f, e ⇓ h' ⊎ h_f, v h' Q[v/]) ⇒; h ⊎ h_f, e ); (∀ c. {P} c {Q} (∄ h'. h⊎ h_f, c ⇓ h' ⊎ h_f h' Q) ⇒; h ⊎ h_f, c ); (∀ o, C, e. {P} o C(e) {Q}; (∄ h'. h⊎ h_f, o C(e) ⇓ h' ⊎ h_f h' Q) ⇒; h ⊎ h_f, o C(e) ); (∀ o, C. {P} oC() {Q}; (∄ h'. h⊎ h_f, o C(e) ⇓ h' ⊎ h_f h' Q) ⇒; h ⊎ h_f, oC() ) ]
By mutual co-induction and, nested inside of it, induction on the derivation of the correctness judgment. We elaborate a few cases:
* Case HDynamicDispatch. Assume the following:
[ c = o m(v); oC; C ⋯{⋯ m(x) P_C Q_C ⋯}; P = oD∧ P_C[o/, D/θ, v/x]; Q = Q_C[o/, D/θ, v/x]; D ⋯{⋯ m(x) P_D Q_D {c_m}⋯} ]
By h P, we have dtype(o, D) ⊆ h and h P_C[o/, D/θ, v/x].
Let o' = o ↓ D. By the correctness of method m in class D, we have
[ {P_D[o'/, D/θ, v/x]}; c_m[o'/, v/x]; {Q_D[o'/, D/θ, v/x]} ]
By the fact that m in D correctly overrides m in C, we have
{P_D}_{Q_D}DC{P_C}_{Q_C}
It follows that
[ {P_C[o/, D/θ, v/x]}; c_m[o'/, v/x]; {Q_C[o/, D/θ, v/x]} ]
The relevant inference rule for divergence of dynamically dispatched method calls is as follows:
[ODynamicDispatchDiv3]
C ⋯{⋯ m(x) { c }⋯}
h, e ⇓ h', o
o' = o ↓ C
h', e⇓ h”, v
dtype(o, C) ⊆ h”
h”, c[o'/,v/x]
h, e m(e)
We apply this rule to the goal, which reduces the goal to h, c_m[o'/,v/x]. We now apply the coinduction hypothesis. We are now left with the job of proving that the body does not terminate, assuming that the call does not terminate. Instead, we prove that the call terminates, assuming that the body terminates. We conclude that proof by applying ODynamicDispatch.
* Case HConsContext. Assume a constructor argument list v e e. By the induction hypothesis corresponding to the first premise of HConsContext, we have that evaluation of e either terminates or diverges.
* Assume e terminates with a value v. By the induction hypothesis corresponding to the second premise of HConsContext, we have that o C(v v e) either terminates or diverges.
* Assume o C(v v e) terminates. This must be by an application of OConstruct. Therefore, it must be that e all terminate. It follows that o C(v e e) terminates.
* Assume o C(v v e) diverges. Given that e terminates, we can easily prove that o C(v e e) diverges.
* Assume e diverges. Then o C(v, e, e) diverges.
* Case HContext. We apply Lemma <ref> and use the induction hypotheses to discharge the resulting subgoals.[To see that this preserves productivity of the coinductive proof, notice that Lemma <ref> is size-preserving: given approximations up to depth d of the proof trees for the lemma's premises, the lemma produces a proof tree of depth at least d.]
|
http://arxiv.org/abs/2306.10298v1
|
20230617090224
|
On local dispersive and Strichartz estimates for the Grushin operator
|
[
"Sunit Ghosh",
"Shyam Swarup Mondal",
"Jitendriya Swain"
] |
math.AP
|
[
"math.AP",
"math.FA",
"43A80, 35R03, 35J10, 35Q40, 43A30"
] |
=20pt
amsplain
On local dispersive and Strichartz estimates associated with ...]
On local dispersive and Strichartz estimates associated with the Grushin operator
Let G=-Δ-|x|^2∂_t^2 denote the Grushin operator on ℝ^n+1. The aim of this paper is two fold. In the first part, due to the non-dispersive phenomena of the Grushin-Schrödinger equation on ℝ^n+1, we establish a local dispersive estimate by defining the Grushin-Schrödinger kernel on a suitable domain. As a corollary we obtain a local Strichartz estimate for the Grushin-Schrödinger equation. In the next part, we prove a restriction theorem with respect to the scaled Hermite-Fourier transform on ℝ^n+2 for certain surfaces in ℕ_0^n×ℝ^*×ℝ and derive anisotropic Strichartz estimates for the Grushin-Schrödinger equation and for the Grushin wave equation as well.
[
Limeng Qiao^1, ⋆,41 Yongchao Zheng^2, ⋆, † Peng Zhang^2, ⋆, † Wenjie Ding^1, ⋆
Xi Qiu^1,41 Xing Wei^2 Chi Zhang^1
^1Mach Drive ^2Xi'an Jiaotong University
{limeng.qiao, wenjie.ding, xi.qiu, chi.zhang}@mach-drive.com
{zyc573823770, zp5070}@stu.xjtu.edu.cn [email protected]
July 31, 2023
========================================================================================================================================================================================================================================================================================================================
ℂ ℚ
ℝ 𝕀
ℤ 𝔻
ℙ 𝔹
𝕊 ℍ
𝔼
ℕ
W(L^p(^d, ), L^q_v)
W_(L^p, L^q_v)
W_(L^p', L^q'_1/v)
Ł1W_(L^∞, L^1_w)
L^p(Q_1/ β, )
S^p,q_ṽ()
f
h
h'
m
g
γ
§ INTRODUCTION
Consider the free Schrödinger equation on ℝ^n:
i ∂_s u(x,s)-Δ u(x,s) = 0, x ∈ℝ^n, s ∈ℝ∖{0},
u(x,0) = f(x).
It is well known that e^-i s Δ f is the unique solution to the IVP (<ref>) and can be written as
u(·,s)=e^i |·|^2/4 s/(4 π i s)^n/2 * f .
An application of Young's inequality in (<ref>) gives the following dispersive estimate:
∀ s 0, u(·,s)_L^∞(ℝ^n)≤1/(4 π|s|)^n/2f_L^1(ℝ^n).
Such dispersive estimate is crucial in the study of semilinear and quasilinear equations which has wide applications in physical systems (see <cit.> and the references therein). The dispersive estimate (<ref>) yields the following remarkable estimate for the solution of (<ref>) by Strichartz <cit.> in connection with Fourier restriction theory:
u_L^q(ℝ, L^p(ℝ^n))≤ C(p, q)f_L^2(ℝ^n),
where (p, q) satisfies the following scaling admissibility condition
2/q+n/p=n/2
with p, q ≥ 2 and (n, q, p) ≠(2,2, ∞). We refer to <cit.> for further study on Strichartz estimates and its connection with dispersive estimates.
In this work we aim at investigating such phenomenon associated with the Grushin operator G on ℝ^n+1 defined by
G=-Δ-|x|^2∂_t^2, (x, t)∈ℝ^n×ℝ,
where |x|=√(x_1^2+⋯+x_n^2).
The studies of the Grushin operator date back to Baouendi and Grushin <cit.>. Since then several authors studied the operator extensively in different contexts, involving classification of solutions to an elliptic equations, free boundary problems in partial differential equations, well-posedness problems in Sobolev spaces etc. <cit.>. Even though numerous studies in the direction of PDEs associated with the Grushin operator are currently available, to the best of our knowledge, the study on dispersive and Strichartz estimates for the Schrödinger operator associated with the Grushin operator has not been addressed in the literature so far.
Consider the following free Grushin-Schrödinger equation:
i ∂_s u(x,t,s) - G u(x ,t,s) = 0, s ∈ℝ , (x,t)∈ℝ^n+1,
u(x,t,0) = f(x,t).
For f in L^2(ℝ^n+1), u(x, t, s)=e^-isGf(x, t) is the unique global in time solution to the above IVP (<ref>).
Unlike the Euclidean case, the solution to the IVP (<ref>) satisfies the following.
There exists a function f∈𝒮(ℝ^n+1), the set of all Schwartz class functions on ℝ^n+1, such that the solution to the IVP (<ref>) with initial data f satisfies
∀ s ∈ℝ, ∀ (x,t) ∈ℝ^n+1, u(x,t,s) = f(x,t + s n).
In <cit.>, the authors illustrate the above result for n=1. Notice that u(·,s)_p = f_p for all 1 ≤ p ≤∞, which shows that one cannot expect for a global dispersive estimate of the type (<ref>).
Due to loss of dispersion, the Euclidean strategy of finding Strichartz estimates fails, and the problem of obtaining Strichartz estimates is considerably difficult. For instance, we refer to similar situations for compact Riemannian manifolds <cit.>, for the Heisenberg group <cit.>, for the hyperbolic space <cit.> and for the nilpotent Lie groups <cit.>.
In particular, Bahouri, Gérard, and Xu <cit.> emphasized that the Schrödinger operator on the Heisenberg group ℍ^d has no dispersion at all. Using the integral representation of the Schrödinger kernel on horizontal Heisenberg strips, the authors obtained local dispersive estimates and as a by-product they establish local version of Strichartz estimate in <cit.>. Furthermore, following the general strategy of Fourier restriction methods (see <cit.>) and using Fourier analysis tools on the Heisenberg group ℍ^n, Bahouri et. al. obtained an anisotropic Strichartz estimates for the solutions to the linear Schrödinger equation as well as the wave equation, for the radial initial data, on the Heisenberg group ℍ^d involving the sublaplacian in <cit.>.
Since, Grushin operator is closely related with the sublaplacian on the Heisenberg group <cit.>, it is natural to investigate the local dispersive estimate and local Strichartz estimate for the IVP (<ref>). Our main results are as follows:
Viewing e^-i s G as an integral operator in the sense of distributions, the solution to (<ref>) does not have the formulation of type (<ref>) in terms of the Grushin-Schrödinger kernel ℋ_s. However, we compute the Grushin-Schrödinger kernel locally on a strip in the following theorem:
The kernel associated with the free Grushin-Schrödinger equation (<ref>) on the horizontal strip {(x,t;y,t_1) : |t-t_1| < n |s|} for s 0 is given by
ℋ_s(x,t;y,t_1) =1/(2 π s )^n/2+1∫_ℝ e^- λ (t-t_1)/s(|λ|/sinh (2 |λ|))^n/2 e^i|λ|/2 s (x^2+y^2) coth(2 |λ|) e^- i |λ| x· y/ s·sinh (2 |λ | ) d λ.
Using the above integral representation of the Schrödinger kernel on horizontal strips, we establish the following local dispersive estimate.
Given w_0 ∈ℝ^n+1, let f be supported in a ball B(w_0,R_0) with center at w_0 and radius R_0. Then, for any positive constant k < n and for all 2 ≤ p ≤∞, the solution to the IVP (<ref>) associated to the initial data f satisfies the following local dispersive estimate:
u(·,s)_L^p(B(w_0,1/2 k |s| ))≤(M/|s|^n/2+1)^1-2/pf_L^p'(ℝ^n+1),
for all |s| ≥2 R_0/n-k, with M = 1/(2 π )^n/2+1∫_ℝ(|λ|/sinh (2 |λ|))^n/2 e^n|λ|/2 dλ
and 1/p + 1/p' = 1.
Using the local dispersive estimate (<ref>), we only obtain the local solution of (<ref>) and establish the following local Strichartz estimate.
Given k < n, if (p, q) lies in the admissible set
A_0 = {(p,q) : 2 ≤ p ≤∞ and 1/p + 1/q = 1/2},
then for all f ∈ L^2(ℝ^n+1) supported in the ball B(w_0,R_0), with some w_0 ∈ℝ^n+1, the solution to the IVP (<ref>) associated to the initial data f satisfies the following local Strichartz estimate:
u_L^q((-∞,-C_k R_0])∪ [C_kR_0,∞);L^p(B(w_0,1/2 k s ))≤ C(q,k) f_L^2(ℝ^n+1),
where C_k = 2/n-k.
The remaining part of the paper aims to establish anisotropic Strichartz estimate for the Grushin-Schrödinger equation (<ref>) and for the Grushin wave equation:
∂_s^2 u (x,t,s) + Gu(x,t,s) = 0 s ∈ℝ , (x,t)∈ℝ^n+1,
u (x,t,0) = f(x,t), ∂_su(x,t,0) = g(x,t).
In order to achieve this goal we first obtain the restriction theorem analogue to Bahouri et. al. <cit.> (see also Müller <cit.> ) for the scaled Hermite-Fourier transform (defined below) restricted to certain surfaces in ℕ_0^n ×ℝ^* ×ℝ. At this stage, we refer to Liu and Song <cit.> for a restriction theorem associated with the Grushin operator.
We consider the mixed Lebesgue spaces L^r_t(ℝ;L^q_s(ℝ;L^p_x(ℝ^n))), for 1 ≤ p,q ,r ≤∞ with the mixed norm
f_L_t^r L_s^q L_x^p = (∫_ℝ(∫_ℝ(∫_ℝ^n+1 |f(x,t,s)|^p dx)^q/pds)^r/qdt)^1/r.
For f ∈𝒮(ℝ^n+2), the set of all Schwartz class functions on ℝ^n+2
, let
f^λ,ν(x)=∫_ℝ∫_ℝ f(x, t, s) e^i λ te^i ν s d t ds
stand for the inverse Fourier transform of f(x, t, s) in the (t, s) variable. We define the scaled Hermite-Fourier transform of f on ℝ^n+2 as
f̂(α,λ,ν)=∫_ℝ^n∫_ℝ∫_ℝ e^iλ t e^ i ν sf(x,t,s)Φ_α^λ(x) ds dt dx = ⟨ f^λ,ν,Φ_α^λ⟩,
for any (α,λ,ν) ∈ℕ_0^n ×ℝ^* ×ℝ. Given a surface S in ℕ_0^n ×ℝ^* ×ℝ endowed with an induced measure d σ, we define the restriction operator ℛ_S: L^2(ℝ^n+2)→ L^2(S,dσ) defined by
ℛ_S f = f̂|_S,
on the surface S
and the operator dual to ℛ_S (called the extension operator) as
ℰ_S ()(x,t,s) = 1/(2 π)^2∫_S e^-i ν s e^-i λ t (α,λ,ν)Φ_α^λ (x) dσ,
∈ L^2(S,dσ).
Taking the surface S = {(α,λ,ν) ∈ℕ_0^n ×ℝ^* ×ℝ : ν = (2 |α| + n )|λ| } with a localized induced measure dσ_loc (defined in Section <ref> precisely), we obtain the following restriction theorem for Scaled Hermite-Fourier transform.
If 1 ≤ q ≤ p < 2, then
ℛ_S_loc f_L^2(S,dσ_loc)≤ C(p,q) f_L_t^1L^q_sL^p_x,
for all functions f ∈𝒮(ℝ^n+2).
By duality, Theorem <ref> can be reframed as follows: for any 2 < p' ≤ q' ≤∞,
ℰ_S_loc()_L_t^∞ L^q'_s L^p'_x≤ C(p,q)_L^2(S,dσ_loc)
holds for all ∈ L^2(S,dσ_loc).
Now, realizing the solution of (<ref>) as the extension operator ℰ_loc acting on a suitable function on S, using (<ref>) and the density of frequency localized functions, we prove the following anisotropic Strichartz estimates for the solutions to the free Grushin-Schrödinger equation.
Let f∈ L^2(ℝ^n+1). If (p, q) lies in the admissible set
A = {(p,q) : 2 < p ≤ q ≤∞ 2/q + n/p = n+2/2},
then the solution u(x,t,s) = e^- i s G f(x,t) of the IVP (<ref>) is in L^∞_t(ℝ;L^q_s(ℝ;L^p_x(ℝ^n))) and satisfies the estimate:
e^-i s G f(x,t)_L_t^∞ L_s^q L_x^p≤ Cf_L^2(ℝ^n+1).
Arguing as in the Theorem <ref>, for the surface S_0 (defined in Remark <ref>) and making use of scaled Hermite-Fourier restriction theorem for the surface S_0, we prove the following anisotropic Strichartz estimate for the solution to the free Grushin wave equation.
Let f ∈ L^2(ℝ^n+1) and G^-1/2g ∈ L^2(ℝ^n+1). If (p, q) lies in the admissible set
A_w = {(p,q) : 2 < p ≤ q ≤∞ 1/q + n/p = n+2/2},
then the solution u(x,t,s) of the IVP (<ref>) is in L^∞_t(ℝ;L^q_s(ℝ;L^p_x(ℝ^n))) and satisfies the estimate:
u(x,t,s)_L_t^∞ L_s^q L_x^p≤ C(f_L^2(ℝ^n+1) + G^-1/2g_L^2(ℝ^n+1)).
We prove anisotropic Strichartz estimates for the inhomogeneous Grushin-Schrödinger equation in Theorem <ref> and for the inhomogeneous Grushin wave equation in Theorem <ref> as an application of Theorem <ref> and Theorem <ref>. Further, we discuss the validity of the restriction Theorem <ref> for p = 2, 1 ≤ q ≤2 in Proposition <ref> and Proposition <ref>. We also investigate the validity of the Theorems <ref>, <ref>, <ref>, <ref> for p = 2, 2 ≤ q ≤∞.
We use the following notation through out this article:
* ∑_± f(±·) = f(+ ·) + f(- ·) and ∑_± f(∓·) = f(- ·) + f(+ ·).
* ℱ_λ→ s (f(λ)) = 1/2 π∫_ℝ e^-i s λ f(λ) dλ.
This paper is organized as follows: We discuss about the spectral theory of the Grushin operator, properties of scaled Hermite Fourier transform on ℝ^n+1, the Grushin heat kernel and the Grushin-Schrödinger kernel in Section <ref>. In section <ref> we compute the Grushin-Schrödinger kernel on certain horizontal strips and obtain the local dispersive and local Strichartz estimate for the free Schrödinger equation. In Section <ref> we prove the restriction theorem for the scaled Hermite Fourier transform and derive the anisotropic Strichartz estimates for the solutions to IVP (<ref>) and IVP (<ref>) in Section <ref>. In Section <ref> we obtain anisotropic Strichartz estimates for the inhomogeneous Grushin-Schrödinger equation and inhomogeneous Grushin wave equation. Finally we conclude with discussing the validity of anisotropic Strichartz estimates obtained in Section <ref> and <ref> for p = 2, 2 ≤ q ≤∞ in Section <ref>.
§ THE GRUSHIN OPERATOR AND THE GRUSHIN-SCHRÖDINGER KERNEL
In this section we discus the spectral theory for the Grushin operator and the Fourier analysis tools associated with the Grushin operator. We make use of Mehler's formula to write the integral representation of the Grushin heat kernel. We also define the Grushin-Schrödinger kernel in the sense of distributions.
§.§ Spectral theory for the (scaled) Hermite and the Grushin operator:
Let H_k denote the Hermite polynomial on ℝ, defined by
H_k(x)=(-1)^k d^k/dx^k(e^-x^2 )e^x^2, k=0, 1, 2, ⋯ ,
and h_k denote the normalized Hermite functions on ℝ defined by
h_k(x)=(2^k√(π) k!)^-1/2 H_k(x)e^-1/2x^2, k=0, 1, 2, ⋯.
The higher dimensional Hermite functions denoted by Φ_α are then obtained by taking tensor product of one dimensional Hermite functions. Thus for any multi-index α∈ℕ_0^n and x ∈ℝ^n, we define
Φ_α(x)=∏_j=1^nh_α_j(x_j). For λ∈ℝ^* = ℝ∖{0}, the scaled Hermite functions are defined by Φ^λ_α(x) = |λ|^n/4 Φ_α(√(|λ|)x), they are the eigenfunctions of the (scaled) Hermite operator H(λ)=-Δ +λ^2 |x|^2 with eigenvalues (2|α|+n)|λ|, where |α |=∑_j=1^nα_j, α∈ℕ_0^n. For each λ∈ℝ^*, the family {Φ_α^λ : α∈ℕ_0^n} is then an orthonormal basis for L^2(ℝ^n). For each k ∈ℕ, let P_k(λ) stand for the
orthogonal projection of L^2(ℝ^n) onto the eigenspace of H(λ) spanned by {Φ_α^λ :|α|=k}. More precisely, for f ∈ L^2(ℝ^n)
P_k(λ) f = ∑_|α| = k⟨ f, Φ_α^λ⟩Φ_α^λ,
where ⟨·,·⟩ denote the standard inner product in L^2(ℝ^n).
Then the spectral decomposition of H(λ) is explicitly given as
H(λ) f=∑_k=0^∞(2 k+n)|λ| P_k(λ) f.
Note that
P_k(λ) f (x)= P_k(1)(f∘ d_|λ|^-1/2)∘ d_|λ|^1/2(x),
where the dilations d_r on ℝ^n is defined by d_r(x) = rx for r > 0.
For a Schwartz function f on ℝ^n+1, let f^λ(x)=∫_ℝ f(x, t) e^i λ t d t
denotes the inverse Fourier transform of f(x, t) in the t variable. Then it follows that
G f(x, t)=1/2π∫_ℝ e^-i λ t H(λ) f^λ(x) d λ,
where the operator
H(λ ) = -Δ+ λ^2|x|^2, for λ≠ 0, is called the (scaled) Hermite operator on ℝ^n.
It is well known that the Grushin operator belongs to the wide class of subelliptic operators <cit.>. Moreover, it is positive, self-adjoint, and hypoelliptic operator.
Applying the operator G to the Fourier expansion f(x, t)=1/2π∫_ℝ e^-i λ t f^λ(x) d λ, we see that
G f(x, t)=1/2π∫_ℝ e^-i λ t H(λ) f^λ(x) d λ.
Using (<ref>), the spectral decomposition of the Grushin operator is given by
G f(x, t)=1/2 π∫_ℝ e^-i λ t(∑_k=0^∞(2 k+n)|λ| P_k(λ) f^λ(x)) d λ.
§.§ The scaled Hermite-Fourier transform on ℝ^n+1:
For a reasonable function f the scaled Fourier-Hermite transform is defined by
f̂(α,λ)=∫_ℝ^n∫_ℝ e^iλ tf(x,t)Φ_α^λ(x) dt dx = ⟨ f^λ,Φ_α^λ⟩, (α,λ) ∈ℕ_0^n ×ℝ^*.
If f ∈ L^2(ℝ^n+1) then f̂∈ L^2(ℕ^n_0×ℝ^*) and satisfies the Plancherel formula
f_L^2(ℝ^n+1) = 1/2 πf̂_L^2(ℕ_0^n×ℝ^*).
The inversion formulae is given by
f(x,t) = 1/2 π∫_ℝ e^-i λ t∑_α∈ℕ^n_0f̂(α,λ)Φ_α^λ (x )d λ.
It f ∈ L^1(ℝ^n+1), it can be seen that
(f∘δ_r)(α,λ) = r^-(n/2 + 2)f̂(α,r^-2λ) ,
where the anisotropic dilation on ℝ^n+1 is defined by δ_r(x,t) = (r x,r^2t) for r > 0.
Replacing f by Gf in (<ref>) and comparing (<ref>) with (<ref>), we get
(Gf)(α,λ) = (2|α| + n) |λ| f̂(α,λ), (α,λ) ∈ℕ_0^n ×ℝ^*.
For f ∈𝒮(ℝ^n+1), using (<ref>) and the fact that |Φ_α^λ(x) | ≤ |λ|^n/4, x ∈ℝ^n+1, for all N ≥ 0 we get
|(1 + (2|α| + n) |λ|)^N f̂(α,λ)| ≤ |λ|^n/4(1 + G)^Nf_L^1(ℝ^n+1),
for all (α,λ) ∈ℕ_0^n ×ℝ^*, and hence we deduce that
|(1 + (2|α| + n) |λ|)^N f̂(α,λ)| ≤ |λ|^n/4 C_N,f
for all N ≥ 0 and for all (α,λ) ∈ℕ_0^n ×ℝ^*.
§.§ The heat kernel and the Schrödinger kernel for the Grushin operator
Consider the free heat equation associated with the Grushin operator G:
∂_s u(x,t,s) + G u( x,t,s) = 0, s ≥ 0 , (x,t)∈ℝ^n+1,
u(x,t,0) = f(x,t),
for an integrable function f on ℝ^n+1. It is easy to check that e^-sGf is the unique solution to the IVP (<ref>). Using functional calculus for G, the solution u(x,t,s) can be written as
u(x,t,s) = e^-sG f (x,t)
= 1/2π∫_ℝ e^-i λ t∑_α∈ℕ^n e^-s(2|α|+n)|λ|f̂(α,λ)Φ_α^λ(x) dλ
= ∫_ℝ^n∫_ℝ(1/2 π∫_ℝ e^-iλ(t-t_1)∑_α∈ℕ^n e^-s(2|α|+n)|λ|Φ_α^λ(x)Φ_α^λ(y) dλ) f(y,t_1) dy dt_1
= ∫_ℝ^n∫_ℝ K_s(x,t;y,t_1) f(y,t_1) dy dt_1,
where K_s(x,t;y,t_1) is the Grushin heat kernel given by
K_s(x,t;y,t_1) = 1/2 π∫_ℝ e^-iλ(t-t_1)∑_α∈ℕ^n e^-s(2|α|+n)|λ|Φ_α^λ(x)Φ_α^λ(y) dλ.
In view of Mehler's formula <cit.>, the series in (<ref>) can be further simplified and we write
K_s (x,t; y, t_1)=1/(2π)^n/2+1∫_ℝ e^-i λ (t-t_1)(|λ|/sinh (2 s|λ|))^n/2 e^-|λ|/2(x^2+y^2) coth(2s|λ|) e^|λ| x· y/sinh (2s |λ | ) d λ.
Performing the change of variable sλ↦λ, the heat kernel can also be written as,
K_s (x,t; y, t_1)=1/(2 π s )^n/2+1∫_ℝ e^-i λ (t-t_1)/s(|λ|/sinh (2 |λ|))^n/2 e^-|λ|/2 s (x^2+y^2) coth(2 |λ|) e^|λ| x· y/ s·sinh (2 |λ | ) d λ.
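For the reader's convenience, we recall the classical one-dimensional Mehler formula on which this simplification rests: for 0 < r < 1,
∑_k=0^∞ h_k(x) h_k(y) r^k = (π (1-r^2))^-1/2 e^-(1+r^2)(x^2+y^2)/(2(1-r^2)) e^2rxy/(1-r^2);
taking tensor products, choosing r = e^-2s|λ| and rescaling by √(|λ|) as in the definition of Φ_α^λ leads, up to the normalization of constants, to the closed form appearing in the integrand above.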
As in the case of the heat equation for the Grushin operator, using functional calculus for G the unique solution of the IVP (<ref>) is given by
u(x,t,s) = e^-i s G f (x,t)
= 1/2π∫_ℝ^* e^-i λ t∑_α∈ℕ^n e^-i s (2|α|+n)|λ|f̂(α,λ)Φ_α^λ(x) dλ.
and is interpreted in the sense of tempered distributions as
∫_ℝ^n+1 u(x,t,s)φ(x,t) dx dt = ∫_ℝ^n∫_ℝ(1/2 π∫_ℝ^* e^iλ t_1∑_α∈ℕ^n e^-i s (2|α|+n)|λ|φ̂(α,-λ) Φ_α^λ(y) dλ) f(y,t_1) dy dt_1
= ∫_ℝ^n∫_ℝ(∫_ℝ^n+1ℋ_s(x,t;y,t_1)φ(x,t) dx dt ) f(y,t_1) dy dt_1,
for φ∈𝒮(ℝ^n+1), where ℋ_s(x,t;y,t_1) is the Grushin-Schrödinger kernel given by
∫_ℝ^n+1ℋ_s(x,t;y,t_1)φ(x,t) dx dt =1/2 π∫_ℝ^* e^iλ t_1∑_α∈ℕ^n e^-i s (2|α|+n)|λ|φ̂(α,-λ) Φ_α^λ(y) dλ.
We proceed to prove Proposition <ref>.
Proof of Proposition <ref> :
Fix a function Q ∈ C_c^∞((1,∞)) and consider
f(x,t) = 1/2 π∫_1^∞ e^-i λ tΦ_0^λ(x) Q(λ) dλ.
Thus f∈𝒮(ℝ^n+1) and
comparing (<ref>) with the inversion formula (<ref>) we have
f̂(α,λ) = 0 for α≠ 0, λ∈ℝ^*, and f̂(0,λ) = Q(λ) for λ∈ℝ^*.
By (<ref>), the solution of the IVP (<ref>) can be written as
u(x,t,s) = e^-isGf(x,t) = 1/2 π∫_1^∞ e^-i λ (t + ns)Φ_0^λ(x) Q(λ) dλ = f(x,t + ns).
□
§ LOCAL DISPERSIVE AND LOCAL STRICHARTZ ESTIMATE
In order to establish the local dispersive and local Strichartz estimate we need to compute the Grushin-Schrödinger kernel on strips.
Observe that z ↦ H_1(z) = 1/(2 π)^n∫_ℝ^n e^i x·ξ e^- z |ξ|^2 dξ and z ↦ H_2(z) = 1/(4 π z)^n/2 e^- |x|^2/4 z are holomorphic functions on the right half-plane. Since H_1(z) = H_2(z) when z ∈ℝ and Re (z) > 0, we have H_1(z) = H_2(z) on the whole right half-plane. Let α be a non-zero purely imaginary number and let (w_n) be a sequence in the right half-plane converging to α. Then an application of the dominated convergence theorem ensures that H_1(w_n) and H_2(w_n) converge in the sense of distributions. Based on this observation the Schrödinger kernel can be defined in the Euclidean case. We use a similar idea to compute the Grushin-Schrödinger kernel on the horizontal strip defined in Theorem <ref>.
Fix 0 < C < n. The maps
z ↦ K^1_z(x,t;y,t_1) and z ↦ K^2_z(x,t;y,t_1), defined by
K^1_z(x,t;y,t_1) = 1/2 π∫_ℝ^* e^-iλ(t-t_1)∑_α∈ℕ^n e^-z (2|α|+n)|λ|Φ_α^λ(x)Φ_α^λ(y) dλ,
K^2_z(x,t;y,t_1) = 1/(2 π z )^n/2+1∫_ℝ^* e^-i λ (t-t_1)/z(|λ|/sinh (2 |λ|))^n/2 e^-|λ|/2 z (x^2+y^2) coth(2 |λ|) e^|λ| x· y/ z·sinh (2 |λ | ) d λ,
for all (x,t), (y,t_1) ∈ℝ^n+1, are holomorphic on D = {z ∈ℂ : Re(z) > 0} and D̃_|t-t_1| = {z ∈ℂ : |z| > |t-t_1|/C, Re(z) > 0 }, respectively. Moreover, K^1_z = K^2_z on the whole domain D̃_|t-t_1|.
Interchanging summation and integration in (<ref>) and performing the change of variable (2|α| + n)λ↦λ in each integral, we get
K^1_z(x,t;y,t_1) = 1/2 π ∑_α∈ℕ^n1/(2|α| + n)^n/2+1
×∫_ℝ e^-iλ(t-t_1)/2 |α| + n e^-z |λ|Φ_α(√(|λ|/2 |α| + n) x ) Φ_α(√(|λ|/2 |α| + n) y) |λ|^n/2dλ,
where each term of the above summation is an entire function. Since |Φ_α(x)| ≤ 1 uniformly, we obtain
∫_ℝ|e^-iλ(t-t_1)/2 |α| + n e^-z |λ|Φ_α(√(|λ|/2 |α| + n) x ) Φ_α(√(|λ|/2 |α| + n) y) | |λ|^n/2dλ≤ C ∫_ℝ e^-a|λ| |λ|^n/2 dλ < ∞
and
∫_ℝ|∂_z( e^-iλ(t-t_1)/2 |α| + n e^-z |λ|Φ_α(√(|λ|/2 |α| + n) x ) Φ_α(√(|λ|/2 |α| + n) y) ) | |λ|^n/2+1dλ
≤ C ∫_ℝ e^-a|λ| |λ|^n/2+1 dλ < ∞,
for all z ∈ℂ with Re(z) ≥ a > 0. By the Lebesgue derivation theorem, the map z ↦ K^1_z is holomorphic on the domain D.
Again (<ref>) can be re-written as
K^2_z(x,t;y,t_1) = 1/(2 π z )^n/2+1∫_ℝ(|λ|/sinh (2 |λ|))^n/2 e^-i λ (t-t_1)/z e^-|λ|X(x,y,λ)/2 z d λ,
where X(x,y,λ) = ((x^2+y^2) cosh(2 |λ|) - 2 x· y)/sinh (2|λ|) ≥ 0 for all λ∈ℝ and x,y ∈ℝ^n.
Setting z = |z|e^i arg(z), we have
| e^-|λ|X(x,y,λ)/(2 z) | = e^-|λ|X(x,y,λ)cos(arg z)/(2 |z|)
|∂_z e^-|λ|X(x,y,λ)/(2 z) | = (|λ| X(x,y,λ)/(2 |z|^2)) e^-|λ|X(x,y,λ)cos(arg z)/(2 |z|).
Also one can easily check that
|e^-i λ (t-t_1)/z| = e^- λ (t-t_1) sin(arg z)/|z|
|∂_z e^-i λ (t-t_1)/z| = (|t-t_1||λ|/|z|^2) e^- λ (t-t_1) sin(arg z)/|z|.
Hence for all λ∈ℝ^*, (x,t),(y,t_1) ∈ℝ^n+1 and all z ∈ℂ with Re (z) ≥ a >0,
|e^-i λ (t-t_1)/z-|λ|X(x,y,λ)/(2 z) | ≤ e^|λ| |t-t_1|/|z| and |∂_z ( e^-i λ (t-t_1)/z-|λ|X(x,y,λ)/(2 z) )| ≤ e^|λ| |t-t_1|/|z|( |λ||t-t_1|/|z|^2 + 1/a).
Taking 0 < C < n and combining formula (<ref>) together with the Lebesgue derivation theorem, we deduce that the map z ↦ K^2_z is holomorphic on
D̃_|t-t_1|.
By (<ref>) and (<ref>), the maps K^1_z and K^2_z coincide on the intersection of the real line with D̃_|t-t_1|; hence, by analytic continuation, K^1_z = K^2_z on D̃_|t-t_1|.
With this information, we prove Theorem <ref>, using the notation of Proposition <ref>.
Proof of Theorem <ref>:
Choose a sequence (z_p)_p ∈ℕ of elements in D̃_|t-t_1| which converges to i s, for some s ∈ℝ^*. For f ∈𝒮(ℝ^n+1),
∫_ℝ^n∫_ℝ K^1_z_p(x,t;y,t_1) f(y,t_1) dy dt_1 = 1/2π∑_α∈ℕ_0^n∫_ℝ e^-i λ t e^-z_p (2|α|+n)|λ|f̂(α,λ)Φ_α^λ(x) dλ.
From (<ref>), we have
|f̂(α,λ)| ≤ C_N,f |λ|^n/4 (1 + (2|α| + n)|λ|)^-N, for N ≥ n + 2.
Therefore,
∑_α∈ℕ_0^n∫_ℝ|e^-i λ t e^-z_p (2|α|+n)|λ|f̂(α,λ) Φ_α^λ(x)| dλ
≤ C_N,f ∑_α∈ℕ_0^n1/(2|α|+ n)^n/2 + 1∫_ℝ (1 + |λ|)^-N |λ|^n/2 dλ < ∞,
where the last expression is obtained by performing the change of variables (2|α| + n)λ↦λ in each integral. Applying the Lebesgue dominated convergence theorem, we get
lim_p →∞∫_ℝ^n∫_ℝ K^1_z_p(x,t;y,t_1) f(y,t_1) dy dt_1
= e^-isGf(x,t).
Using the fact that |z_p| > |t-t_1|/C with 0 < C < n, together with (<ref>), we get
∫_ℝ| (|λ|/sinh (2 |λ|))^n/2 e^-i λ (t-t_1)/z_p e^-|λ|X(x,y,λ)/2 z_p | d λ≤∫_ℝ(|λ|/sinh (2 |λ|))^n/2 e^C|λ| d λ < ∞.
Applying Lebesgue dominated convergence theorem, we deduce that
lim_p→∞ K^2_z_p(x,t;y,t_1) = 1/(2 π s )^n/2+1∫_ℝ e^- λ (t-t_1)/s(|λ|/sinh (2 |λ|))^n/2 e^i|λ|/2 s (x^2+y^2) coth(2 |λ|) e^- i |λ| x· y/ s·sinh (2 |λ | ) d λ
for all (x,t),(y,t_1) ∈ℝ^n+1 satisfying |t-t_1| < n |s|.
The proof of Theorem <ref> follows from (<ref>), (<ref>) and the fact that K^1_z_p = K^2_z_p on D̃_|t-t_1|.
□
Proof of Theorem <ref>: Since the linear Schrödinger equation on ℝ^n+1 is invariant under translations, it suffices to prove the result for w_0 = 0. Let f be supported in B(0,R_0); the solution to the IVP (<ref>) with initial data f is given by
u(x,t,s) = ∫_ℝ^n∫_ℝℋ_s(x,t;y,t_1) f(y,t_1) dy dt_1
on any ball B(0,1/2 k |s| ) with 0 < k < n, where
ℋ_s(x,t;y,t_1) =1/(2 π s )^n/2+1∫_ℝ e^- λ (t-t_1)/s(|λ|/sinh (2 |λ|))^n/2 e^i|λ|/2 s (x^2+y^2) coth(2 |λ|) e^- i |λ| x· y/ s·sinh (2 |λ | ) d λ:
Since for any (x,t) ∈ B(0,1/2 k |s| ) and any (y,t_1) ∈ B(0,R_0), we have
|t-t_1| < 1/2 k |s| + R_0 < 1/2 n |s| < n |s|
provided that |s| > 2 R_0/(n-k).
Note that
ℋ_s_L^∞(B(0,1/2 k |s| )× B(0,R_0))≤1/(2 π |s| )^n/2+1∫_ℝ(|λ|/sinh (2 |λ|))^n/2 e^n|λ|/2 dλ := M/|s| ^n/2+1,
for |s| > 2 R_0/(n-k). Using (<ref>), we obtain the following L^1 - L^∞ estimate,
u(·,s)_L^∞ (B(w_0,1/2 k |s| ))≤M/|s|^n/2+1f_L^1(ℝ^n+1).
Furthermore, using the unitarity of e^-isG, we have
u(·,s)_L^2 (B(w_0,1/2 k |s| ))≤u(·,s)_L^2(ℝ^n+1) = f_L^2(ℝ^n+1).
Interpolating (<ref>) and (<ref>), for all 2 ≤ p ≤∞ and for all |s| > 2 R_0/(n-k), we get (<ref>).
□
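For completeness, the interpolation step can be spelled out as follows: applying the Riesz-Thorin theorem to the map f ↦ u(·,s) between the L^1 - L^∞ bound and the L^2 bound above gives, for 2 ≤ p ≤∞ and 1/p + 1/p' = 1,
u(·,s)_L^p (B(w_0,1/2 k |s| ))≤ (M/|s|^n/2+1)^1-2/pf_L^p'(ℝ^n+1) = M^1-2/p |s|^-(n+2)(1/2-1/p)f_L^p'(ℝ^n+1),
since (n/2+1)(1-2/p) = (n+2)(1/2-1/p).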
As a by-product of (<ref>), we obtain the local Strichartz estimate for the IVP (<ref>) under a certain admissibility condition on the pair (p,q).
Proof of Theorem <ref> :
Since f is supported in B(w_0,R_0), applying the Hölder inequality, we obtain f_L^p'(ℝ^n+1)≤ R_0^(n+1)(1/2-1/p)f_L^2(ℝ^n+1) and hence for all 2 ≤ p ≤∞, (<ref>) becomes
u(·,s)_L^p(B(w_0,1/2 k s ))≤ C(k) R_0^(n+1)(1/2-1/p)/|s|^(n+2)(1/2-1/p)f_L^2(ℝ^n+1),
for all |s| ≥ C_k R_0. Using the fact 1/p + 1/q = 1/2 we get (<ref>).
□
§ RESTRICTION THEOREM ON ℕ_0^n ×ℝ^*×ℝ
For a Schwartz class function f on ℝ^n+2, the scaled Hermite-Fourier transform of f on ℝ^n+2 is defined by
f̂(α,λ,ν)=∫_ℝ^n∫_ℝ∫_ℝ e^iλ t e^ i ν sf(x,t,s)Φ_α^λ(x) ds dt dx = ⟨ f^λ,ν,Φ_α^λ⟩,
for any (α,λ,ν) ∈ℕ_0^n ×ℝ^* ×ℝ.
If f ∈ L^2(ℝ^n+2) then f̂∈ L^2(ℕ^n_0×ℝ^*×ℝ) and satisfies the Plancherel formula
f_L^2(ℝ^n+2) = 1/(2 π)^2f̂_L^2(ℕ_0^n×ℝ^*×ℝ).
The inversion formula is given by
f(x,t,s) = 1/(2 π)^2∫_ℝ∫_ℝ e^-i ν s e^-i λ t∑_α∈ℕ^n_0f̂(α,λ,ν)Φ_α^λ (x )d λ dν.
§.§ A surface measure:
Let us consider the surface
S = {(α,λ,ν) ∈ℕ_0^n ×ℝ^* ×ℝ : ν = (2 |α| + n )|λ| }.
We endow S with the measure dσ induced by the projection π : ℕ_0^n ×ℝ^* ×ℝ→ℕ_0^n ×ℝ^* onto the first two factors, where ℕ_0^n ×ℝ^* is endowed with the measure dμ⊗ dλ, with dμ and dλ denoting the counting measure on ℕ^n_0 and the Lebesgue measure on ℝ^*, respectively. More explicitly, for any integrable function F on S we have
∫_S F dσ = ∑_α∈ℕ^n_0∫_ℝ^* F(α,λ,(2|α|+n)|λ|) dλ.
By construction it is clear that if Θ = f̂∘π|_S, where f̂ is a function on ℕ_0^n ×ℝ^* ×ℝ, then for all 1≤ p ≤∞
Θ_L^p(S,dσ) = f̂_L^p(ℕ_0^n ×ℝ^* ×ℝ).
In view of the Fourier restriction theorem for smooth compact surfaces in Euclidean space due to Tomas <cit.>, (<ref>) is well-defined for an appropriate function f and holds for compact subsets of S. Therefore, we consider the surface S endowed with the surface measure dσ_loc = ψ(ν) dσ defined by
∫_S F dσ_loc = ∑_α∈ℕ^n_0∫_ℝ^* F(α,λ,(2|α|+n)|λ|)ψ((2|α| + n)|λ|) dλ,
where ψ is any smooth, even, compactly supported function on ℝ with L^∞ norm at most 1.
The restriction operator ℛ_S_loc and the extension operator ℰ_S_loc with respect to the surface (S,dσ_loc) can be computed as ℛ_S_loc f = f̂|_S and
ℰ_S_loc (Θ)(x,t,s) = 1/(2 π)^2∑_α∈ℕ^n_0∫_ℝ^* e^-i (2|α|+n)|λ| s e^-i λ tΘ(α,λ,(2|α|+n)|λ|)Φ_α^λ (x ) dλ,
for a function Θ on S.
§.§ Restriction theorem:
We are now in a position to prove Theorem <ref>. Before proceeding to the proof we need the following observation:
Let ϕ∈𝒮(ℝ^n) and λ∈ℝ^*, then for all 1 ≤ p ≤ 2,
P_k(λ)ϕ_L^p'(ℝ^n)≤ C |λ|^n/2(1-2/p')(2k + n)^n-1/2(1-2/p')ϕ_L^p(ℝ^n),
where p' is the conjugate exponent of p, i.e., 1/p + 1/p' = 1.
Since the P_k(λ) are orthogonal projections on L^2(ℝ^n), we have
P_k(λ)ϕ_L^2(ℝ^n)≤ϕ_L^2(ℝ^n).
Using the relation (<ref>) and the L^1 - L^∞ estimate in the proof of Proposition 4.4.2 in <cit.>, we have
P_k(λ)ϕ_L^∞(ℝ^n)≤ |λ|^n/2 (2k + n)^n-1/2ϕ_L^1(ℝ^n).
This estimate can also be found in the proof of Proposition 1 in <cit.>. Thus, Lemma <ref> follows by interpolating (<ref>) and (<ref>).
Proof of Theorem <ref> : By a duality argument, it is enough to show the boundedness of the operator ℰ_S_loc from L^2(S,dσ_loc) to L^∞_t(ℝ;L^q'_s(ℝ;L^p'_x(ℝ^n))). Equivalently, we show that the operator ℰ_S_loc(ℰ_S_loc)^* is bounded from L^1_t(ℝ;L^q_s(ℝ;L^p_x(ℝ^n))) to L^∞_t(ℝ;L^q'_s(ℝ;L^p'_x(ℝ^n))), where 1/p + 1/p' = 1 and 1/q + 1/q' = 1.
Let f ∈𝒮(ℝ^n+2). From (<ref>) and (<ref>), we have
ℰ_S_loc (ℰ_S_loc)^* f(x,t,s)
= 1/(2 π)^2∑_α∈ℕ^n_0∫_ℝ^* e^-i (2|α| + n)|λ| s e^-i λ tf̂(α,λ, (2|α|+n)|λ|)Φ_α^λ (x ) ψ((2|α| + n)|λ|) dλ
= 1/(2 π)^2∑_α∈ℕ^n_01/2|α| + n∫_ℝ^* e^-i |λ| s e^- i λ t/2|α| + nf̂(α,λ/2|α| + n, |λ|)Φ_α^λ/2|α| + n (x ) ψ(|λ|) dλ,
where the last expression is obtained by performing the change of variables (2|α| + n)λ↦λ in each integral. Using (<ref>), (<ref>) and writing a_k = 1/(2k + n), we obtain
ℰ_S_loc (ℰ_S_loc)^*f(x,t,s) = 1/(2 π)^2∑_k = 0^∞1/2 k + n∑_±∫_0^∞ e^- i λ s e^∓ i a_k λ t P_k(a_k λ) f^± a_kλ, λ (x) ψ(λ) dλ
= C ∑_k = 0^∞∑_±1/2 k + nℱ_λ→ s( e^∓ i a_k λ t P_k(a_k λ) f^± a_kλ, λ (x) ψ_+(λ) ) ,
where ψ_+(λ) = ψ(λ) 1_{λ > 0}. For fixed t ∈ℝ, applying the Hausdorff-Young inequality on the right-hand side of (<ref>) with respect to the s-variable, we get
ℰ_S_loc (ℰ_S_loc)^*f _L^q'_s≤ C ∑_k = 0^∞∑_±1/2 k + nψ_+(λ) e^∓ i a_k λ t P_k(a_k λ) f^± a_kλ,λ (x) _L^q_λ.
Now, for any function g defined on ℝ^n+1 and for q' ≥ p' > 2, Minkowski's integral inequality gives
ℱ_λ→ s g_L_s^q' L^p'_x≤ℱ_λ→ s g_L^p'_xL_s^q'≤ C g_L^p'_xL_λ^q≤ C g_L_λ^q L^p'_x.
In view of (<ref>) and (<ref>), we deduce that
ℰ_S_loc (ℰ_S_loc)^*f _L^∞_t L^q'_s L^p'_x≤ C ∑_k = 0^∞∑_±1/2 k + nψ(λ) P_k(a_k λ) f^± a_kλ,λ (x) _L^q_λ L^p'_x.
But, by Lemma <ref>, we have
P_k(a_kλ)f^± a_kλ,λ_L^p'_x≤ C |a_kλ|^n/2(1-2/p')(2k + n)^n-1/2(1-2/p')ℱ_s → -λ f(·,·,s)_L^p_x L^1_t,
which implies that
ℰ_S_loc (ℰ_S_loc)^*f _L^∞_t L^q'_s L^p'_x ≤ C ∑_k = 0^∞1/(2 k + n)^1 + 1/2(1-2/p')ℱ_s → -λ f(·,·,s)_L^p_x L^1_tψ(λ) λ^n/2(1-2/p')_L^q_λ
≤ C ℱ_s → -λ f(·,·,s)_L^p_x L^1_tψ(λ) λ^n/2(1-2/p')_L^q_λ
≤ C ℱ_s → -λ f(·,·, s) _L^a_λ L^p_x L^1_tψ(λ) λ^n/2(1-2/p')_L^b_λ(ℝ),
where the last step is justified by an application of Hölder's inequality in (<ref>) with a ≥ 2, 1/a + 1/a' = 1 and 1/a + 1/b = 1/q.
Then, taking a' = q and applying the Hausdorff-Young inequality in λ- variable, we get
ℰ_S_loc (ℰ_S_loc)^*f _L^∞_t L^q'_s L^p'_x≤ C ψ(λ) λ^n/2(1-2/p')_L^b_λ(ℝ) f _L^q_s L^p_x L^1_t.
Thus, (<ref>) follows from (<ref>) by Minkowski's integral inequality for all 1 ≤ q ≤ p < 2.
□
We consider the surfaces
S_± = {(α,λ,ν) ∈ℕ^n_0×ℝ^*×ℝ : ν^2 = (2|α| + n) |λ| , ±ν >0 }
to obtain the Strichartz estimate for the wave equation (<ref>). The measure dσ_± on S_± induced by the projection π : ℕ_0^n ×ℝ^* ×ℝ→ℕ_0^n ×ℝ^* onto the first two factors is given by
∫_S_± F dσ_± = ∑_α∈ℕ^n_0∫_ℝ^* F(α,λ,±√((2|α|+n)|λ|)) dλ,
for any integrable function F on S_±.
Arguing as in the proof of Theorem <ref>, the restriction inequality (<ref>) can be obtained for the surface S_0 = S_+∪ S_- endowed with the corresponding localized measure.
§ ANISOTROPIC STRICHARTZ ESTIMATES
We consider the following class of functions :
A function f ∈𝒮(ℝ^n+1) is said to be frequency localized in a ball B(0,R), centered at 0 of radius R, if there exists a smooth, even function ψ supported in (-1,1) and equal to 1 near 0 such that
f = ψ(- R^-2 G ) g,
for some g ∈𝒮(ℝ^n+1), which is equivalent to saying that for all (α,λ) ∈ℕ^n_0×ℝ^*,
f̂(α,λ) = ψ(R^-2(|α| + n)|λ|) ĝ(α,λ).
Note that (<ref>) is defined using the functional calculus for G. By construction it is clear that any function f ∈𝒮(ℝ^n+1) can be approximated by frequency localized functions in the L^2 sense. Now we are in a position to prove Theorem <ref>.
Proof of Theorem <ref>: First, suppose f ∈𝒮(ℝ^n+1) is frequency localized in the unit ball B(0,1), i.e., there exists a smooth, even function ψ supported in (-1,1) such that f̂(α,λ) = ψ((|α| + n)|λ|) ĝ(α,λ) for some g ∈𝒮(ℝ^n+1). Let Θ = ĝ∘π|_S and let the localized measure on S be dσ_loc = ψ dσ as defined in (<ref>). In view of (<ref>) and (<ref>) we can write
e^- i s G f (x,t) = ℰ_S_loc (Θ)(x,t,s).
By the restriction inequality (<ref>), we have for 2 < p ≤ q ≤∞
e^-i s G f_L_t^∞ L_s^q L_x^p≤ C Θ_L^2(S,dσ_loc) = C f̂∘π|_S_L^2(S,dσ) = C f_L^2(ℝ^n+1),
where the last equality is obtained by (<ref>) and the Plancherel formula (<ref>).
Next, assume that f is frequency localized in the ball B(0,R). By (<ref>) one can check that the function f_R := f ∘δ_R^-1 is frequency localized in B(0,1) and hence applying (<ref>) we get
e^-i s G f_R(x,t)_L_t^∞ L_s^q L_x^p≤ C f_R_L^2(ℝ^n+1) = C R^n/2 + 1f_L^2(ℝ^n+1).
Again using (<ref>), we have e^- i s G f_R(x,t) = e^- i R^-2s G f (R^-1 x,R^-2 t), thus from (<ref>) we obtain
e^-i s G f_L_t^∞ L_s^q L_x^p = R^ - 2/q - n/pe^- i R^-2s G f (R^-1 x,R^-2 t)_L_t^∞ L_s^q L_x^p≤ C R^n + 2/2 - 2/q - n/pf_L^2(ℝ^n+1).
So, if f is frequency localized in the ball B(0,R), then
e^-i s G f_L_t^∞ L_s^q L_x^p≤ Cf_L^2(ℝ^n+1),
provided 2/q + n/p = n+2/2 and hence Theorem <ref> follows by density of frequency localized functions in L^2(ℝ^n + 1).
□
Proof of Theorem <ref> :
Let f, g ∈𝒮(ℝ^n+1) with G^-1/2g ∈ L^2(ℝ^n+1). Using (<ref>) and the inversion formula (<ref>), the unique solution of (<ref>) is given by
u(x,t,s) = ∑_±1/2π∫_ℝ^* e^-i λ t∑_α∈ℕ^n e^∓ i s √((2|α|+n)|λ|)φ_±(α,λ)Φ_α^λ(x) dλ
where φ_± = 1/2(f̂∓ i G^-1/2g).
Let the surface S_0 = S_+∪ S_- be endowed with the measure dσ_±, where S_± and dσ_± are defined in Remark <ref>, and set Θ = φ_±∘π|_S_± on each sheet.
Assume that φ_± are frequency localized in B(0,1). Proceeding as in Theorem <ref> for the surface (S_0,dσ_±) and using (<ref>), we obtain
u(x,t,s)_L_t^∞ L_s^q L_x^p≤ C Θ_L^2(S_0,dσ_±) = φ_±_L^2(ℕ_0^n×ℝ^*) = φ_±_L^2(ℝ^n+1)
for 2 < p ≤ q ≤∞.
If φ_± are frequency localized in B(0,R), then the functions
φ_±,R = φ_±∘δ_R^-1
are frequency localized in B(0,1) and give rise to the solution
u_R(x,t,s) = u(R^-1x, R^-2t, R^-1s).
Thus, using (<ref>) we obtain
u(x,t,s)_L_t^∞ L_s^q L_x^p≤ C R^n + 2/2 - 1/q - n/pφ_±_L^2(ℝ^n+1).
By Plancherel formula, we have
φ_±^2_L^2(ℝ^n+1) = φ_+^2_L^2(ℝ^n+1) + φ_-^2_L^2(ℝ^n+1) = f^2_L^2(ℝ^n+1) + G^-1/2g^2_L^2(ℝ^n+1).
Hence we conclude that if f, g are frequency localized in B(0,R), then
u(x,t,s)_L_t^∞ L_s^q L_x^p≤ C(f_L^2(ℝ^n+1) + G^-1/2g_L^2(ℝ^n+1))
provided 1/q + n/p = n + 2/2. Thus Theorem <ref> follows by density argument.
□
§ THE INHOMOGENEOUS CASE
Now we consider the inhomogeneous problem:
i ∂_s u(x,t,s) - G u( x,t,s) = g(x,t,s), s ∈ℝ , (x,t)∈ℝ^n+1,
u(x,t,0) = f(x,t).
In this case, the solution is given by Duhamel's formula:
u(x,t,s) = e^-i s G f(x,t) -i ∫_0^s e^- i (s-s') G g(x,t,s') ds'.
Let f ∈ L^2(ℝ^n+1) and g ∈ L^1_s(ℝ;L^2_x,t(ℝ^n+1)). If (p, q) lies in the admissible set A,
then the solution u(x,t,s) to the problem (<ref>) belongs to L^∞_t(ℝ;L^q_s(ℝ;L^p_x(ℝ^n))) and satisfies the estimate
u(x,t,s)_L_t^∞ L_s^q L_x^p≤ C(f_L^2(ℝ^n+1) + g_L^1_s(ℝ;L^2_x,t(ℝ^n+1))).
Let v(x,t,s) = i ∫_0^s e^- i (s-s') G g(x,t,s') ds'. Clearly we have
v(·,·,·)_L^∞_tL^q_s L^p_x≤∫_ℝe^- i (·) G e^i s' G g(·,·,s')_L^∞_tL^q_s L^p_x ds'.
First assume that, for all s', g(·,·,s') is frequency localized in unit ball B(0,1) in ℝ^n+1. For each s', using (<ref>) and the unitarity of e^i s' G, (<ref>) yields
v_L^∞_tL^q_s L^p_x≤ C ∫_ℝ e^i s' G g(·,·,s')_L^2(ℝ^n+1) ds' = C ∫_ℝ g(·,·,s')_L^2(ℝ^n+1) ds'.
Now assume, for all s, g(·,·,s) is frequency localized in B(0,R). Letting
g_R(·,·,s) = R^-2 g(·,·,R^-2s)∘δ_R^-1 and v_R(x,t,s) = i ∫_0^s e^- i (s-s') G g_R(x,t,s') ds',
we find that g_R(·,·,s) is frequency localized in ball B(0,1) for all s and v_R(x,t,s) =
v(R^-1x,R^-2t,R^-2s). Applying (<ref>) to g_R and using v_R_L^∞_tL^q_s L^p_x = R^2/q + n/pv_L^∞_tL^q_s L^p_x with g_R_L^1(ℝ;L^2(ℝ^n+1)) = R^n/2 + 1g_L^1(ℝ;L^2(ℝ^n+1)), we obtain
v_L_t^∞ L_s^q L_x^p≤ C R^n + 2/2 - 2/q - n/pg_L^1_s(ℝ;L^2_x,t(ℝ^n+1)).
Taking 2/q + n/p = n+2/2 and using the density of frequency localized functions in L^1_s(ℝ;L^2_x,t(ℝ^n+1)), (<ref>) becomes
v_L_t^∞ L_s^q L_x^p≤ Cg_L^1_s(ℝ;L^2_x,t(ℝ^n+1)),
for all g ∈ L^1(ℝ;L^2(ℝ^n+1)). Combining the estimate for the first term in (<ref>) from Theorem <ref> with (<ref>), we get (<ref>).
Similarly, for the inhomogeneous Grushin wave equation
∂_s^2 u (x,t,s) + Gu(x,t,s) = h(x,t,s), s ∈ℝ , (x,t)∈ℝ^n+1,
u (x,t,0) = f(x,t), ∂_su(x,t,0) = g(x,t),
we obtain the following anisotropic Strichartz estimate:
Let f ∈ L^2(ℝ^n+1), G^-1/2g ∈ L^2(ℝ^n+1) and G^-1/2h ∈ L^1_s(ℝ;L^2_x,t(ℝ^n+1)). If (p, q) lies in the admissible set
A_w, then the solution u(x,t,s) of the IVP (<ref>) is in L^∞_t(ℝ;L^q_s(ℝ;L^p_x(ℝ^n))) and satisfies the estimate:
u(x,t,s)_L_t^∞ L_s^q L_x^p≤ C(f_L^2(ℝ^n+1) + G^-1/2g_L^2(ℝ^n+1) + G^-1/2h_L^1_s(ℝ;L^2_x,t(ℝ^n+1))).
§ THE CASE P = 2 AND 1 ≤ Q ≤ 2 IN THEOREM <REF>
The restriction inequality (<ref>) also holds for n = 1, p = 2 and 1 ≤ q ≤ 2.
Note that for n = 1,
ℛ_S_loc f^2_L^2(S,dσ_loc) = 1/(2π)^2∑_±∑_k=0^∞∫_0^∞1/2k + 1P_k(± a_k λ) f^± a_kλ,λ^2_L^2(ℝ)ψ(λ) dλ.
Consider the Hilbert space L^2(ℕ_0 ×ℝ^+ ; L^2(ℝ)) with respect to the inner product ⟨α̃,β̃⟩^' = ∑_k=0^∞∫_ℝ^+⟨α̃(k,λ),β̃(k,λ)⟩ψ(λ)dλ, for all α̃, β̃∈ L^2(ℕ_0 ×ℝ^+ ; L^2(ℝ)), where ℝ^+ denotes the set of all positive reals. In view of (<ref>) it is enough to prove that the operator T defined on 𝒮(ℝ^3) by
T f = 1/(2k + 1)^1/2 P_k( a_k λ) f^ a_kλ,λ,
is bounded from L^1_t(ℝ;L^q_s(ℝ;L^2_x(ℝ^n))) into L^2(ℕ_0 ×ℝ^+ ; L^2(ℝ)) or equivalently that its adjoint T^* is bounded from L^2(ℕ_0 ×ℝ^+ ; L^2(ℝ)) into L^∞_t(ℝ;L^q'_s(ℝ;L^p'_x(ℝ^n))) to obtain (<ref>).
For α̃∈ L^2(ℕ_0 ×ℝ^+ ; L^2(ℝ)), the operator T^* can be computed to be
T^*(α̃)(x,t,s) = ∑_k=0^∞∫_ℝ^+1/(2k + 1)^1/2 e^- i a_k λ t e^- i |λ| s P_k(a_k λ) (α̃(k,λ))(x) ψ(λ) dλ.
Using Minkowski's inequality together with the Hausdorff-Young inequality (see (<ref>)), for any fixed t ∈ℝ, we have
T^*(α̃)(·,t,·)_L_s^q'L^2_x≤ C g_L^q_λ L^2_x,
where g(x,λ) = ψ(λ) ∑_k=0^∞1/(2k + 1)^1/2 P_k(a_k λ) (α̃(k,λ))(x) .
Now
g(·,λ)^2_L^2(ℝ) =ψ(λ)^2 ∑_k,l ≥ 01/(2k + 1)^1/2 (2l + 1)^1/2⟨ P_k(a_k λ) α̃(k,λ),P_l(a_k λ) α̃(l,λ)⟩
≤ψ(λ)^2 ∑_k,l ≥ 0α̃(k,λ)_L^2(ℝ)α̃(l,λ)_L^2(ℝ)/(2k + 1)^1/2 (2l + 1)^1/2 |⟨Φ_k^a_k λ ,Φ_l^a_l λ⟩|.
The asymptotic behavior of the function Φ_k(x/√(2k + 1)) is roughly that of the function equal to
1/(√(2)(2k+1)^1/2) if |x| < 2k + 1 and 0 otherwise
(see Remark 2.6 of <cit.>); using this one can verify that 1/((2k + 1)^1/2(2l + 1)^1/2) |⟨Φ_k^1/(2k +1),Φ_l^1/(2l +1)⟩|≤ C/(max{k,l} +1). Thus, (<ref>) turns out to be
g(·,λ)^2_L^2(ℝ)≤ C ψ(λ)^2 ∑_k ≤ lα̃(k,λ)_L^2(ℝ)(1/l+1∑_k=0^l+1α̃(k,λ)_L^2(ℝ)).
By Hardy's inequality, we get
g(·,λ)_L^2(ℝ)≤ C ψ(λ) (∑_k = 0^∞α̃(k,λ)^2_L^2(ℝ))^1/2.
Further, applying Hölder's inequality, we have
g_L^q_λ L^2_x≤ C ψ(λ)^1/2_L_λ^2-q/2q(ℝ^+)α̃_L^2(ℕ_0 ×ℝ^+).
The asymptotic behavior of the Hermite function Φ_k plays a decisive role (see (<ref>)) in the proof of Proposition <ref>.
We could not find such asymptotic behavior for the higher dimensional Hermite functions (n ≥ 2). But we prove the restriction inequality (<ref>) for n ≥ 2 and p = 2 for the radial functions defined below. A function f on ℝ^n+2 (resp. ℝ^n+1) is said to be radial if f(x,t,s) = f(|x|,t,s) (resp. f(x,t) = f(|x|,t)) for all x ∈ℝ^n and t, s ∈ℝ. If f is radial on ℝ^n+2 then f^λ,ν is radial on ℝ^n for any λ∈ℝ^* and ν∈ℝ.
Thus by Corollary 3.4.1 in <cit.> and the relation (<ref>), for all k ∈ℕ_0 we get
P_2k +1(λ)(f^λ,ν) = 0 and P_2k(λ)(f^λ,ν) = R_2k(f^λ,ν) L^n/2 - 1_k(|λ||x|^2) e^-|λ|/2|x|^2,
where
R_2k(f^λ,ν) = Γ(k+1)/Γ(k + n/2) |λ|^n/2∫_ℝ^n f^λ,ν(x)L^n/2 - 1_k(|λ||x|^2) e^-|λ|/2|x|^2 dx,
and L^δ_k denotes the Laguerre polynomial of type δ (> -1) defined by L^δ_k(r) =1/k! e^r r^-δd^k/dr^k(e^-r r^k+δ) for r > 0.
If f ∈𝒮_rad(ℝ^n+2), the set of all radial Schwartz class functions on ℝ^n+2, then the restriction inequality (<ref>) holds for n ≥ 2, p = 2 and 2 ≤ q ≤∞.
Let f ∈𝒮_rad(ℝ^n+2). To prove (<ref>) for n ≥ 2 and p = 2 (proceeding as in (<ref>) for the n=1 case), it suffices to show
∑_k = 0^∞∫_0^∞ (|R(k,λ/(4k + n),λ)|^2 + |R(k,-λ/(4k + n),λ)|^2 ) λ^n/2ϕ(λ) dλ≤ C f^2_L_t^1L^q_sL^p_x,
where
R(k,λ,ν) = ( Γ(k+1)/Γ(k + n/2)(4k + n)^n/2 + 1)^1/2∫_ℝ^n f^λ,ν(x)L^n/2 - 1_k(|λ||x|^2) e^-|λ|/2|x|^2 dx.
Using the asymptotic behavior of Laguerre functions in the proof of Lemma 4.2 in <cit.> and proceeding as in the proof of Proposition <ref> with appropriate modifications, we get (<ref>) for radial Schwartz class functions on ℝ^n+2.
In view of Proposition <ref>, Theorems <ref>, <ref>, <ref>, <ref> hold for p=2, 2≤ q ≤∞, n=1. In higher dimensions (n ≥ 2), Theorems <ref>, <ref>, <ref>, <ref> for p=2, 2≤ q ≤∞ can be obtained for radial functions in view of Proposition <ref> and the ideas used in obtaining the anisotropic Strichartz estimates.
§ ACKNOWLEDGMENTS
The first author wishes to thank the Ministry of Human Resource Development, India for the research fellowship and Indian Institute of Technology Guwahati, India for the support provided during the period of this work.
99
=17pt
Anh T. C. Anh, J. Lee and B. K. My, On the classification of solutions to an elliptic equation involving the Grushin operator, Complex Var. Elliptic Equ. 63, no. 5, 671-688 (2018).
Anker J. P. Anker and V. Pierfelice, Nonlinear Schrödinger equation on real hyperbolic spaces. Ann. Inst. H. Poincaré C Anal. Non Linéaire 26, 1853-1869 (2009).
Bah6 H. Bahouri, J. Y. Chemin and R. Danchin, Fourier Analysis and Applications to Nonlinear Partial Differential Equations, Grundl. Math. Wiss., vol. 343, Spinger, New York (2011).
Bahouri1 H. Bahouri, C. K. Fermanian, I. Gallagher, Dispersive estimates for the Schrödinger operator on step 2 stratified Lie groups, Anal. PDE 9, 545–574 (2016).
Bahouri H. Bahouri, P. Gérard, and C. J. Xu, Espaces de Besov et estimations de Strichartz généralisées sur le groupe de Heisenberg, J. Anal. Math. 82, 93-118 (2000).
Bahouri-local H. Bahouri, D. Barilari, and I. Gallagher, Local Dispersive and Strichartz Estimates for the Schrödinger Operator on the Heisenberg Group, Commun.Math.Res. 9, no. 1, 1-35 (2023).
Gall H. Bahouri, D. Barilari, I. Gallagher, Strichartz Estimates and Fourier Restriction Theorems on the Heisenberg Group. J. Fourier Anal. Appl. 27, 21 (2021).
Baouendi M. Baouendi, Sur une classe d'opérateurs elliptiques dégénérés, Bull. Soc. Math. France 95, 45-87 (1967).
Bour1 J. Bourgain, A remark on Schrödinger operators. Israel J. Math. 77, 1-16 (1992).
burgain J. Bourgain, Fourier transform restriction phenomena for certain lattice subsets and applications to nonlinear evolution equations I, Geom. and Funct. Anal. 3, 107-156 (1993).
Bour2 J. Bourgain, Refinements of Strichartz inequality and applications to 2D-NLS with critical nonlinearity, Int. Math. Res. Notices 5, 253-283 (1998).
Burq N. Burq, P. Gerard and N. Tzvetkov, Strichartz inequalities and the nonlinear Schrödinger equation on compact manifolds, Amer. J. Math. 126, 569-605, 2004.
Caze T. Cazenave, Equations de Schrödinger non linéaires en dimension deux, Proc. R Soc. Edinb. Sect. A 84, 327-346 (1979).
DM G. M. DallÁra and A. Martini, A robust approach to sharp multiplier theorems for Grushin operators, Trans. Amer. Math. Soc. 373, no. 11, 7533-7574 (2020).
jyoti J. Dziubanski and K. Jotsaroop, On Hardy and BMO spaces for Grushin operator. J. Fourier Anal. Appl.
22(4), 954-995 (2016).
Franchi B. Franchi, C. E. Gutiérrez, R. L. Wheeden, Weighted Sobolev-Poincaré inequalities for Grushin type operators, Commun. Partial Differ. Equ. 19, 523-604 (1994).
Louise L. Gassot and M. Latocca, Probabilistic local well-posedness for the Schrödinger equation posed for the Grushin Laplacian, J. Funct. Anal. 283, no. 3, 109519 (2022).
Gerd P. Gérard and S. Grellier, The cubic Szegö equation, Ann. Sci. Éc. Norm. Supér. (4) 43, no. 5, 761-810 (2010).
GV J. Ginibre and G. Velo, Generalized Strichartz inequalities for the wave equations, J. Funct. Anal. 133, 50-68 (1995).
Grushin71 V. V. Grushin, On a class of elliptic operators degenerate on a submanifold, Math. USSR Sbornik 13, 155-185 (1971).
Grushin70 V. V. Grushin, On a class of hypoelliptic operators, Math. USSR Sbornik 12, 458-476 (1970).
Del M. D. Hierro, Dispersive and Strichartz estimates on H-type groups, Studia Math 169, 1-20 (2005).
Ivanovici O. Ivanovici, G. Lebeau and F. Planchon, Dispersion for the wave equation inside strictly convex domains I: the Friedlander model case, Ann. of Math. (2) 180, 323-380 (2014).
JST K. Jotsaroop and S. Thangavelu, L^p estimates for the wave equation associated to the Grushin operator, Ann. Sc. Norm. Super. Pisa Cl. Sci. 13(5), no. 3, 775-794 (2014).
KeelT M. Keel and T. Tao, Endpoint Strichartz estimates, Am. J. Math. 120, 955-980 (1998).
Manli H. Liu and M. Song, A restriction theorem for Grushin operators Front. Math. China 11, 365-375 (2016).
Muller D. Müller, A restriction theorem for the Heisenberg group, Ann. Math. 131, 567-587 (1990).
RS R. S. Strichartz, Restrictions of Fourier transforms to quadratic surface and decay of solutions of wave equations, Duke Math. J. 44, 705-714 (1977).
Than S. Thangavelu, Lectures on Hermite and Laguerre expansions, Mathematical notes, Princeton Univ. Press, 42 (1993).
PT P. Tomas, A restriction theorem for the Fourier transform, Bull. Amer. Math. Soc. 81, 477-478 (1975).
|
http://arxiv.org/abs/2306.02610v1
|
20230605054611
|
Understanding the Planetary Formation and Evolution in Star Clusters(UPiC)-I: Evidence of Hot Giant Exoplanets Formation Timescales
|
[
"Yuan-Zhe Dai",
"Hui-Gen Liu",
"Jia-Yi Yang",
"Ji-Lin Zhou"
] |
astro-ph.EP
|
[
"astro-ph.EP",
"astro-ph.GA",
"astro-ph.SR"
] |
Hui-Gen Liu
[email protected]
School of Astronomy and Space Science, Nanjing University, 163 Xianlin Avenue, Nanjing, 210023, People's Republic of China
Key Laboratory of Modern Astronomy and Astrophysics, Ministry of Education, Nanjing, 210023, People's Republic of China
Planets in young star clusters could shed light on planet formation and evolution, since star clusters provide accurate age estimates. However, the number of transiting planets detected in clusters was only ∼ 30, too small for statistical analysis. Thanks to the unprecedented high-precision astrometric data provided by Gaia DR2 and Gaia DR3, many new Open Clusters (OCs) and comoving groups have been identified. The UPiC project aims to find observational evidence of, and interpret, how planets form and evolve in cluster environments. In this work, we cross-match the stellar catalogs of new OCs and comoving groups with confirmed planets and candidates. We carefully remove false positives and obtain the largest catalog of planets in star clusters to date, which consists of 73 confirmed planets and 84 planet candidates. After age validation, we obtain the radius–age diagram of these planets/candidates. We find an increase in the fraction of Hot Jupiters (HJs) around 100 Myr and attribute this increase to flyby-induced high-e migration in star clusters. An additional small bump in the fraction of HJs after 1 Gyr is detected, which indicates that the formation timescale of HJs around field stars is much longer than that in star clusters. Thus, stellar environments play important roles in the formation of HJs. The hot-Neptune desert occurs around 100 Myr in our sample. A combination of photoevaporation and high-e migration may sculpt the hot-Neptune desert in clusters.
§ INTRODUCTION
Open Clusters (OCs) in the Milky Way are collections of stars formed from the same molecular cloud and gravitationally bound together, thus sharing similar characteristics, e.g. age, distance, reddening, and metal abundance. OCs provide an ideal laboratory for studying star formation and evolution. Recent studies based on Kepler data show that nearly 50% of stars host planets <cit.>. Since most stars form in clusters <cit.>, many exoplanets are formed in cluster environments, and the majority of their host stars eventually become field stars as clusters dissolve. Detecting exoplanets in OCs can therefore provide an ideal sample for studying planet formation and evolution.
The first planet in OCs, ϵ Tau b, was detected by <cit.> via radial velocity. Kepler-66b and Kepler-67b are the first cluster planets discovered by transit <cit.>. Thanks to Kepler/K2 and TESS, tens of planets in clusters have been discovered, and the number is growing. There are several programs focusing on planets in star clusters, especially young exoplanets. The Zodiacal Exoplanets In Time (ZEIT) collaboration uses K2 data to monitor young open clusters and associations in the ecliptic plane and has found planets in the Hyades, Praesepe, Upper Sco, and Taurus. With the help of extensive follow-up observations, the TESS Hunt for Young and Maturing Exoplanets (THYME) collaboration has reported planets in Upper Sco <cit.>, the Tuc-Hor association <cit.>, the Ursa Major moving group <cit.>, and the Pisces Eridanus stream <cit.>. <cit.> began a Cluster Difference Imaging Photometric Survey (CDIPS) to discover giant transiting planets with known ages and to provide light curves suitable for studies in stellar astrophysics. Finally, a PSF-based Approach to TESS High-quality data of star clusters (PATHOS) has been used to find 90 planet candidates.
Hitherto, many surveys and projects have focused on young planets in clusters, but the number of reported planets in clusters is limited and not sufficient to support statistical work, ∼ 30 according to <cit.>. To enlarge the number of planets in open clusters, both expanding the membership of known clusters and identifying new clusters are possible in the Gaia era.
The Gaia DR2/EDR3 catalog <cit.> presents more than 1.3 billion stars with unprecedented high-precision astrometric and photometric data, greatly improving the reliability of stellar membership determination and the characterization of a large sample of stellar groups, including star clusters, associations, and other comoving groups. Recent analyses of Gaia data have greatly expanded our knowledge of stellar groups <cit.>. Traditionally, a star cluster is a set of stars that are gravitationally bound to one another <cit.>. However, the recent discovery of stars in diffuse regions suggests that the original definition of star clusters needs to be extended, because these stars, although not gravitationally bound, are shown to have the same age as core cluster members through analyses of color-absolute magnitude diagrams <cit.>. On top of that, stars in diffuse regions also share distributions of stellar rotation periods <cit.> and chemical abundances <cit.> similar to those of core cluster members. Therefore, these stars are probably coeval with the core members. In this series of papers, we extend the definition of open clusters to include stars in diffuse regions, i.e. comoving groups, diffuse streams, tidal tails, etc., not only the core cluster members. Thus, the number of stars in open clusters can be greatly enlarged.
Many previous works have identified new OCs in the Milky Way using different algorithms; e.g. <cit.> applied the UPMASK algorithm to select cluster members and provided an updated catalog of 1229 OCs, and <cit.> found 582 new cluster candidates located at low galactic latitudes using the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm. Using Hierarchical Density-Based Spatial Clustering of Applications with Noise <cit.>, <cit.> systematically clustered Gaia DR2 data within 1 kpc and identified 1640 populations containing a total of 288,370 stars. In a more recent work, <cit.> (hereafter K2020) extended the distance limit from 1 to 3 kpc and identified 8292 comoving groups consisting of 987,376 stars.
Utilizing the enlarged stellar population in clusters, the UPiC (Understanding Planetary Formation and Evolution in Star Clusters) project focuses on planets in open clusters, including associations and other comoving groups. We are trying to find evidence of how planets form and evolve in cluster environments. Many works have shown that both the dynamical <cit.> and radiation <cit.> environments in clusters can influence planet formation and evolution. As the initial work of UPiC, this paper collects the largest transiting-planet sample in open clusters and aims to analyze the correlation between planetary radius and cluster age, which is crucial for constraining planet formation timescales. For example, <cit.> discovered the hot-Neptune desert in the planetary mass-period and radius-period distributions. The boundaries of the desert can be explained by photoevaporation and high-e migration <cit.>. These two mechanisms have different timescales. Thus, the ages of the planets can distinguish between the mechanisms and help us understand the time evolution of planet radius.
This paper is arranged as follows. In Section <ref>, we describe our methods, including data collection, age validation, and sample cuts. In Section <ref>, we use the refined data to study the planet radius–age diagram and estimate the evolution of three different planet populations. In Section <ref>, we discuss how the statistical results constrain planet formation mechanisms, including high-e migration and photoevaporation. Last, we summarize and discuss our major conclusions in Section <ref>.
§ CATALOG OF TRANSITING PLANET IN OCS
§.§ Data collection
There are many works that use Gaia data to identify new OCs. If we combined all the catalogs of star clusters, we would certainly obtain more OCs, and hence more stars in OCs. However, data selection criteria and clustering algorithms vary between works, which would increase the inhomogeneity of the combined catalog. Therefore, to maximize the number of stars in OCs while keeping the sample as homogeneous as possible, we adopt the catalog from K2020, the largest catalog of stars in comoving groups with age estimations.
K2020 identified 8292 comoving groups within 3 kpc and galactic latitude |b|< 30 ^∘ by applying the unsupervised machine-learning algorithm HDBSCAN to Gaia DR2's 5D data. We use the stellar catalog of K2020 to cross-match with the host stars of confirmed transiting planets and planet candidates. In this section, we use planets/candidates from Kepler, K2, and TESS. Since we are concerned with planet radii, we do not consider planets detected only via the radial velocity method. The following subsections briefly introduce how we select the planets and cut the planet sample to exclude some observational biases.
Table 1. Planets in clusters
Plname R_ p Period OName Group Gaia DR2 Age Validation T_ eff logg Stmass Flag
(R_⊕) (days) (Myr) (K) (M_⊙)
TOI 520.01 1.49^+0.66_-0.66 0.524 Group 95 5576476552334683520 30.2^+5.3_-4.5 NO 7450 4.34 1.66 TESS
TOI 626.01 19.74^+0.66_-0.66 4.40 Group 449 5617241426979996800 195^+80_-57 NO 8489 4.03 2.11 TESS
TOI 2453.01 3.02^+0.20_-0.20 4.44 Hyades Group 1004 3295485490907597696 646^+113_-96 NO 3609 4.73 0.50 TESS
TOI 2519.01 2.29^+0.20_-0.20 6.96 Columba Group 208 2924619634745251712 263^+109_-77 NO 4742 4.57 0.76 TESS
TOI 2640.01 7.39^+0.50_-0.50 0.911 IC 2602 Group 92 5404579488593432576 45^+14_-11 NO 2999 4.95 0.25 TESS
TOI 2646.01 8.10 0.313 NGC 2516 Group 613 5288535107223500928 145^+37_-30 NO 5202 4.53 0.88 TESS
TOI 2822.01 11.53^+0.72_-0.72 2.88 Group 5076 5597777288033556480 537^+94_-80 NO 6086 4.04 1.14 TESS
TOI 3077.01 13.67^+0.72_-0.72 6.36 Group 3176 5307513536932390272 186^+130_-77 NO 7689 4.06 1.80 TESS
TOI 3335.01 11.60^+0.70_-0.70 3.61 Group 550 5903623451060661504 214^+75_-55 NO 6071 4.04 1.13 TESS
TOI 1097.02 12.7^+1.0_-1.0 2.27 Group 1502 6637496339607744768 3020^+447_-390 NO 5568 3.89 0.98 TESS
Due to space limitations, we only show the first 10 rows of the table. Here, “OName” gives the name of the parent cluster, “Group” gives the corresponding group number in K2020, “Flag” gives the name of the mission that detected the planet, and “Validation” indicates whether the planet/candidate has a convincing age estimation (i.e. “YES”, “NO”, or “Excluded”). The machine-readable table can be obtained at the following link: https://github.com/astrodyz/UPiC
§.§.§ K2020 cross-matching with confirmed exoplanets
The number of confirmed planets from the NASA exoplanets archive[https://exoplanetarchive.ipac.caltech.eduhttps://exoplanetarchive.ipac.caltech.edu] is 5347 up to now (05/2023). After cross-matching with K2020, we find 76 confirmed planets in clusters. To study the planet size-age distribution, we select 51 transiting planets with known planet radii.
The properties of the planets and their host stars, e.g. planet radius, orbital period, effective temperature, surface gravity, and stellar mass, are adopted from the NASA Exoplanet Archive table. We provisionally adopt the ages of the host stars from K2020, which are available for all of them; these ages will be validated via comparison in subsection <ref>.
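In practice, this cross-match is a table join on the Gaia DR2 source identifier. The following is a minimal sketch of the procedure (Python/pandas); the file names and column labels (gaia_dr2_id, pl_rade, pl_orbper, etc.) are placeholders rather than the actual archive column names.

import pandas as pd

# K2020 member list: one row per star, with its comoving-group ID and group age
k2020 = pd.read_csv("k2020_members.csv")        # assumed columns: gaia_dr2_id, group, age_myr
# NASA Exoplanet Archive table of confirmed planets
planets = pd.read_csv("confirmed_planets.csv")  # assumed columns: pl_name, gaia_dr2_id, pl_rade, pl_orbper, tran_flag

# inner join keeps only planets whose host star is a member of a K2020 comoving group
matched = planets.merge(k2020, on="gaia_dr2_id", how="inner")

# keep transiting planets with a measured radius, as for the 51-planet subsample above
matched = matched[(matched["tran_flag"] == 1) & matched["pl_rade"].notna()]
print(len(matched), "confirmed transiting planets in comoving groups")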
§.§.§ K2020 cross-matching with KOIs
The number of KOIs from Kepler DR25 <cit.> is 8445, including confirmed planets and candidates. After cross-matching with K2020, we find 98 KOIs in clusters. Some of these KOIs may be false positives, such as eclipsing binaries in the background of the targets, or physically bound to them, which can mimic the photometric signal of a transiting planet. Therefore, we exclude the sources flagged as False Positive. There are 26 confirmed planets and 17 planet candidates in clusters for Kepler sources. Here, we use the Gaia-Kepler Stellar Properties Catalog <cit.> to obtain accurate and precise properties of these 43 selected KOIs and their host stars. The age information is also adopted from K2020.
§.§.§ K2020 cross-matching with K2
For K2 sources, we use the catalog of K2 including both confirmed planets and candidates from the NASA exoplanet archive to cross-match with K2020. We get only 25 matching sources. After excluding 9 candidates flagged as “FALSE POSITIVE”, we obtain 9 confirmed planets and 7 planet candidates with planet radius measurements. The properties of planets and planets' host stars are taken from the NASA exoplanet archive. The age information is taken from K2020.
§.§.§ K2020 cross-matching with TOIs
Up to 05/2023, there are 6586 TOIs detected by TESS. After cross-matching with K2020, 116 TOIs are left. There are many false positives among the TOIs <cit.>. Thus, we select the TOIs carefully. Firstly, we exclude sources flagged as “FA”, “APC”, and “FP”, which mean false alarm, ambiguous planet candidate, and false positive, respectively. After this exclusion, 68 TOIs are left. Besides, based on publicly available observational notes on ExoFOP[https://exofop.ipac.caltech.edu/tess/], we also remove TOIs with comments like “centroid offset[The light from nearby eclipsing binaries within 1' may pollute the aperture and cause transit-like signals on the target light curve, especially in a crowded field like open clusters]”, “V-shaped”, “likely eclipsing binary (EB)”, and “odd-even”. For example, the comment for TOI 1376.01 is “centroid offset on TIC 190743999 in spoc-s56”. After removing these TOIs, we finally select 42 TOIs, i.e. 12 confirmed planets and 30 candidates. The properties of these 42 selected TOIs and their host stars are taken from the TOI table. The age information is taken from K2020.
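Schematically, this TOI vetting reduces to a disposition cut plus a keyword screen of the public ExoFOP notes; a sketch is given below (Python/pandas, with hypothetical column names and an illustrative, non-exhaustive keyword list).

import pandas as pd

tois = pd.read_csv("toi_catalog.csv")   # assumed columns: toi, tfopwg_disp, exofop_comment

# 1) drop false alarms ("FA"), ambiguous planet candidates ("APC") and false positives ("FP")
tois = tois[~tois["tfopwg_disp"].isin(["FA", "APC", "FP"])]

# 2) drop candidates whose public notes point to a likely non-planetary origin
bad_keywords = ["centroid offset", "V-shaped", "eclipsing binary", "odd-even"]
flagged = tois["exofop_comment"].fillna("").str.contains("|".join(bad_keywords), case=False)
tois = tois[~flagged]
print(len(tois), "TOIs retained after vetting")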
§.§.§ other sources
K2020 focuses on stars with galactic latitude |b|< 30 ^∘. However, there are many clusters outside this range, and hence planets in them as well. We therefore add the planets and planet candidates from the PATHOS project to include more planets at higher galactic latitudes. Table 6 of PATHOS-IV <cit.> provides 33 confirmed planets in clusters. Although 11 of these confirmed planets are already selected in Section <ref>, 14 of them have |b|> 30 ^∘, and 8 of them are not cross-matched with K2020.
Additionally, the PATHOS project has found 90 planet candidates in clusters, which are not all included in the TOI list. After cross-matching with K2020, we obtain age information for 40 planet candidates with |b|< 30 ^∘. Here, we do not include the PATHOS candidates with |b|> 30 ^∘, because they do not have age measurements. <cit.> provide false-positive probabilities for the PATHOS candidates. We remove 8 sources with high false-positive probabilities, most of which are likely eclipsing binaries. Thus, we add 32 PATHOS candidates. The stellar properties are taken from the TESS Input Catalog v8.0 <cit.>, e.g. stellar mass, stellar effective temperature, and surface gravity.
§.§.§ The catalog of transiting planets in star clusters
After the cross-matching, we check the catalog of transiting planets in clusters and remove duplicates. Finally, there are 73 confirmed planets and 84 planet candidates in 86 clusters. Table <ref> shows all these planets, located in 133 planetary systems. Planets detected by different missions are flagged as “Kepler”, “K2”, “TESS”, and “other”. Here, “other” means sources detected by other facilities, e.g. CoRoT <cit.> and ground-based telescopes.
This is the largest catalog of the planets and planet candidates in star clusters. Due to space constraints, we only list 10 sources in Table <ref>. The whole table can be downloaded from the web: https://github.com/astrodyz/UPiChttps://github.com/astrodyz/UPiC
§.§ Age validation
Since we focus on the age-size distribution of planets in clusters, the accuracy and precision of the age measurements are essential. In this section, we compare the age measurements of stars from <cit.> (hereafter He2022), K2020, and the NASA Exoplanet Archive table to assess whether the age estimation in K2020 is robust.
<cit.> adopted a neural network called Auriga to robustly estimate the ages of the individual groups they identified. The uncertainty of log(Age) of comoving groups within 1 kpc in K2020 is similar to that of <cit.>, i.e. ∼ 0.15 dex. In He2022, isochrone fitting is used to derive the ages of 886 nearby clusters and candidates within 1.2 kpc.
Although <cit.> discuss the contamination and demonstrate that the vast majority of the ages are well consistent with the results of isochrone fitting, the age estimation in K2020 may still have some systematic biases compared to He2022. On top of that, differences in cluster identification may also influence the final age estimation. K2020 used the unsupervised machine-learning algorithm HDBSCAN to identify clusters, while He2022 used DBSCAN. HDBSCAN performs better than DBSCAN on data with varying density structures, i.e. it tends to reveal more fine structure. Therefore, both the age estimation and the cluster membership identification may lead to systematic biases.
We then cross-match the catalogs of K2020 and He2022 to assess whether the bias of the age estimation in K2020 is non-negligible compared to other age sources. In panel (a) of Figure <ref>, the red hollow dots are the 36 planets/candidates in 22 clusters that have age estimations in both K2020 and He2022. Only one planet candidate shows a large difference in age estimation, beyond the 3σ limit, i.e. PATHOS 64 in King-6. Therefore, we assume that the majority of the age estimations in K2020 are relatively robust. To validate the age of this cluster, we refer to ages from other previous works. In K2020, the estimated age of King-6 is 204^+91_-63 Myr, which is consistent with the previous result of <cit.> (250±50 Myr), while in He2022 the age of King-6 is 44 Myr, without uncertainty. Thus, we adopt 204^+91_-63 Myr as the final age of King-6.
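The 3σ consistency criterion amounts to comparing the age difference of each object with the combined uncertainty of the two catalogs; a schematic check is shown below (Python, with hypothetical column names and symmetric log-age errors for simplicity).

import numpy as np
import pandas as pd

ages = pd.read_csv("age_comparison.csv")  # assumed columns: logage_k2020, err_k2020, logage_he2022, err_he2022

diff = ages["logage_k2020"] - ages["logage_he2022"]
sigma = np.sqrt(ages["err_k2020"] ** 2 + ages["err_he2022"] ** 2)  # combined 1-sigma error in log(age)
outliers = ages[np.abs(diff) > 3 * sigma]                          # objects beyond the 3-sigma limit
print(len(outliers), "objects with inconsistent ages")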
Additionally, we also compare the ages from K2020 with those in the NASA Exoplanet Archive. In panel (b) of Figure <ref>, there are 29 planet host stars (44 planets) in our catalog with ages from both K2020 and the NASA Exoplanet Archive. Eleven host stars lie outside the 3σ limit in the age comparison, i.e. CoRoT-22, HATS-47, HD 110113, KELT-20, Kepler-1062, Kepler-1118, Kepler-1502, Kepler-411, Kepler-968, TOI-1937 A, and TOI-4145 A. For all of these host stars, the ages from the NASA Exoplanet Archive are much larger than the ages from K2020. E.g. the age of CoRoT-22 is 3.3±2.0 Gyr in <cit.>, while in K2020 it is 339± 100 Myr; the age of HATS-47 in <cit.> is 8.10^+2.90_-4.30 Gyr, while in K2020 it is 589^+243_-171 Myr; the age of HD 110113 <cit.> is 4.0± 0.5 Gyr, while in K2020 it is 645^+245_-178 Myr.
The individual age estimates of these stars depend on models and methods that are inhomogeneous in the NASA Exoplanet Archive. To be rigorous, we therefore use different ways to validate the ages of the eleven host stars.
Firstly, some stars have several (≥ 3) age measurements, which we can evaluate through majority voting. For instance, the age estimation of KELT-20 is ≤0.6 Gyr according to <cit.>. However, according to <cit.>, the age of KELT-20 is
200 ^+100_-50 Myr, which is consistent with K2020's result, i.e. 166^+58_-43 Myr. The age estimation of Kepler-1118 from <cit.> (4.07 Gyr) is nearly ten times that in K2020 (490^+155_-118 Myr). We note, however, that this estimate assumes an age prior of 1-15 Gyr, which means the stellar ages in that catalog are artificially larger than 1 Gyr. NGC 6866, the host cluster of Kepler-1118, has an age of 705± 140 Myr from <cit.>. Therefore, we adopt the ages in K2020 for KELT-20 and Kepler-1118.
Secondly, if stars do not have independent and consistent age measurements, we estimate the age via a gyrochronological relation. Kepler-411 is a special case in which three age measurements are significantly different: <cit.> use the gyrochronological relation of <cit.> and estimate an age of 212 ± 31 Myr, K2020 estimate an age of 794^+302_-219 Myr, and <cit.> estimate an age of 2.69^+2.67_-1.10 Gyr. To validate the age of Kepler-411, we adopt the new gyrochronological relation developed by <cit.>, which includes the empirical mass dependence of the rotational coupling timescale. The stellar rotation period of Kepler-411, 10.4 days, is taken from <cit.>. We then estimate an age of 770 Myr for the Kepler-411 system, which is consistent with the age estimation in K2020. Thus, we adopt 794^+302_-219 Myr as the age of Kepler-411.
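As an illustrative order-of-magnitude cross-check (not the relation adopted above, which includes a mass-dependent coupling timescale), a simple Skumanich-type scaling P ∝ t^1/2 anchored to the Sun gives a comparable value; the solar reference numbers below are assumptions of this sketch.

# toy Skumanich-style gyrochronology: P ~ t^0.5, anchored to the Sun (mass dependence ignored)
P_SUN_DAYS = 25.4   # assumed solar rotation period
T_SUN_GYR = 4.57    # solar age

def skumanich_age_gyr(prot_days):
    """Rough stellar age estimate from the rotation period alone."""
    return T_SUN_GYR * (prot_days / P_SUN_DAYS) ** 2

print(skumanich_age_gyr(10.4))   # Kepler-411, P_rot = 10.4 d  ->  about 0.77 Gyr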
In the same way as for Kepler-411, we use the rotation periods and stellar masses to obtain the ages of HATS-47, HD 110113, Kepler-1062, and Kepler-968 through the new gyrochronological relation, i.e. < 0.1 Gyr, ∼ 3 Gyr, ∼ 1.3 Gyr, and ∼ 0.7 Gyr, respectively. These age estimations are significantly different from those of K2020 (i.e. 589^+243_-171 Myr, 645^+245_-178 Myr, 21^+4_-4 Myr, and 181^+52_-41 Myr, respectively). We speculate that HD 110113, Kepler-1062, and Kepler-968 may be contaminating stars in the star cluster identification; they may be field stars that coincidentally have kinematic properties similar to the comoving stellar groups in their proximity. Therefore, we exclude these three potential contaminating sources from the cluster identification. We also remove HATS-47 because it does not have a convincing age measurement.
Thirdly, for CoRoT-22, Kepler-1502, TOI-1937 A b, and TOI-4145 A b, which have inconsistent age measurements and lack stellar rotation measurements, we can hardly validate the ages. Additionally, <cit.> suspect that TOI-1937 A and TOI-4145 A may be field stars because of the poorly constrained cluster membership identification. Therefore, we directly exclude these four systems.
To sum up, we validate the age measurements of 70 planets/candidates in star clusters, obtain more reliable ages for three host stars via either the literature or the new gyrochronological relation, and exclude eight planetary systems without convincing age estimations. If we assume those eight host stars are field stars, the contamination rate of our catalog is about 6%, which is consistent with that in <cit.>, i.e. 5%-10%.
§.§ Sample cut
In Section <ref>, we obtained 63 planets and 84 planet candidates in star clusters with relatively robust age estimations. We aim to obtain the planet radius evolution, i.e. the planet radius–age distribution. The accuracy of the planet radius and age measurements significantly influences our results. Therefore, we need to cut the sample to minimize the influence of observational biases.
Here, we list the steps of the sample cut in Table <ref>. Without mass measurements, we can hardly determine whether the planet candidates are planets or brown dwarfs, and planet candidates with very large radii are unlikely to be planets. Thus, we exclude 23 planets/candidates with R_ p > 2.5 R_ J (the same criterion as described in <cit.>). Brown dwarfs, being more massive, can induce motion of the photocenter, and <cit.> suggest that stars with a high re-normalized unit weight error (RUWE>1.4) are likely to be binaries. Thus, we adopt the criterion RUWE<1.4 and exclude 9 planet candidates in potential binary systems.
Since planets with poor radius measurements may contaminate the results, we exclude 7 objects with relative radius errors larger than 50%. Due to the photometric precision of TESS and the stellar noise of young stars, the samples of small planets detected by TESS and of planets around young stars are less complete. Thus, we need to set a lower limit on the planet radius to exclude this completeness bias. As shown in Appendix <ref>, planets with radius R_ p>2 R_⊕ and period P<20 days are considered detectable (SNR>7.1) by both Kepler and TESS. Thus, we cut the sample at R_ p>2 R_⊕ and P<20 days; a short sketch of these cuts is given after Table <ref> below.
After the sample cut, 66 planets/candidates are left. In the next section, we mainly use this sample for the analysis.
Table 2. Sample Cut of Planets/Candidates in Clusters
Criterion Planets Planet candidates
The whole number 73 84
Age validation 63 84
R_ p < 2.5 R_ J 62 62
RUWE < 1.4 62 53
σ_ R_ p/R_ p< 0.5 58 50
P < 20 days 37 44
R_ p>2 R_⊕ 30 36
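The cuts in Table <ref> can be applied as a sequence of boolean filters; a minimal sketch is given below (Python/pandas, with hypothetical column names).

import pandas as pd

R_JUP_IN_REARTH = 11.21   # approximate Jupiter radius in Earth radii

cat = pd.read_csv("planets_in_clusters.csv")
# assumed columns: pl_rade (R_Earth), pl_rade_err, period_days, ruwe, age_validated (bool)

cat = cat[cat["age_validated"]]                                 # keep only convincing ages
cat = cat[cat["pl_rade"] < 2.5 * R_JUP_IN_REARTH]               # drop likely brown dwarfs / eclipsing binaries
cat = cat[cat["ruwe"] < 1.4]                                    # drop probable unresolved binaries
cat = cat[cat["pl_rade_err"] / cat["pl_rade"] < 0.5]            # drop poor radius measurements
cat = cat[(cat["period_days"] < 20) & (cat["pl_rade"] > 2.0)]   # completeness cut (SNR > 7.1 region)
print(len(cat), "planets/candidates after all cuts")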
§ PLANET RADIUS–AGE DISTRIBUTION
§.§ Planet Radius–Age Diagram
Figure <ref> shows the planetary size–age distribution of 37 planets and 44 planet candidates in star clusters (15 planets/candidates with R_ p<2 R_⊕ and 66 planets/candidates with 2 R_⊕< R_ p<2.5 R_ J).
Here, we classify planets into three groups by size for the sake of simplicity:
* Sub-Neptunes, i.e. planets of 2 R_⊕< R_ p < 4 R_⊕,
* Sub-Jupiters, i.e. planets of 4 R_⊕ < R_ p < 8 R_⊕,
* Jovian planets, i.e. planets of 8 R_⊕ < R_ p < 2.5 R_ J.
There are only 5 Jovian planets younger than 100 Myr, while there are dozens beyond 100 Myr. So it seems that there is a gap in the planet radius–age diagram for Jovian planets younger than 100 Myr. Additionally, there may be another gap for Sub-Jupiters with ages between 50 and 200 Myr: before 50 Myr, there are several Sub-Jupiters, while between 50 and 200 Myr the number of Sub-Jupiters declines to nearly zero. On top of that, there are nearly no Sub-Neptunes between 50 and 100 Myr.
Therefore, it seems that all of these planets disappear between 50 and 100 Myr. However, due to the small number of planets/candidates (i.e. the large statistical error), whether the gap is real cannot be easily demonstrated. In order to avoid observational bias, in the next subsection we take into account the age and radius errors of the planets to obtain the time-dependent relation for the proportion (instead of the number) of different-sized planets in star clusters.
§.§ The Evolution of Planet Radius
To investigate the time-dependent relation of different-sized planets in star clusters, we define the proportions of planets with different sizes and ages. To determine the proportions and their uncertainties, we randomize the ages and radii of all the planets 100,000 times, assuming Gaussian distributions. For each realization, we obtain the proportions, denoted as f_ i, via the formula:
f_ i = N_ i/N_ SubN+N_ SubJ+N_ J,
where N_i is the number of planets in star clusters in a given size class, i.e. N_ SubN, N_ SubJ, or N_ J, corresponding to the Sub-Neptunes, Sub-Jupiters, and Jovian planets of Section <ref>, respectively. After 100,000 realizations, we obtain the distribution of f_ i and adopt the lower limit, median value, and upper limit according to the 16th, 50th, and 84th percentiles of f_ i, respectively.
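A compact sketch of this Monte Carlo estimate for a single age bin is given below (Python/numpy); Gaussian perturbations are a simplification of the asymmetric error bars quoted in the catalog, and in the full calculation the ages are perturbed and the planets re-binned at every draw as well.

import numpy as np

R_JUP_IN_REARTH = 11.21

def fraction_vs_size(radius, radius_err, n_draws=100_000, seed=0):
    """Monte Carlo fractions of Sub-Neptunes, Sub-Jupiters and Jovian planets (16/50/84 percentiles).

    radius, radius_err: 1-D numpy arrays of planet radii and their errors in Earth radii.
    """
    rng = np.random.default_rng(seed)
    draws = rng.normal(radius, radius_err, size=(n_draws, radius.size))     # perturbed radii, one row per draw
    n_subn = ((draws > 2) & (draws < 4)).sum(axis=1)
    n_subj = ((draws >= 4) & (draws < 8)).sum(axis=1)
    n_jov = ((draws >= 8) & (draws < 2.5 * R_JUP_IN_REARTH)).sum(axis=1)
    total = np.maximum(n_subn + n_subj + n_jov, 1)                          # guard against empty draws
    fracs = np.vstack([n_subn, n_subj, n_jov]) / total
    return np.percentile(fracs, [16, 50, 84], axis=1)                       # rows: percentiles, columns: classes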
The results are shown in Figure <ref>. Note that the age range is cut at ≤ 1 Gyr because most of the selected planets/candidates in star clusters are younger than 1 Gyr (Figure <ref>). Panels (a) and (b) differ in the number of age bins, i.e. 4 and 8 age bins between 10 and 1000 Myr on a logarithmic scale, respectively. Both panels show that the proportion of Jovian planets (red diamonds, f_ J) increases before 100 Myr and then declines after 200 Myr, i.e. a peak occurs between 100 and 200 Myr. The proportion of Sub-Jupiters (blue squares, f_ SubJ) declines around 100 Myr. The proportion of Sub-Neptunes (gray circles, f_ SubN) shows a clear increase after 100 Myr. Here, we also consider the Poisson error due to the small number of planets in each bin, which yields features similar to Figure <ref> (see Figure <ref> in Appendix <ref>).
To extend the analysis to planets/candidates older than 1 Gyr, we calculate the proportions of planets of different sizes both in star clusters and around field stars. Here, we add ∼ 871 confirmed planets with age measurements from the NASA Exoplanet Archive. These confirmed planets share the same cuts as the planets/candidates in star clusters, i.e. 2R_⊕<R_ p<2.5 R_ J and P<20 days. Using the same estimation procedure as in Figure <ref>, we obtain the proportions varying with age, as shown in Figure <ref>.
In panels (a) and (b) of Figure <ref>, the proportions of Jovian planets (red diamonds, f_J), Sub-Jupiters (blue squares, f_SubJ), and Sub-Neptunes (gray circles, f_SubN) show time-dependent relations within 1 Gyr similar to those in Figure <ref>, i.e. f_J reaches its maximum between 100 and 200 Myr, f_SubJ rapidly declines around 100 Myr, and f_SubN increases after 100 Myr. Because Figure <ref> spans a longer time range than Figure <ref>, it shows more substructure. For example, all panels in Figure <ref> show a tiny bump of f_J around 2 Gyr, which is anti-correlated with f_SubN, i.e. a small dip of f_SubN around 2 Gyr. These two timescales, i.e. 100 Myr and 2 Gyr (gray shaded regions), may correspond to different planet formation environments (see discussion in <ref>), since the majority of planets younger than 300 Myr are in star clusters, while most planets older than 1 Gyr are around field stars.
To demonstrate that our results are robust, we must exclude the influence of other stellar parameters. For example, some of the planet host stars are very hot, i.e. T_eff > 7500 K, especially among the candidates detected by TESS. For main-sequence stars, hotter stars usually have larger stellar radii than cooler ones; therefore, the transit method tends to find larger planet candidates around hotter stars. In panel (c), we add another criterion for planets/candidates in star clusters and around field stars, i.e. T_eff < 7500 K. Although the proportion of Jovian planets f_J around 200 Myr is smaller than in panel (b) because of the additional sample cut, f_J still continuously increases between 100 and 400 Myr, i.e. the peak moves back to around 400 Myr. The proportion f_SubJ rapidly decreases around 100 Myr and then remains at ∼0.1 after 100 Myr, similar to panel (b). f_SubN does not show an obvious increase or decrease after 100 Myr.
The widely used definition of Hot Jupiters (HJs) is Jupiter-sized planets within 10 days <cit.>. Here, in panel (d), as a comparison, we also show the results for the conventional hot planets within 10 days. For Jovian planets and Sub-Jupiters, panel (d) shows results similar to panel (b). Therefore, in the following, we refer to Jovian planets within 20 days as HJs for simplicity (unless noted otherwise). For Sub-Neptunes, the increasing tendency after 100 Myr is ambiguous.
In panel (e), we show the time-dependent relation of planet radius for planets within 200 days. Because some warm planets are included, the increase of f_J around 100 Myr in panel (e) is smaller than in panel (b). As the majority of the warm planets are Sub-Neptunes, f_SubN in panel (e) is systematically higher than in panel (b) after 100 Myr; in turn, f_J in panel (e) is systematically lower than in panel (b) after 100 Myr. The time-dependent relation of f_SubJ in panel (e) is similar to the other panels.
In Figure <ref>, we do not find the obvious increasing trend in the proportion of Sub-Neptunes within 1 Gyr seen in Figure <ref>, since this trend appears sensitive to the parameter cuts.
To summarize this section, we obtain the time-dependent relation of planet radius for planets/candidates in star clusters and around both cluster members and field stars, i.e.
* The proportion of Jovian planets f_ J increases around 100 Myr and reaches maximum between 100 Myr and 200 Myr, which is mainly attributed to the HJs in star clusters. The tiny bump of f_ J around 2 Gyr is attributed to the HJs around field stars.
* The proportion of Sub-Jupiters f_SubJ declines rapidly around 100 Myr, then remains at a low value. The decline of f_SubJ is mainly attributed to the hot Sub-Jupiters in star clusters.
§ CONSTRAINTS OF HOT GIANT PLANETS FORMATION TIMESCALE
Based on the statistical results above, we try to explain or constrain the timescales of hot giant planet formation mechanisms in star clusters. Here, hot giant planets refer to HJs and hot Sub-Jupiters (or hot Neptunes).
There are several formation scenarios for HJs, i.e. in-situ formation when the disk mass is large, or ex-situ formation followed by disk migration or high-e migration. The timescales of the first two HJ formation scenarios are mainly limited by the lifetime of the gas disk, which is typically ∼10 Myr. Therefore, if in-situ formation and disk migration were the dominant channels of HJ formation, the number of HJs would not change significantly after 10 Myr. However, Figures <ref> and <ref> show that the proportion of HJs (f_J) has an obvious increase around ∼100 Myr, which is probably attributable to high-e migration.
Note, we do not exclude the possibility of HJs forming through the in-situ formation and disk migration. However, with a lack of clusters younger than 10 Myr, we can hardly constrain the in-situ formation mechanism and the fraction of such planets.
In the following discussion, we mainly focus on the increase of f_J and the rapid decline of f_SubJ around 100 Myr in star clusters. More specifically, in section <ref>, we estimate the timescale of flyby-induced high-e migration in star clusters using typical parameters. In section <ref>, we try to explain the tiny bump of HJs, as well as the small dip of Sub-Neptunes. The hot-Neptune desert is also discussed in section <ref>.
§.§ Flybys induced high-e migration in open clusters within 200 Myr
Recently, several observational works have shown that cluster environments can influence planet formation and evolution (). In star clusters, especially dense clusters, close stellar flybys may occur frequently. A series of theoretical works have shown that HJ formation can be triggered by stellar flybys in star clusters (). Similar to previous works, we consider hierarchical planetary systems with both a Jovian planet and an outer companion (e.g. a cold giant planet, a sub-stellar, or a stellar companion). The high-e migration of the Jovian planet induced by flybys can be described as follows. During a close flyby event, the flyby star exchanges angular momentum with the outer companion and excites its eccentricity and inclination. Consequently, the eccentricity of the Jovian planet is highly excited through the von Zeipel–Lidov–Kozai (ZLK) mechanism <cit.>. Finally, tidal circularization leads to the inward migration of the Jovian planet.
There are three factors determining the formation timescale of HJs under flyby-induced high-e migration in star clusters, i.e. the timescale of the close flyby (τ_flyby), of the ZLK mechanism (τ_ZLK), and of the tidal circularization (τ_tidal). <cit.> demonstrates that an effective stellar flyby means a flyby with a small periastron q, which can successfully trigger the ZLK oscillation and subsequently excite the high eccentricity of the inner Jovian planet. Here, we combine equations (17), (18), and (19) in <cit.> to obtain an estimate of τ_flyby, the timescale of an effective flyby, i.e.,
τ_ flyby = (10^3 pc^-3/n_*) (2 M_⊙/M_ tot) (50 AU/a_ out)(σ_*/1 km/s) Gyr,
where n_* is the stellar density in clusters, M_tot is the total mass of the hierarchical three-body system, a_out is the semi-major axis of the outer companion, and σ_* is the velocity dispersion of the star cluster. Here, we assume a Sun-Jupiter system plus a solar-like companion, i.e. M_tot ∼ 2 M_⊙. The other parameter settings are as follows: n_*=10^4 stars pc^-3, a_out = 50 AU, and σ_* = 1 km/s. The settings σ_*=1 km/s and a_out = 50 AU are the same as in <cit.>. For the stellar density n_*, some earlier works (e.g. ) assume that high stellar density can only exist in high-mass clusters. However, the recent work of <cit.> revealed that low-mass clusters share a similar stellar density of around 10^4 pc^-3 with high-mass clusters (e.g. the central area of the Orion nebula cluster), at least in the early stage of cluster evolution. Therefore, we set the typical density of star clusters to 10^4 stars pc^-3. Then, the timescale of an effective flyby τ_flyby is about 100 Myr.
The ZLK timescale (τ_ZLK) can be estimated as <cit.>:
τ_ZLK = P_in(M_tot,in/M_out)(a_out/a_in)^3(1-e^2_out)^3/2
where P_in is the orbital period of the inner Jovian planet, M_tot,in is the total mass of the Sun-Jupiter system, M_out is the mass of the outer companion, and e_out is the eccentricity of the outer companion. If we assume that the inner Jupiter forms outside the water ice line (around 2.7 AU), i.e. a_in ≥ 2.7 AU and P_in ≥ 4.4 yr, the typical ZLK timescale τ_ZLK is ≲ 0.3 Myr, which is much shorter than τ_flyby.
According to Figure 2 in <cit.>, an effective flyby can successfully drive the inner planet onto a highly eccentric orbit, typically with e larger than 0.99. For HJs in star clusters, the median orbital period is around 3 days, which corresponds to 0.04 AU around a solar-like star. If we set the final semi-major axis <cit.> after tidal circularization of a Jovian planet to 0.04 AU, the periastron of the inner planet q_in is about 0.02 AU. According to Figure 1 or Equation 5 in <cit.>, the typical tidal dissipation timescale of such a system is ≲ 100 Myr. Therefore, the typical formation timescale of HJs (τ_HJC) through flyby-induced high-e migration is ≲ 200 Myr under our parameter settings. Due to the uncertainty in the mass of the outer companions, τ_HJC may shift to several hundred Myr.
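For reference, the two timescale estimates above can be reproduced with the quoted typical parameters (a back-of-the-envelope sketch; order-unity prefactors are neglected, as in the text, and the function interfaces are ours):

# Order-of-magnitude timescales for flyby-induced high-e migration,
# using the typical parameter values quoted in the text.

def tau_flyby_gyr(n_star=1e4, m_tot=2.0, a_out=50.0, sigma=1.0):
    """Effective flyby timescale in Gyr for stellar density n_star [pc^-3],
    total system mass m_tot [M_sun], outer companion at a_out [AU],
    and velocity dispersion sigma [km/s]."""
    return (1e3 / n_star) * (2.0 / m_tot) * (50.0 / a_out) * (sigma / 1.0)

def tau_zlk_yr(p_in_yr=4.4, m_in=1.0, m_out=1.0, a_in=2.7, a_out=50.0, e_out=0.0):
    """Quadrupole ZLK timescale in years for an inner planet with period p_in_yr."""
    return p_in_yr * (m_in / m_out) * (a_out / a_in) ** 3 * (1.0 - e_out ** 2) ** 1.5

print(f"tau_flyby ~ {tau_flyby_gyr() * 1e3:.0f} Myr")   # ~100 Myr
print(f"tau_ZLK   ~ {tau_zlk_yr() / 1e6:.2f} Myr")      # ~0.03 Myr, i.e. << tau_flyby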
<cit.> reports that the occurrence rate of HJs is 0.5-1%, which is ten times the estimate of <cit.>. Interestingly, if we adopt the new stellar density value of <cit.> (i.e. n_*=10^4 pc^-3), the observed HJs can be successfully explained by flyby-induced high-eccentricity migration in star clusters. This may indicate that flyby-induced high-e migration is the dominant formation scenario of HJs in star clusters. As described for high-e migration, Jupiter-sized planets (i.e. warm Jupiters and cold Jupiters) may migrate inward and become HJs, which also results in a decline of warm Jupiters and cold Jupiters after ∼100 Myr. In Figure <ref>, we show the planetary radius–age distribution of 107 planets/candidates within 200 days in star clusters. There are 7 young warm Jupiters in clusters (WJs, 20 < P < 200 days, age < 100 Myr, green circles). However, there are no WJs in clusters with ages between 100 Myr and 1000 Myr. This may be another hint of flyby-induced high-e migration in star clusters.
Note that there are dozens of WJs with age measurements from the NASA Exoplanet Archive (20<P<200 days, 8R_⊕<R_p<2.5 R_J, blue diamonds in Figure <ref>). The majority of these WJs around field stars are older than 1 Gyr. The absence of WJs between 100 and 1000 Myr may indicate that WJs can hardly be sustained in cluster environments, i.e. such relatively dense environments favor the formation of HJs. However, due to the small number of warm planets in star clusters, we need more observational data.
§.§ High-e migration of HJs around field stars beyond 1 Gyr
In Figure <ref>, we find a tiny bump of proportion of Jovian planets f_ J around 2 Gyr, which is anti-correlated with f_ SubN, i.e. a small dip of f_ SubN around 2 Gyr.
High-e migration can explain the anti-correlation. During the inward migration of the Jovian planet, the inner planets can be ejected from the system due to planet-planet interactions <cit.>; that is, HJs after high-e migration are usually lonely <cit.>. Thus, the proportion of smaller planets, e.g. Sub-Neptunes and Super-Earths, declines.
Due to tidal dissipation during the high-e migration, the eccentricity of the HJs will decrease with time. Therefore, if the tiny bump in the proportion of Jovian planets f_ J around 2 Gyr is attributed to the HJs that form through high-e migration, we may also see a small bump in the eccentricity-age diagram around 2 Gyr.
To test this conjecture, we select 336 HJs with both eccentricity and age measurements from the NASA Exoplanet Archive, requiring the orbital period to be shorter than 20 days and the relative uncertainty of the planet radius to be no more than 50%. Figure <ref> shows the eccentricity–age distribution of the remaining 289 HJs. In panel (a), the red dots show the median eccentricity of each age bin as a function of age. The error bars are calculated from the 16th and 84th percentiles of the eccentricities in each age bin. Because the majority of the HJs have low eccentricity, the median eccentricity of each age bin is nearly zero and does not change with age.
However, the red dots have larger error bars at older ages, suggesting that the number of HJs with high eccentricity increases beyond 1 Gyr. Therefore, we calculate the relative ratio of high-eccentricity HJs in each age bin, as shown in panel (b) of Figure <ref>. Similar to the previous analysis, this ratio is calculated under the assumption of Gaussian distributions for the age and eccentricity of each HJ. We find that the ratios of HJs with e>0.1 and e>0.2 both increase rapidly beyond 1 Gyr, i.e. the differences between the two data points before and after 1 Gyr are 2σ and 12σ, respectively. After reaching a maximum around 2 Gyr, the ratio of high-eccentricity HJs declines with age due to tidal dissipation, as expected. Therefore, the eccentricity evolution also supports high-e migration for these HJs older than 1 Gyr.
Because HJs around field stars are the dominant population beyond 1 Gyr, as shown in Figure <ref>, we conclude that the bump of f_J around 2 Gyr is likely due to the formation of HJs around field stars through high-e migration. The 2 Gyr timescale is about ten times longer than the formation timescale of HJs through flyby-induced high-e migration in star clusters (τ_HJC ≲ 200 Myr). We explain this large difference via the different stellar-density environments: the HJs that form around field stars after 1 Gyr escaped the cluster environment at an early stage (<100 Myr). Because less dense dynamical environments lead to longer flyby timescales, the trigger time of high-e migration is probably much longer than in cluster environments.
§.§ Formation of Hot Neptune desert around 100 Myr
Several previous works have identified the hot Neptune desert in the planetary mass–period and radius–period distributions, e.g. <cit.>. In Figure <ref>, we find that the proportion of Sub-Jupiters within 20 days, f_SubJ, rapidly declines around 100 Myr and then remains at a low value. This decline is related to the hot Neptune desert and may indicate the formation timescale of the desert.
However, the Sub-Jupiters, classified via radius and period independently, do not coincide exactly with the hot-Neptune desert. According to <cit.>, the borders of the desert are period-dependent, i.e. at large radii the planet radius decreases with increasing period, while at small radii it increases with increasing period (the dashed lines in Figure <ref>).
Using the same region as <cit.>, we compare the time-dependent ratio of the number of planets inside and outside the hot Neptune desert to constrain the formation timescale of the desert.
In Figure <ref>, we show 107 planets/candidates in star clusters and 1991 other confirmed planets around field stars in the radius–period plane. We divide the planets/candidates in star clusters within 20 days into two groups. One is planets younger than 100 Myr (red circles). The other is planets older than 100 Myr (green diamonds).
We calculate the ratio of the number of planets inside and outside the hot Neptune desert, i.e. N_in/N_out, for both the younger and older groups. We then use a Monte Carlo simulation to obtain the distribution of N_in/N_out under the assumption of Gaussian distributions for planet radius, period, and age. The error of N_in/N_out is adopted from the 16th and 84th percentiles of the ratio distribution. We find that the younger group has a much higher N_in/N_out (0.80^+0.20_-0.19) than the older group (0.19^+0.03_-0.03), at nearly the 3.0σ confidence level. If we add a planet radius cut for detection completeness, i.e. R_p >2 R_⊕ (horizontal shadow line), the result is similar, i.e. the younger group has a higher N_in/N_out (0.67^+0.17_-0.13) than the older group (0.18^+0.03_-0.01) at the 3.67σ confidence level. Alternatively, if we use 300 Myr to distinguish the younger and older groups, the difference in N_in/N_out between the two groups decreases compared to the 100-Myr case, i.e. the median values are 0.48^+0.07_-0.05 for the young group and 0.31^+0.04_-0.04 for the old group. Therefore, we conclude that the rapid decline of f_SubJ around 100 Myr corresponds to the formation of the hot-Neptune desert around 100 Myr.
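The Monte Carlo estimate of N_in/N_out can be sketched as follows; the period-dependent desert boundaries are passed in as a callable, because the literature boundary coefficients are not reproduced here and the interface is ours:

import numpy as np

def desert_ratio(radius, radius_err, period, period_err, in_desert, n_draw=10_000, rng=None):
    """Monte Carlo estimate of N_in/N_out for the hot-Neptune desert.

    in_desert : callable (radius, period) -> boolean mask flagging planets inside
                the desert region (boundaries taken from the literature).
    Returns the 16th, 50th, and 84th percentiles of the ratio distribution.
    """
    rng = rng or np.random.default_rng(0)
    ratios = []
    for _ in range(n_draw):
        r = rng.normal(radius, radius_err)      # perturb radii
        p = rng.normal(period, period_err)      # perturb periods
        inside = in_desert(r, p)
        n_in, n_out = np.sum(inside), np.sum(~inside)
        if n_out > 0:
            ratios.append(n_in / n_out)
    return np.percentile(ratios, [16, 50, 84])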
<cit.> explains the two boundaries of the hot-Neptune desert with a combination of photoevaporation and high-e migration. For the lower boundary, photoevaporation operates effectively in the first few hundred million years and triggers atmospheric mass loss from Neptune-sized planets, which is consistent with the formation timescale we obtained. If high-e migration sculpted the lower boundary instead, the formation timescale would be much longer than 100 Myr, because hot Neptunes usually experience longer tidal circularization (≳ 1 Gyr). We therefore prefer photoevaporation as the mechanism sculpting the lower boundary of the hot-Neptune desert around 100 Myr.
For the upper boundary, photoevaporation does not seem to be a suitable explanation, because several works show that massive planets (M_p> 0.5 M_J) can resist photoevaporation even at extremely short periods <cit.>; photoevaporation would therefore predict a lower upper boundary (Figure 4 in <cit.>). However, in some specific cases, the rapid decrease of the radius of giant planets may explain the upper boundary of the hot-Neptune desert. E.g. <cit.> developed a model including radius inflation, photoevaporative mass loss, and Roche lobe overflow, which can trigger runaway mass loss of a puffy hot Saturn around 400 Myr. Because the desert already exists around 100 Myr, we do not consider such a mechanism to be dominant.
High-e migration can deliver Jovian planets from the outer region to the inner orbit. Then, the subsequent decay due to stellar tides will further sculpt the upper boundary. More specifically, both the tidal circularization timescale (planetary tides) and tidal decay timescale (stellar tides) decrease with increasing planetary radius or mass. As described in section <ref>, the typical formation timescale of HJs in clusters is ≲ 200 Myr, which is consistent with the formation timescale of the Hot-Neptune desert. Therefore, flyby-induced high-e migration could sculpt the upper boundary of the hot-Neptune desert in clusters.
The formation timescale of the hot-Neptune desert in star clusters is around 100 Myr. For field stars, the formation timescale of the hot-Neptune desert may be delayed to several Gyr because of the relatively slow high-e migration of HJs around field stars. However, due to the limited sample of young planets around field stars, we cannot yet find direct evidence for this.
Because the scenario of flyby-induced high-e migration can not only explain the increase of f_J around 100 Myr but may also sculpt the upper boundary of the hot-Neptune desert around 100 Myr, we prefer this scenario, i.e. a combination of photoevaporation and flyby-induced high-e migration could sculpt the hot-Neptune desert around 100 Myr.
§ CONCLUSION & DISCUSSION
Planets in young star clusters can help us understand planet formation and evolution because of their accurate age estimates. In section <ref>, we collect the largest catalog of 73 planets and 84 candidates in star clusters by cross-matching with K2020 and planets/candidates from the NASA Exoplanet Archive. We validate the age estimates of 70 planets/candidates in star clusters, obtain more reliable ages of three host stars via either the literature or the new gyrochronological relation, and exclude eight planetary systems without robust age estimates.
In section <ref>, we use this catalog to study the planet radius – age relation. The main statistical results are as follows:
* The proportion of Jovian planets f_ J increases around 100 Myr and reaches maximum between 100 Myr and 200 Myr, which is mainly attributed to the HJs in star clusters. The bump of f_ J around 2 Gyr is attributed to the HJs around field stars.
* The proportion of Sub-Jupiters f_SubJ declines rapidly around 100 Myr, then remains at a low value. The decline of f_SubJ is mainly attributed to the hot Sub-Jupiters in star clusters.
After discussing several possible scenarios to explain the results, we give two constraints on the hot giant exoplanet formation timescales in section <ref>:
* HJs likely form through flyby-induced high-e migration in star clusters within 200 Myr.
* A combination of photoevaporation and flyby-induced high-e migration in star clusters can sculpt the hot-Neptune desert around 100 Myr.
We find that flyby-induced high-e migration may be the dominant formation channel of HJs in star clusters. As described in section <ref>, those HJs in star clusters will be accompanied by an outer companion, which is an effective angular momentum transmitter during a close flyby event. Therefore, we hope to discover outer companions beyond these HJs with Radial Velocity observations from ground-based telescopes and astrometric data from the future data releases of Gaia.
Different from HJs in star clusters, HJs around field stars may have a much longer formation timescale (∼ 2 Gyr), which can be attributed to the different dynamical environments (section <ref>).
Note that in this paper we mainly focus on the ZLK mechanism to excite the high eccentricity of the inner Jovian planet. Other mechanisms, such as planet-planet scattering, can also trigger highly eccentric orbits. <cit.> demonstrates that only a very small fraction of HJs can form through the flyby-induced planet-planet scattering channel, i.e. the ZLK mechanism may be the dominant scenario of eccentricity excitation. However, we cannot exclude the possibility of planet-planet scattering. One way to definitively distinguish these two mechanisms is the stellar obliquity, the angle between a planet's orbital axis and its host star's spin axis. The ZLK mechanism predicts a bimodal stellar obliquity distribution, concentrated at 40^∘ and 140^∘ <cit.>, while planet-planet scattering after a convergent disk migration predicts a concentration of stellar obliquity around 90^∘ <cit.>.
Additionally, a hint from the absence of WJs around field stars between 100 Myr and 1000 Myr also supports the scenario of flyby-induced high-e migration (section <ref>). However, this absence may be due to observational bias; for instance, TESS preferentially discovers HJs rather than WJs because of its relatively short observational time span.
In the future, with the extended mission of TESS, the Earth 2.0 mission <cit.>, the Chinese Space Station Telescope <cit.>, and PLATO <cit.>, we hope to detect more young planets both in star clusters and around field stars. The subsequent astrometry data from Gaia and the follow-up Radial-Velocity observation (including the Rossiter-McLaughlin effect) from ground-based telescopes can also provide more information about warm planets and even outer companions. A larger sample of planets in clusters will benefit us to test different formation scenarios of HJs, as well as hot-Neptune deserts.
We thank Prof. Dr. Bo Ma for helpful recommendations to improve the paper. This work is supported by the National Natural Science Foundation of China (grant Nos. 11973028, 11933001, 1803012, 12150009) and the National Key R&D Program of China (2019YFA0706601). We also acknowledge the science research grants from the China Manned Space Project with No. CMS-CSST-2021-B12 and CMS-CSST-2021-B09, as well as Civil Aerospace Technology Research Project (D050105).
Software: astropy <cit.>, matplotlib <cit.>, pandas <cit.>.
§ SNR-AGE RELATION
In young clusters, stars are more active, and their activity may hide transit events, especially for small planets. Thus, the detection of small planets in young clusters is incomplete. To study the selection effects, we derive empirical relations between SNR and stellar age for planets of different sizes. We can then select suitable planet-radius criteria to cut our samples.
The SNR of a transiting planet is calculated as follows:
SNR = δ n^0.5/σ_*(t_ dur)
where δ=(R_p/R_*)^2 is the transit depth, n is the number of transits, and σ_*(t_dur) is the stellar photometric noise on the transit-duration timescale. The transit duration t_dur is given by:
t_ dur = PR_*√(1-e^2)/π a
where P is the planet's orbital period and a is the semi-major axis. In the calculation of the SNR, we assume that the host stars are solar-like (i.e. M_* = 1 M_⊙ and R_* = 1 R_⊙) and set P=20 days (because most of our selected planets in star clusters are within 20 days). Following <cit.>, we assume that the stellar noise changes with timescale as a simple power law:
σ_*(t) = σ_ LC(t/t_ LC)^ind_ CDPP,
where the noise σ_LC is normalized at the long-cadence integration time, t_LC = 1765.5 s, and ind_CDPP is the power-law index. We use the Combined Differential Photometric Precision <cit.> from Kepler DR25, which characterizes the noise level in Kepler light curves. Then, using the stellar kinematic ages from <cit.>, we can obtain the ind_CDPP–age and SNR–age relations.
For a rough calculation, we assume that the stars observed by Kepler and TESS are similar, i.e. their stellar noise evolution is similar. The major difference between the Kepler and TESS SNRs lies in the observation time t_obs, which determines the number of transits. Here, the observation time of the Kepler stars is ∼1450 days, while that of TESS is roughly two observation sectors, i.e. ∼54 days.
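A rough transcription of this SNR estimate (solar-like host, circular orbit) is given below; the interface and unit conversions are ours, and the noise normalization and power-law index must be supplied from the CDPP fit described above:

import numpy as np

def transit_snr(rp_earth, sigma_lc_ppm, ind_cdpp, t_obs_day, period_day=20.0,
                r_star_sun=1.0, m_star_sun=1.0):
    """Rough transit SNR following the formula above.

    rp_earth     : planet radius in Earth radii
    sigma_lc_ppm : photometric noise normalized at the long cadence t_LC = 1765.5 s
    ind_cdpp     : power-law index of the noise vs. timescale relation
    t_obs_day    : observing baseline (e.g. ~1450 d for Kepler, ~54 d for TESS)
    """
    depth = (rp_earth * 6371.0 / (696000.0 * r_star_sun)) ** 2          # (Rp/R*)^2
    a_au = (m_star_sun * (period_day / 365.25) ** 2) ** (1.0 / 3.0)     # Kepler's third law
    t_dur_s = period_day * 86400.0 * (r_star_sun * 696000.0 / 1.496e8) / (np.pi * a_au)
    sigma = sigma_lc_ppm * 1e-6 * (t_dur_s / 1765.5) ** ind_cdpp        # noise on t_dur
    n_transit = t_obs_day / period_day
    return depth * np.sqrt(n_transit) / sigma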
Figure <ref> shows the calculated SNR of planets as a function of age. Red, orange, and black hollow dots and dashed lines present the results for planets of different sizes (i.e. 1 R_⊕, 2 R_⊕, and 2.5 R_⊕) for TESS. The purple hollow dots and dashed line show the result for planets of 1 R_⊕ for Kepler. Since the data from <cit.> do not provide the CDPP of stars younger than 300 Myr, we simply extend the relation to 10 Myr through a log-linear extrapolation. The blue horizontal line marks the SNR of 7.1, above which we consider that TESS or Kepler can detect planets. To sum up, planets with radii larger than 2 R_⊕ can be detected by both TESS and Kepler. Thus, in section <ref>, we focus on planets with radii larger than 2 R_⊕ to exclude incomplete detection due to the stellar noise of young stars.
§ POISSON ERROR
Figure <ref> shows the time-dependent relation of the proportions of planets of different sizes with Poisson errors adopted. The error bar, i.e. the standard confidence interval for a Poisson parameter, is calculated through the chi-square distribution. Panel (a) includes planets/candidates in star clusters and is consistent with the results of Figure <ref>, although with larger uncertainties. Panel (b) includes planets/candidates in star clusters and around field stars and is consistent with the results of Figure <ref>, i.e. f_J increases before 100 Myr and then decreases around 1-2 Gyr.
|
http://arxiv.org/abs/2306.03594v1
|
20230606113129
|
Emotional Talking Head Generation based on Memory-Sharing and Attention-Augmented Networks
|
[
"Jianrong Wang",
"Yaxin Zhao",
"Li Liu",
"Tianyi Xu",
"Qi Li",
"Sen Li"
] |
cs.CV
|
[
"cs.CV"
] |
Given an audio clip and a reference face image, the goal of talking head generation is to generate a high-fidelity talking head video. Although some audio-driven methods for generating talking head videos have achieved promising results, most of them focus only on lip-audio synchronization and lack the ability to reproduce the facial expressions of the target person. To this end, we propose a talking head generation model consisting of a Memory-Sharing Emotion Feature extractor (MSEF) and an Attention-Augmented Translator based on U-net (AATU). Firstly, MSEF can extract implicit emotional auxiliary features from audio to estimate more accurate emotional face landmarks. Secondly, AATU acts as a translator between the estimated landmarks and the photo-realistic video frames. Extensive qualitative and quantitative experiments have shown the superiority of the proposed method over previous works. Codes will be made publicly available.
Index Terms: Audio-driven Talking Head Generation, Emotion, Memory-sharing, U-net, Attention
§ INTRODUCTION
Audio-driven realistic talking head video generation plays a very important role in multiple applications, such as film making <cit.>, video bandwidth reduction <cit.>, virtual avatars animation <cit.> and video conference <cit.>, etc. According to the previous work <cit.>, an ideal realistic talking head video should satisfy the following requirements, i.e., (1) the identity needs to be consistent with the target person, (2) the lip movements need to be synchronized with the audio content, (3) the videos should have natural facial expressions and head movements.
In the literature, some previous works have focused on generating lip-synchronized talking head videos <cit.>, but they ignored facial expression modeling. In recent years, there has been some work on generating expression-controlled talking head videos. Blinking motions were added in <cit.> to improve the realism of synthesized talking head videos, but the results were still unsatisfactory, i.e., the facial muscles looked stiff. <cit.> relied on neutral video recordings of the target person to generate emotional talking head videos, but the facial expressiveness of the generated results was still insufficient. <cit.> designed a model to generate a talking head video that is emotionally consistent with an emotional source video by accepting four inputs, namely an identity reference image, an audio clip, a predefined pose video and the emotional source video. However, this video-driven approach is limited by bandwidth, storage space, etc., and is not applicable in some cases, such as bandwidth-constrained video conferencing.
Based on the above research and analysis, our work is to design an audio-driven talking head generation model that accepts two inputs, i.e., an emotional audio clip and a reference facial image with the same emotion. The outputs are highly realistic videos of the target person. We believe that with the rapid development of photographic devices such as mobile phones and cameras, it should be easy to obtain such inputs. However, there are still two challenges in implementing such a model. Firstly, the facial pose of a person varies greatly across different emotional states. Secondly, rich facial expressions produce complex skin textures and facial shadows. To make the generated emotional talking head videos more realistic, we not only need to accurately predict the emotional facial landmarks, but also need to render the facial details during the regression from the landmarks to the images.
To solve these problems, we propose a new two-staged emotional talking head generation model. More precisely, in the first stage, because the emotional information in the audio is closely related to facial expressions, we explicitly extract the emotional features hidden in the audio as the auxiliary information. We train a Memory-Sharing Emotional Feature extractor (MSEF) in a supervised way, and propose a joint loss to change the optimization direction of the model to further improve the accuracy of the predicted landmarks. MSEF implicitly takes into account the relationship between different samples through the memory-sharing module with linear complexity, which is of great significance for extracting emotional features in audio. In the second stage, the predicted landmarks and the reference face image are fed into the Attention-Augmented Translator based on U-net (AATU) to generate photo-realistic talking head videos. AATU aims to focus on shallow details and important semantic features of the network simultaneously, reducing the loss of useful information and improving the model's performance, so that the output image can maintain more details such as skin texture and facial shadows of the target person.
To sum up, our contributions can be summarized as follows.
* We propose a novel model, a memory-sharing emotional feature extractor, to extract emotional features from audio signals. Using the extracted auxiliary features, the network can predict emotional face landmarks more accurately than previous works.
* An attention-augmented translator based on U-net is proposed to generate photo-realistic and emotional talking head video frames that preserve details such as skin texture and facial shadows.
* Qualitative and quantitative experiments on the MEAD dataset show that the model achieves high-quality emotional talking head video generation, which is significantly superior to previous works.
§ RELATED WORK
In the literature, there are two general approaches to talking head video generation. One is the end-to-end mapping from audio to talking head video <cit.>, and the other is the generation of talking head video through intermediate features, such as landmarks <cit.> and 3DMM parameters <cit.>.
The large number of parameters in end-to-end talking head generation models often leads to overfitting the training data. Furthermore, such networks tend to be more concerned with lip-audio synchronization, making it difficult to generate face animations with various natural facial poses and expressions. To this end, Zhou et al. <cit.> first proposed a cascaded model, i.e., first mapping audio to landmarks and then converting them to images, which reduced the influence of non-audio-related information in the video, such as the camera angle. However, they only focused on the lip motion of the image. Song et al. <cit.>, Thies et al. <cit.> and Zhang et al. <cit.> all regressed facial expression parameters of the 3DMM model, but they all relied on videos of the target portrait, which is not applicable in most scenarios. Furthermore, the 3DMM parameters can only represent the geometry of a face and do not render a natural talking head video with high-quality skin textures.
Based on the current state of research on talking head generation, we propose a novel two-stage talking head generation model, which guarantees the expressiveness of facial emotions while rendering the detailed skin texture of the target person well.
§ METHOD
In this study, the model we proposed is shown in Figure <ref>, which mainly includes two stages. The first stage is to predict the lip-sync and emotional face landmarks from the audio and the reference face image. The second stage is an attention-augmented translator based on U-net, which takes our predicted landmarks and the reference face image as input to generate photo-realistic and emotional talking head video frames. In the following subsections, each module is described in detail.
§.§ Memory-Sharing Emotional Feature Extractor
To predict the face landmarks of the avatar more accurately, we extract implicit emotional auxiliary features from the audio signal through the MSEF module; the specific network structure is shown in Figure <ref>.
We encode MFCC features as 128-dimensional feature vectors (f) through an MLP and introduce two memory units following the idea of <cit.>. The memory units are represented by linear layers, which can easily capture the global features in a single sample and can focus on the potential correlation between different samples. The emotional information in an audio clip is a global feature, and different audio clips may contain the same emotion. We believe that the memory-sharing units can exploit the correlation between different samples and thus extract emotional features more accurately: features corresponding to the same emotion in different audio clips should be treated consistently. The specific formula is as follows:
f_e = f + g(softmax(g(f) · M_1) · M_2),
where g(·) represents the convolution operation, and the rest symbols are shown in Figure <ref>.
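A possible PyTorch realization of this memory-sharing block is sketched below; the memory-slot size, the 1x1 kernel, and the use of two separate convolutions for the two occurrences of g(·) are our assumptions, since the text specifies only the overall structure of the formula above:

import torch
import torch.nn as nn
import torch.nn.functional as F

class MemorySharingUnit(nn.Module):
    """Sketch of f_e = f + g(softmax(g(f) @ M1) @ M2), with g a 1-D convolution
    and M1, M2 bias-free linear "memory" layers shared across samples."""

    def __init__(self, dim=128, mem_slots=64):
        super().__init__()
        self.g1 = nn.Conv1d(dim, dim, kernel_size=1)
        self.g2 = nn.Conv1d(dim, dim, kernel_size=1)
        self.m1 = nn.Linear(dim, mem_slots, bias=False)   # memory unit M1
        self.m2 = nn.Linear(mem_slots, dim, bias=False)   # memory unit M2

    def forward(self, f):
        # f: (batch, dim, time) audio feature sequence
        h = self.g1(f).transpose(1, 2)          # (batch, time, dim)
        attn = F.softmax(self.m1(h), dim=-1)    # address the shared memory slots
        mem = self.m2(attn).transpose(1, 2)     # read back a dim-sized feature
        return f + self.g2(mem)                 # residual connection as in the formula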
Then, we encode the emotional features into 8-dimensional feature vectors via an additional emotion classifier and introduce L_ec to supervise the training of this module. The loss function is formulated as follows:
L_ec = 1/N∑_i=1^N - [ y_i ln ŷ_i + (1 - y_i) ln(1 - ŷ_i)],
where y represents the real emotion label of audio and ŷ represents the predicted emotion category.
In addition, E_Lm and E_A are simple MLPs, encoding the landmarks and MFCCs into 512-dimensional and 128-dimensional feature vectors, respectively. The Audio2Lm module is composed of an LSTM and a fully connected layer.
To consider emotion without losing lip accuracy, we design a joint loss function. In addition to the L_ec mentioned above, we add L_landmark to the loss function to give the model the ability to regress face landmarks. At the same time, L_lip is added to make the model pay more attention to the lips. The specific formulas are as follows:
L_landmark = 1/N∑_i=1^N (L_real - L_fake)^2,
L_joint = L_pca + α L_landmark + β L_lip + γ L_ec.
where the hyperparameters α, β and γ are scaling factors, which we set to 10. L_real is the real face landmark and L_fake is the predicted face landmark; M_real and M_fake are the real and predicted lip landmarks used in L_lip. L_pca and L_lip are calculated in the same way as L_landmark and denote the loss on the PCA-reduced landmarks and on the landmarks of the lip region, respectively.
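A sketch of how the joint loss could be assembled is given below; the tensor layout, the lip-index selection, and the use of a standard cross-entropy for L_ec are illustrative assumptions, not the authors' exact implementation:

import torch
import torch.nn.functional as F

def joint_loss(lm_pred, lm_real, lip_idx, pca, emo_logits, emo_label,
               alpha=10.0, beta=10.0, gamma=10.0):
    """Joint loss sketch: L_pca + alpha*L_landmark + beta*L_lip + gamma*L_ec.

    lm_pred, lm_real : (batch, n_landmarks, 2) predicted / ground-truth landmarks
    lip_idx          : indices of the lip landmarks
    pca              : fitted projection matrix for the PCA-space term
    """
    l_landmark = F.mse_loss(lm_pred, lm_real)                         # full landmark set
    l_lip = F.mse_loss(lm_pred[:, lip_idx], lm_real[:, lip_idx])      # lip region only
    l_pca = F.mse_loss(lm_pred.flatten(1) @ pca, lm_real.flatten(1) @ pca)
    l_ec = F.cross_entropy(emo_logits, emo_label)                     # emotion classifier
    return l_pca + alpha * l_landmark + beta * l_lip + gamma * l_ec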
§.§ Attention-Augmented Translator based on U-net
To generate high-fidelity and emotional talking head video frames of the target person from the predicted landmarks, two challenges must be faced. Firstly, photo-realistic talking head video frames need to preserve skin texture and other details in order to better express emotions. Secondly, during the conversion from face landmarks to talking head video frames, a high degree of consistency with the target person's identity and a match with the predicted facial contours and lip shape must be ensured.
To meet these challenges, we build on the MakeItTalk <cit.> framework and propose an attention-augmented translator based on U-net (AATU) to further improve the quality of the generated video frames. As shown in Figure <ref>, we concatenate the predicted face landmarks with the reference face image along the channel dimension and take the result as the input of the encoder. The output of the decoder is photo-realistic and lip-synced talking head video frames. We add a CBAM <cit.> module to each of the first four layers of the encoder and decoder.
CBAM consists of two sub-modules, spatial attention and channel attention, and implements a sequential attention structure from channel to space. We believe that in this task, spatial attention enables the neural network to pay more attention to the pixel areas that determine facial expression and lip shape, while ignoring unimportant areas. Channel attention handles the distribution over the feature map channels. Moreover, distributing attention over the two dimensions reinforces the impact of the attention mechanism on model performance. The shallow layers of the U-net structure effectively avoid the loss of spatial information caused by fully connected layers, allowing the network to attend to skin texture and other details.
We use the L1 loss to supervise and train our network and, to enhance the quality of the generated talking head video frames, add an additional perceptual loss <cit.>. The specific formulas are as follows:
L1 = 1/N∑_i=1^N ||f - f̂||,
L_per = 1/N∑_i=1^N ||ϕ_i(I) - ϕ_i(Î)||,
where f and I represent the real video frame, f̂ and Î represent the generated frame, and ϕ_i represents the i-th feature extraction layer of the VGG-19 network <cit.>.
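The perceptual term can be sketched with torchvision's pre-trained VGG-19; the particular feature layers used here are assumptions, as the text does not specify them:

import torch
import torch.nn as nn
from torchvision import models

class PerceptualLoss(nn.Module):
    """Sketch of the VGG-19 perceptual loss; layer_ids are illustrative choices."""

    def __init__(self, layer_ids=(3, 8, 17, 26)):
        super().__init__()
        # Pre-trained VGG-19 feature extractor (recent torchvision weights API).
        self.vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features.eval()
        self.layer_ids = set(layer_ids)
        for p in self.vgg.parameters():
            p.requires_grad = False

    def forward(self, fake, real):
        loss, x, y = 0.0, fake, real
        for i, layer in enumerate(self.vgg):
            x, y = layer(x), layer(y)
            if i in self.layer_ids:
                loss = loss + torch.mean(torch.abs(x - y))   # L1 between feature maps
            if i >= max(self.layer_ids):
                break
        return loss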
§ EXPERIMENT AND RESULT
§.§ Implement Details
1) Dataset and Setup. The dataset we use to evaluate the model is the same as in EVP <cit.>, i.e., the MEAD dataset <cit.>. Other datasets, such as LRW <cit.> and VoxCeleb <cit.>, are not suitable in our case since they lack emotion labels, and the CREMA-D <cit.> dataset does not distinguish strongly between different types of emotions. MEAD is a large-scale, high-quality emotional audio-visual dataset consisting of 60 actors, 8 basic emotions, and 3 emotional intensities of talking head videos. The training and test sets are split in a ratio of 8:2. We convert all talking head videos to 25 fps and set the audio sample rate to 16 kHz. For the video streams, we use Dlib to detect the face landmarks of each frame. For the audio streams, we extract MFCCs with a window size of 25 ms and a hop size of 10 ms. Our network is implemented in PyTorch. We use the Adam optimizer with an initial learning rate of 1e-4 and adjust the learning rate through exponential decay (annealing).
2) Evaluation Metrics. In order to quantitatively evaluate different methods, we select common metrics in talking head generation. We used M-LMD and F-LMD to measure the accuracy of lip movements and facial contours. In addition, we use Structural Similarity Index Measure (SSIM) <cit.> and Peak Signal to Noise Ratio (PSNR) <cit.> to measure the quality of the generated talking head video frames.
3) Compared Methods. To the best of our knowledge, the available open-source works that consider emotional information include EVP <cit.> and EAMM <cit.>. However, EAMM partitions the dataset in a speaker-independent way, while our approach is speaker-dependent in the same way as EVP. To be fair, we compare our work with EVP, and our baseline model is based on ATVG <cit.> and MakeItTalk <cit.>. In addition, we also compare with Audio2Head <cit.>, which generates talking head videos based on motion fields and improves the realism of the videos by generating head movements.
§.§ Quantitative Result
“Ours w/o MSEF" denotes the model with only AATU added, and “Ours w/o AATU" denotes the model with only MSEF added. As can be seen from Table <ref>, when both MSEF and AATU are added, our model shows improvements in both emotion representation and image quality. Compared to EVP, our results show an increase of 0.66 in F-LMD and 3.79 in PSNR. The module proposed by EVP is helpful for lip accuracy. However, it relies on long neutral-emotion video recordings of the target person, requires filtering audio pairs with the same content but different emotions, and is slightly weaker in terms of the intensity of emotional expression.
Because Audio2Head is not a landmark-based method, we only compare the latter two metrics with it. Again, our method outperforms the Baseline and Audio2Head in all metrics. The Baseline lacks emotional information as an auxiliary feature and is slightly less effective in emotional face fitting. Audio2Head generates pixel-level talking head video frames based on motion fields, losing some important information about the speaker, which limits the quality of the generated images.
§.§ Qualitative Result
To visualize our comparison results, we also select some talking head video frames. As shown in Figure <ref>, our method generates highly realistic talking head videos with strong emotions. Specifically, the yellow boxes mark locations deficient in emotional expressiveness, the green boxes mark subtle artifacts, the red boxes mark poor lip synchronization, and the blue boxes mark poor identity consistency.
To further see the contribution of MSEF module to the accuracy of landmarks regression, we visualized the landmarks generated by different methods. It can be seen from Figure <ref> that the landmarks generated after adding the MSEF module are closest to the ground truth. Specifically, the red box locations are inaccurate concerning the lip shapes, and the blue box locations are inaccurate concerning eye shapes and facial contours.
§.§ User Study
In addition, we design a detailed user study to assess the overall quality of the talking head videos. We use three metrics to measure video quality, i.e., Lip Synchronization (LS), Emotional Expressiveness (EE) and Video-Perceived Quality (VPQ). A total of 30 participants completed our questionnaire and were asked to rate each video from 1 (worst) to 10 (best). As Table <ref> shows, although our lip sync is slightly worse than EVP, our method is superior in terms of emotional expressiveness and video-perceived quality. This is because EVP mainly focuses on lip synchronization, whereas our method models the whole face. Moreover, our method outperforms the Baseline and Audio2Head on all metrics.
§ CONCLUSION
In this work, we propose a novel emotional talking head generation model, which consists of a memory-sharing emotional feature extractor and an attention-augmented translator based on U-net. The MSEF module is proposed to better predict the face landmarks in the talking head video. The AATU module is proposed to better fit the facial details in the frames and improve the perceived video quality. Extensive experiments show that our method can generate lip-synced and emotional talking head videos. In the future, we will consider adding personalized head movements to the videos to further enhance realism.
|
http://arxiv.org/abs/2306.02135v1
|
20230603151902
|
The conflict between self-interaction and updating passivity in the evolution of cooperation
|
[
"Chaoqian Wang",
"Wenqiang Zhu",
"Attila Szolnoki"
] |
cond-mat.stat-mech
|
[
"cond-mat.stat-mech",
"cs.GT",
"nlin.CG",
"physics.soc-ph"
] |
The conflict between self-interaction and updating passivity in the evolution of cooperation

Chaoqian Wang [1] ([email protected]): Conceptualization; Methodology; Writing
Wenqiang Zhu [2,3]: Methodology; Validation
Attila Szolnoki [4]: Conceptualization; Validation; Writing
Corresponding author: [email protected]

[1] Department of Computational and Data Sciences, George Mason University, Fairfax, VA 22030, USA
[2] School of Mathematical Science, Dalian University of Technology, Dalian 116024, China
[3] Institute of Artificial Intelligence, Beihang University, Beijing 100191, China
[4] Institute of Technical Physics and Materials Science, Centre for Energy Research, P.O. Box 49, H-1525 Budapest, Hungary
In social dilemmas under weak selection, the capacity of a player to exhibit updating passivity or to interact with its own strategy can lead to conflicting outcomes. The central question is which effect is stronger and how their simultaneous presence influences the evolution of cooperation. We introduce a model that considers both effects using different weight factors. We derive theoretical solutions for the condition of cooperation success and the cooperation level under weak selection, scanning the complete parameter space. When the weight factors are equally strong, the promoting effect of self-interaction on cooperation surpasses the inhibitory effect of updating passivity. Intriguingly, however, we identify non-monotonous cooperation-supporting effects when the weight of updating passivity increases more rapidly. Our findings are corroborated by Monte Carlo simulations and demonstrate robustness across various game types, including the prisoner's dilemma, stag-hunt, and snowdrift games.
* Examining cooperation levels under weak selection
* Analyzing conflict between self-interaction and updating passivity
* Equally strong self-interaction outweighs updating passivity
* Rapid updating passivity growth reveals non-monotonous cooperation threshold
* Conclusions consistent across various game types
Social dilemma; Weak selection; Self-interaction; Updating passivity; Evolutionary game theory
July 31, 2023
=================
§ INTRODUCTION
The study of interactions and microscopic updating dynamics has been crucial for determining the conditions governing the evolutionary outcomes of competing strategies in social dilemmas <cit.>. Over the past two decades, extensive research has been conducted on this topic, leading to the establishment of some generally valid conclusions <cit.>. Specifically, fixed and stable interactions with partners—distinguishing well-mixed from structured populations—enhance direct reciprocity among neighbors <cit.>. Consequently, the term “network reciprocity” was proposed to emphasize its vital role in supporting cooperation mechanisms <cit.>. Intriguingly, variations in an individual's state over time can also yield significant consequences. The vast range of updating rules raises further questions concerning the resilience of cooperation against defection <cit.>. One might assume that the motivation for an individual to change their state (strategy) depends on the payoff values obtained by their current strategy and the alternatives offered by competitors. However, this hypothesis is not universally applicable, as other factors can also contribute to determining a strategy's fitness <cit.>. In such cases, the payoff has a marginal effect on reproductive success, leading to the establishment of the weak selection limit <cit.>. This scenario allows for analytically feasible solutions even in structured populations, making it a popular research direction in recent years <cit.>.
In line with this latter assumption, previous research has demonstrated that a certain level of inertia in strategy updates, wherein a player is unwilling to alter their current strategy despite contradicting payoff values, can be detrimental and hinder cooperation <cit.>. A similar effect can be achieved by imposing a weight factor dictating the willingness of a strategy change <cit.>. In contrast, extending the interaction range of the focal player to include not only nearest neighbors but also their own strategy as an opponent can produce opposite effects <cit.>. This self-interaction can be particularly justified in biologically inspired ecological systems, where an actor's offspring are in close proximity to the parent. Evidently, this extension benefits cooperators, as cooperator-cooperator interactions yield higher incomes than defector-defector bonds, regardless of the social dilemma's nature. This raises the question of which effect has a more dominant influence on the evolution of cooperation: the negative consequence of updating passivity or the positive effect of self-interaction?
To address this question, we consider a structured population with players distributed on a vertex-transitive graph, where players cannot distinguish their positions by observing the structure of the graph. We introduce two key control parameters that determine the strength of self-interaction and the extent of strategy updating passivity. As technical terms, we may refer to these as the weight of self-gaming and self-learning, respectively, whereby the aforementioned effects can be described as self-loops on interaction and learning graphs <cit.>. Our primary objective is to provide analytical results for the critical benefit-to-cost value within the parameter space of these weight factors. In addition to theoretical calculations, which incorporate the identity-by-descent method (IBD) <cit.> and pair approximation <cit.>, we also offer numerical simulations to present an overview of system behavior. To assess the robustness of our observations, we investigate all major social dilemmas based on pair interactions of agents, including the prisoner's dilemma, snowdrift, and stag-hunt games <cit.>.
§ MODEL
We capture the essence of a structured population by considering an L× L square lattice with periodic boundary conditions, hosting N=L^2 agents. According to this topology, each agent interacts with k neighbors, forming either a von Neumann (k=4) or a Moore (k=8) neighborhood. As a critical extension, we assume that a player interacts with their own strategy with weight w_I, while the interactions with the k neighbors are considered with weight 1-w_I. Similarly, the strategy updating protocol is divided: a player considers their own fitness with weight w_R (referred to as self-learning), while the fitness of neighbors is considered with weight 1-w_R.
Based on this hypothesis, we can define the joint transitive interaction and learning graph. The vertex set is denoted by V, containing all agents. On the interaction graph, the edge between agent i and j is denoted by e_ij^[I]. According to our assumption, we set e_ii^[I]=w_I, and e_ij^[I]=(1-w_I)/k if j is one of the k nearest neighbors of player i. For other j, e_ij^[I]=0. In this way, ∑_l∈ Ve_il^[I]=1 is normalized. The same protocol applies to the learning graph, where the edge between i and j is denoted by e_ij^[R]. Similarly, e_ii^[R]=w_R marks the self-loop and e_ij^[R]=(1-w_R)/k connects to the k neighbors. For all other players j, e_ij^[R]=0. Moreover, both graphs are undirected, hence e_ij^[I]=e_ji^[I] and e_ij^[R]=e_ji^[R] for each ij pair. The interaction and learning graphs overlap, with the only difference being the actual values of the weight factors w_I and w_R characterizing the self-loops on the graphs.
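For concreteness, the edge weights of either graph can be assembled as follows (a small dense numpy sketch; function and variable names are ours, and sparse matrices would be preferable for large lattices):

import numpy as np

def edge_weights(L, w_self, neighborhood="von_neumann"):
    """Edge-weight matrix of the interaction or learning graph on an L x L
    periodic lattice: weight w_self on the self-loop and (1 - w_self)/k on
    each of the k nearest neighbors."""
    n = L * L
    if neighborhood == "von_neumann":
        offsets = [(-1, 0), (1, 0), (0, -1), (0, 1)]                 # k = 4
    else:
        offsets = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                   if (dx, dy) != (0, 0)]                            # k = 8, Moore
    k = len(offsets)
    e = np.zeros((n, n))
    for x in range(L):
        for y in range(L):
            i = x * L + y
            e[i, i] = w_self                                         # self-loop
            for dx, dy in offsets:
                j = ((x + dx) % L) * L + (y + dy) % L                # periodic boundary
                e[i, j] = (1.0 - w_self) / k
    return e   # each row sums to 1: w_self + k * (1 - w_self)/k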
During an elementary Monte Carlo (MC) step, a random agent i is selected to update its strategy. The strategy of agent i is denoted by s_i=1 for cooperation or s_i=0 for defection. In the two-player donation game, a cooperator pays a cost c so that the other player receives a benefit b, where b>c>0. A defector refuses to invest but enjoys the benefit provided by a cooperator partner.
π_i=∑_l∈ Ve_il^[I](-cs_i+bs_l)=-cs_i+b∑_l∈ Ve_il^[I]s_l .
After calculating the payoff, we transform it into fitness with form F_i=exp(δπ_i) <cit.>. The parameter δ>0 represents the strength of selection, and we assume weak selection strength δ→ 0^+ in this work. To define the microscopic dynamics, agent i updates its strategy via the classic death-birth rule through the learning graph. Accordingly, the probability that agent i adopts the strategy of agent j is
W(s_i s_j)=e_ij^[R]F_j/∑_l∈ Ve_il^[R]F_l .
This form highlights that the selection process to adopt strategy s_j is proportional to the weighted fitness, where the weight factor is the edge value e_ij^[R] of the learning graph. To execute a full MC step, the above-described elementary step is repeated N times. In this way, every agent has a chance to update their strategy once on average.
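A minimal simulation sketch of one full MC step under these rules is given below (array-based, assuming edge-weight matrices such as those built above; it is an illustration, not the implementation used for the reported results):

import numpy as np

def monte_carlo_step(strategy, e_int, e_learn, b, c, delta, rng):
    """One full MC step of the death-birth dynamics (N elementary updates).

    strategy        : length-N array of 0/1 (defect/cooperate)
    e_int, e_learn  : N x N edge-weight matrices of the interaction and
                      learning graphs (including the self-loops w_I, w_R)
    """
    n = strategy.size
    for _ in range(n):
        i = rng.integers(n)                               # random focal agent
        payoff = -c * strategy + b * e_int @ strategy     # payoffs of all agents (Eq. above)
        fitness = np.exp(delta * payoff)                  # exponential fitness mapping
        prob = e_learn[i] * fitness                       # death-birth rule, unnormalized
        prob /= prob.sum()
        j = rng.choice(n, p=prob)                         # may pick j = i (self-learning)
        strategy[i] = strategy[j]
    return strategy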
§ THEORETICAL ANALYSIS
We assess the system's state by measuring the cooperation level, expressed as the proportion of cooperators in the system. Let the initial cooperation level be p_C(t_0)=N_C/N, where N_C denotes the number of cooperators at the initial time t=t_0. Cooperation ultimately dominates the system with probability p_C(t_0) under neutral drift (i.e., δ=0, which reduces dynamics to the voter model) <cit.>. Therefore, under weak selection (δ→ 0^+), evolution favors cooperation if ρ_C>p_C(t_0), where ρ_C represents the expected final cooperation level over numerous runs at a large t. For instance, if the system starts with a single cooperator and N-1 defectors, evolution favors cooperation when the final cooperation level ρ_C>1/N. Similarly, when the initial cooperation level is p_C(t_0)≈ 0.5 in a random state, cooperation is favored when the expected final cooperation level ρ_C>1/2.
§.§ The condition for cooperation success
For simplicity in analysis, we consider a single initial cooperator, denoted by 1. Following <cit.>, evolution favors cooperation if the condition in Eq. (<ref>) is met:
⟨∂/∂δ(ℬ_1-𝒟_1)⟩_[ δ=0; s_1=1 ]>0 .
Here, ⟨·⟩_[ δ=0; s_1=1 ] signifies the expectation under neutral drift when agent 1 cooperates. The probabilities of agent 1 reproducing or replacing its strategy are denoted by ℬ_1 and 𝒟_1, respectively.
Considering the standard death-birth updating process described in Eq. (<ref>), agent 1 reproduces its strategy to another agent i with probability ℬ_1 when agent i is the focal agent and learns agent 1's strategy through W(s_i s_1). Conversely, agent 1's strategy is replaced with probability 𝒟_1 when agent 1 is the focal agent and learns the strategy of another agent j through W(s_1 s_j). Thus, ℬ_1 and 𝒟_1 are defined as follows:
ℬ_1 =∑_i∈ V1/N W(s_i s_1)
=∑_i∈ V1/Ne_i1^[R]exp(δπ_1)/∑_l∈ Ve_il^[R]exp(δπ_l) ,
𝒟_1 =1/N∑_j∈ VW(s_1 s_j)
=1/N∑_j∈ Ve_1j^[R]exp(δπ_j)/∑_l∈ Ve_1l^[R]exp(δπ_l) .
By substituting Eqs. (<ref>) into Eq. (<ref>), we compute:
⟨∂/∂δ(ℬ_1-𝒟_1)⟩_[ δ=0; s_1=1 ]>0
⇔ ⟨π_1⟩_[ δ=0; s_1=1 ]-⟨∑_j,l∈ Ve_1j^[R]e_jl^[R]π_l⟩_[ δ=0; s_1=1 ]>0
⇔ π^(0,0)-π^(0,2)>0 .
Eq. (<ref>) employs random walk notation. Specifically, an (n,m)-random walk involves n steps on the interaction graph and m steps on the learning graph. The expected value of a variable at the end of an (n,m)-random walk is denoted by x^(n,m), where x may represent s, π, or F. Since the walk occurs through edges, we obtain the following expression for the initial cooperator 1:
π^(0,0)=⟨π_1⟩_[ δ=0; s_1=1 ], π^(0,2)=⟨∑_j,l∈ Ve_1j^[R]e_jl^[R]π_l⟩_[ δ=0; s_1=1 ] ,
which completes the final calculation step in Eq. (<ref>).
The payoff calculation in Eq. (<ref>) can also be straightforwardly rewritten using random walk terminology,
π^(n,m)=-cs^(n,m)+bs^(n+1,m) .
According to <cit.>, the following equation holds since we do not consider mutation:
s^(n,m)-s^(n,m+1)=μ/2(Np^(n,m)-1)+𝒪(μ^2) ,
where μ is an auxiliary parameter, which will be eliminated later, and 𝒪(μ^2)→ 0. Here, p^(n,m) denotes the probability of ending at the starting vertex after the (n,m)-random walk. We will discuss the proper calculation of p^(n,m) later.
Using Eq. (<ref>), we construct the following equation:
s^(n,m)-s^(n,m+2)
= (s^(n,m)-s^(n,m+1))+(s^(n,m+1)-s^(n,m+2))
= μ/2(Np^(n,m)+Np^(n,m+1)-2)+𝒪(μ^2) .
Next, we calculate the cooperation success condition given by Eq. (<ref>),
π^(0,0)-π^(0,2)>0
⇔ (-cs^(0,0)+bs^(1,0))-(-cs^(0,2)+bs^(1,2))>0
⇔ -c(s^(0,0)-s^(0,2))+b(s^(1,0)-s^(1,2))>0
⇔ -c(Np^(0,0)+Np^(0,1)-2)
+b(Np^(1,0)+Np^(1,1)-2)>0 ,
which uses Eq. (<ref>) to replace π^(n,m) with s^(n,m) first, and then employs Eq. (<ref>) to replace s^(n,m) with p^(n,m).
The actual forms of p^(n,m) values highlight the difference between our model and the classic case where self-loops are excluded. Without walking, one stays in the starting vertex, hence p^(0,0)=1. One cannot leave and return to the starting vertex within one step in the classic case; however, in our model, one can walk to itself because self-loop is allowed with weight w_R or w_I on the learning and interaction graphs. Therefore, p^(0,1)=w_R and p^(1,0)=w_I. Finally, there are two cases for p^(1,1): one walks to itself in both steps, thus staying in the original place, with probability w_Iw_R, or, one walks to an arbitrary neighbor in the first step and goes back to the exact starting vertex in the second step, with probability (1-w_I)(1-w_R)/k. Therefore, p^(1,1)=w_Iw_R+(1-w_I)(1-w_R)/k.
Substituting the values of p^(0,0), p^(0,1), p^(1,0), and p^(1,1) into Eq. (<ref>), we can finalize the calculation of cooperation success condition:
π^(0,0)-π^(0,2)>0
⇔ -c(N+Nw_R-2)
+b[Nw_I+N(w_Iw_R+(1-w_I)(1-w_R)/k)-2]>0
⇔ b/c>\frac{(N-2+N w_R)k}{N(k-1)w_I+N(k+1)w_I w_R+N-2k-Nw_R}
≡(b/c)^⋆ .
Accordingly, when b/c>(b/c)^⋆, cooperation is favored. The critical value (b/c)^⋆ depends only on the population size N, the degree k of vertices, and the weight factors w_I, w_R.
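As a quick numerical sanity check (an illustrative helper of our own), the threshold can be evaluated directly; for w_I=w_R=0 it reduces to k(N-2)/(N-2k), and it approaches the classic b/c>k rule as N→+∞:

```python
def bc_star(N, k, w_I, w_R):
    # critical benefit-to-cost ratio (b/c)^* derived above
    num = (N - 2 + N * w_R) * k
    den = N * (k - 1) * w_I + N * (k + 1) * w_I * w_R + N - 2 * k - N * w_R
    return num / den

# w_I = w_R = 0 recovers k(N-2)/(N-2k); large N recovers the classic b/c > k rule
assert abs(bc_star(25, 4, 0.0, 0.0) - 4 * 23 / 17) < 1e-12      # ~5.4118 on a 5x5 lattice
assert abs(bc_star(10**8, 4, 0.0, 0.0) - 4.0) < 1e-5
# equal self-loop weights lower the threshold, i.e. cooperation is promoted
assert bc_star(400, 4, 0.2, 0.2) < bc_star(400, 4, 0.1, 0.1) < bc_star(400, 4, 0.0, 0.0)
```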
Furthermore, by combining our expression for -c(Np^(0,0)+Np^(0,1)-2)+b(Np^(1,0)+Np^(1,1)-2) in Eq. (<ref>) with the results obtained in Refs. <cit.> for the death-birth updating rules, we derive the theoretical cooperation level:
ρ_C= \frac{N_C}{N}+\frac{N_C(N-N_C)}{2N(N-1)}\Big\{-c(N+Nw_R-2)
+b[Nw_I+N(w_Iw_R+(1-w_I)(1-w_R)/k)-2]\Big\}δ .
This term is a linear function of the benefit b and cost c, with six other parameters: the selection strength δ, the initial number of cooperators N_C, the population size N, the degree k, the self-weight for interaction w_I, and the self-weight for updating w_R. Because the linear expression in Eq. (<ref>) is not confined to the unit interval, we set ρ_C=0 if ρ_C<0 and ρ_C=1 if ρ_C>1 for self-consistency. It is important to note, however, that while the theoretical cooperation level predicted by Eq. (<ref>) approximates the results of MC simulations over a wide parameter range, it is only strictly accurate as δ→ 0^+ and (b/c)→ (b/c)^⋆.
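A direct implementation of this expression, including the clipping just described, can serve as a cross-check (an illustrative sketch; the particular test below reproduces the coefficients 23/768 and 17/3072 quoted later for the 5×5 lattice with w_I=w_R=0, N_C=N/2 and δ=0.01):

```python
def rho_C_donation(b, c, N, k, w_I, w_R, N_C, delta):
    # theoretical cooperation level of the donation game, clipped to [0, 1]
    bracket = -c * (N + N * w_R - 2) \
              + b * (N * w_I + N * (w_I * w_R + (1 - w_I) * (1 - w_R) / k) - 2)
    rho = N_C / N + N_C * (N - N_C) / (2 * N * (N - 1)) * bracket * delta
    return min(1.0, max(0.0, rho))

# check against the quoted 5x5 result rho_C = 1/2 - (23/768) c + (17/3072) b
b, c = 6.0, 1.0
expected = 0.5 - 23 / 768 * c + 17 / 3072 * b
assert abs(rho_C_donation(b, c, 25, 4, 0.0, 0.0, 12.5, 0.01) - expected) < 1e-12
```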
§.§ The conflict between self-interaction and updating passivity
Table <ref> summarizes the main results concerning the threshold (b/c)^⋆ for cooperation success, including the reduced form of (b/c)^⋆ under specific parameters (w_R=0, w_I=0, w_R=w_I≡ w) and the large population limit (N→ +∞).
From these results, we can make several observations. On the one hand, when w_R=0, the dependence of (b/c)^⋆ on w_I highlights the direct impact of “self-gaming”: an increase in weight factor w_I consistently reduces (b/c)^⋆, thereby fostering cooperation. Fig. <ref>(a) demonstrates several representative cases for this function, and the effect of self-interaction is robust across different population sizes. Furthermore, in the w_I → 1 limit, the cooperation success condition becomes (b/c)^⋆>1 or b>c, resulting in a consistent preference for cooperation. On the other hand, when w_I=0, the dependence of (b/c)^⋆ on w_R echoes the system behavior previously reported in Ref. <cit.>: an increase in strategy updating inertia consistently raises (b/c)^⋆, thus hindering cooperation. This effect is depicted in Fig. <ref>(b), where (b/c)^⋆ is plotted as a function of w_R at w_I=0 for various system sizes. A finite system size exhibits a unique behavior: beyond a certain w_R threshold, (b/c)^⋆<0 and cooperation success requires an unattainable b/c<(b/c)^⋆ condition.
These system behaviors elucidate the conflict between self-interaction and updating passivity. Our primary aim is to determine whether the positive influence of self-interaction or the negative effect of updating passivity prevails. To address this question, we first set w_R=w_I≡ w and analyze the impact of w, which intuitively signifies an equal weight of self-loops on both interaction and learning graphs. According to Table <ref>, an increase in w promotes cooperation. However, the overall system behavior is more intricate, as demonstrated when calculating the critical (b/c)^⋆ value across the complete w_R-w_I parameter plane. Fig. <ref> displays the full landscape of the critical benefit-to-cost ratio for small, practically large, and infinite system sizes. As a technical note, we have omitted details when (b/c)^⋆>10 and (b/c)^⋆<0 to ensure visibility. These ranges represent the parameter areas where achieving cooperation success is extremely challenging or impossible.
As w_I increases along the horizontal axis, the threshold value consistently declines, while the increase of w_R along the vertical axis results in a larger (b/c)^⋆. It is important to note that the origin is located at the upper-left corner of the parameter plane.
Although the effect of w_I outweighs that of w_R, so that moving along the w_R=w_I line favors cooperation [see Fig. <ref>(a)], w_R causes a more dramatic increase in (b/c)^⋆ when it is large while w_I remains small, leading to a “high hill” in that region of the parameter plane. This feature can result in non-monotonic effects on the cooperation success threshold when traversing the surface. For instance, by adopting a simple linear relationship, w_R=3w_I, we can observe that an increase in w_I initially hinders cooperation before promoting it, as illustrated in Fig. <ref>(b). As shown in Fig. <ref>(b), numerous trajectories on the w_R-w_I plane can be found along which (b/c)^⋆ first increases and subsequently decreases with w_I (first ascending and then descending the hill of large w_R values). For simplicity, we will only present the relevant phenomenon using the straightforward linear constraint w_R=3w_I for the remainder of this work.
Additionally, we can analytically determine the expression of the contour line of the “hill” at an arbitrary height (b/c)^⋆. On the w_R-w_I plane, we can identify a curve representing the same value of (b/c)^⋆ by solving the expression of (b/c)^⋆ in Table <ref> and obtaining w_R as a function of w_I, denoted by w_R^*,
w_R^*=\frac{N(k-1)(b/c)^⋆ w_I+(N-2k)(b/c)^⋆-(N-2)k}{Nk+N(b/c)^⋆-N(k+1)(b/c)^⋆ w_I} .
In particular, the contour line maintaining the classic (b/c)^⋆ value (by setting w_R=0 and w_I=0) can be obtained by substituting this expression of (b/c)^⋆ into Eq. (<ref>). In this case, we have
w_R^*=\frac{(N-2)(k-1)w_I}{2(N-k-1)-(N-2)(k+1)w_I} ,
which is the particular contour we use in the remainder of this work. As shown in Fig. <ref>(c), along the constraint given by Eq. (<ref>), (b/c)^⋆ remains invariant.
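The invariance of (b/c)^⋆ along this contour is easy to confirm numerically (an illustrative sketch; the two helpers simply re-express the threshold and the contour formula above, and the chosen N, k and w_I values are arbitrary):

```python
def bc_star(N, k, w_I, w_R):
    num = (N - 2 + N * w_R) * k
    den = N * (k - 1) * w_I + N * (k + 1) * w_I * w_R + N - 2 * k - N * w_R
    return num / den

def wR_contour(w_I, N, k, bc):
    # w_R^* keeping (b/c)^* fixed at the prescribed value bc
    num = N * (k - 1) * bc * w_I + (N - 2 * k) * bc - (N - 2) * k
    den = N * k + N * bc - N * (k + 1) * bc * w_I
    return num / den

N, k = 400, 4
bc0 = bc_star(N, k, 0.0, 0.0)           # classic threshold at w_I = w_R = 0
for w_I in (0.0, 0.1, 0.2):
    w_R = wR_contour(w_I, N, k, bc0)    # reduces to (N-2)(k-1)w_I / [2(N-k-1) - (N-2)(k+1)w_I]
    assert abs(bc_star(N, k, w_I, w_R) - bc0) < 1e-10
```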
§ NUMERICAL SIMULATION
In this section, we verify our theoretical conclusions through Monte Carlo simulations. Initially, we assign each agent a random strategy of cooperation or defection, resulting in an approximate initial number of cooperators N_C≈ N/2 and an initial cooperation level of p_C(t_0)≈ 1/2. As discussed at the beginning of Section <ref>, evolution favors cooperation if ρ_C>1/2. We record the final cooperation level (p_C=0 or p_C=1) in the last MC step for each run. If the system does not reach fixation before the maximum step of t=400000 <cit.>, we record the actual cooperation level (0<p_C<1) in the last MC step. The expected cooperation level ρ_C under the given parameter values is obtained by averaging the outcomes of many independent runs. We investigate three representative population sizes: 5×5, 20×20, and 100×100 square lattices, where N=25, 400, and 10,000, respectively. Based on our empirical exploration of the system's relaxation, we set δ=0.01 for the 5×5 and 20×20 lattices, averaging the outcomes of 10^6 and 10^4 runs, respectively, while for the 100×100 lattice, we set δ=0.1 and record the outcome of a single run.
Fig. <ref> illustrates the results of Monte Carlo simulations for the donation game (DG) for the three trajectories discussed previously, with the horizontal axis representing a fixed value of c=1 while b varies. In the first row, where L=5, the theoretical cooperation level is ρ_C=1/2-23/768c+17/3072b when w_I=0 and the threshold for cooperation success is (b/c)^⋆≈ 5.4118. If w_I=0.1, we have ρ_C=1/2-17/512c+31/4096b and (b/c)^⋆≈ 4.3871. Finally, when w_I=0.2, ρ_C=1/2-7/192c+1/96b and (b/c)^⋆=3.5000. In the second panel, where L=20, ρ_C=1/2-199/399c+7/57b and (b/c)^⋆≈ 4.0612 for w_I=0. If w_I=0.1, we have ρ_C=1/2-73/133c+41/266b and (b/c)^⋆≈ 3.5610. Lastly, at w_I=0.2, we have ρ_C=1/2-239/399c+79/399b and (b/c)^⋆≈ 3.0253. The third panel displays the results for the lattice with L=100. With w_I=0, we obtain ρ_C=1/2-1249750/9999c+312250/9999b and (b/c)^⋆≈ 4.0024. For w_I=0.1, the results yield ρ_C=1/2-152750/1111c+43375/1111b and (b/c)^⋆≈ 3.5216. Finally, at w_I=0.2, we have ρ_C=1/2-1499750/9999c+499750/9999b and (b/c)^⋆≈ 3.0010. The comparison of different (b/c)^⋆ values confirms that the simultaneous increase of weight factors promotes cooperation.
The second row of Fig. <ref> illustrates the scenario where we follow the w_R=3w_I trajectory. For L=5, when w_I=0, we obtain ρ_C=1/2-23/768c+17/3072b and (b/c)^⋆≈ 5.4118. With w_I=0.1, we have ρ_C=1/2-61/1536c+83/12288b and (b/c)^⋆≈ 5.8795. Lastly, for w_I=0.2, we find ρ_C=1/2-19/384c+1/96b and (b/c)^⋆=4.7500. When L=20 and w_I=0, ρ_C=1/2-199/399c+7/57b and (b/c)^⋆≈ 4.0612. Here, w_I=0.1 results in ρ_C=1/2-37/57c+113/798b and (b/c)^⋆≈ 4.5841. Finally, at w_I=0.2, we have ρ_C=1/2-319/399c+79/399b and (b/c)^⋆≈ 4.0380. For L=100 and w_I=0, we obtain ρ_C=1/2-1249750/9999c+312250/9999b and (b/c)^⋆≈ 4.0024. At w_I=0.1, ρ_C=1/2-1624750/9999c+359125/9999b and (b/c)^⋆≈ 4.5242. If w_I=0.2, we derive ρ_C=1/2-1999750/9999c+499750/9999b and (b/c)^⋆≈ 4.0015. In this case, when the self-learning weight factor increases more rapidly than the self-interaction weight, we observe a non-monotonous shift in the threshold values. Initially, the increase in weight factors inhibits cooperation, but later encourages it.
The third row of Fig. <ref> presents the case when w_R=w_R^* is maintained according to Eq. (<ref>). For L=5 and w_I=0, we obtain ρ_C=1/2-23/768c+17/3072b and (b/c)^⋆≈ 5.4118. At w_I=0.1, we have ρ_C=1/2-23/608c+17/2432b and (b/c)^⋆≈ 5.4118. For w_I=0.2, we find ρ_C=1/2-23/408c+1/96b and (b/c)^⋆≈ 5.4118. If L=20 and w_I=0, we have ρ_C=1/2-199/399c+7/57b and (b/c)^⋆≈ 4.0612. At w_I=0.1, we find ρ_C=1/2-15721/26201c+553/3743b and (b/c)^⋆≈ 4.0612. Lastly, at w_I=0.2, we have ρ_C=1/2-15721/19551c+79/399b and (b/c)^⋆≈ 4.0612. For L=100 and w_I=0, we obtain ρ_C=1/2-1249750/9999c+312250/9999b and (b/c)^⋆≈ 4.0024. At w_I=0.1, we find ρ_C=1/2-412275645232534375/2748504191533056c+824056460724841625/21988033532264448b and (b/c)^⋆≈ 4.0024. Finally, at w_I=0.2, we obtain ρ_C=1/2-12495702011469625/62466004353024c+499750/9999b and (b/c)^⋆≈ 4.0024. In conclusion, since (b/c)^⋆ values remain invariant, the increase of weight factors does not influence the threshold level for cooperation success. However, it is evident that the slope of the theoretical cooperation level ρ_C increases with a larger weight factor.
§ EXTENSION TO ALTERNATIVE GAMES
To examine the robustness of our findings, we can extend the model to different types of games. In an arbitrary two-player game, the strategy of agent i can be denoted by a vector 𝐬_i=(s_i,1-s_i)^T, where 𝐬_i=(1,0)^T represents cooperation and 𝐬_i=(0,1)^T denotes defection. Similar to Eq. (<ref>), the payoff that agent i receives through the interaction graph can be calculated using:
π_i=∑_l∈ V(e_il^[I]𝐬_i^T·𝐌·𝐬_l) ,
where 𝐌 is a 2× 2 payoff matrix,
𝐌=[ ℝ 𝕊; 𝕋 ℙ ] .
In conventional notation, ℝ denotes the reward for mutual cooperation, 𝕋 signifies the temptation to defect, 𝕊 represents the sucker's payoff, and ℙ indicates the punishment for defection.
The arbitrary game in Eq. (<ref>) reduces to the donation game in Eq. (<ref>) when 𝐌 takes the following form:
𝐌=[ b-c -c; b 0 ] .
According to the structure coefficient theorem <cit.>, the condition for the success of cooperation in a general two-player game can be expressed as:
σℝ+𝕊>𝕋+σℙ ,
where σ is the structure coefficient independent of the payoff matrix. We can determine the σ value here by the results of the donation game. By applying the payoff matrix in Eq. (<ref>) to Eq. (<ref>) and comparing it with the cooperation success condition of the donation game using Eq. (<ref>), we obtain:
σ=\frac{(b/c)^⋆+1}{(b/c)^⋆-1} ,
where (b/c)^⋆ is provided by Eq. (<ref>).
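As a consistency check (illustrative only; the helper names are ours), the structure coefficient condition can be evaluated directly and compared with the donation-game threshold:

```python
def bc_star(N, k, w_I, w_R):
    num = (N - 2 + N * w_R) * k
    den = N * (k - 1) * w_I + N * (k + 1) * w_I * w_R + N - 2 * k - N * w_R
    return num / den

def sigma(N, k, w_I, w_R):
    bc = bc_star(N, k, w_I, w_R)
    return (bc + 1) / (bc - 1)

def cooperation_favoured(R, S, T, P, N, k, w_I, w_R):
    s = sigma(N, k, w_I, w_R)
    return s * R + S > T + s * P        # structure coefficient condition

# for the donation game the general condition reduces to b/c > (b/c)^*
N, k, w_I, w_R, c = 400, 4, 0.1, 0.1, 1.0
for b in (2.0, 3.0, 4.0, 5.0):
    assert cooperation_favoured(b - c, -c, b, 0.0, N, k, w_I, w_R) == \
           (b / c > bc_star(N, k, w_I, w_R))
```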
Furthermore, we can derive the theoretical cooperation level ρ_C from the structure coefficient theorem and the expression of ρ_C for the donation game. The core idea is to write both the donation game and an arbitrary game in the same form via the structure coefficient theorem in Eq. (<ref>), without multiplying or dividing by any quantity along the way, and then to substitute the part of the arbitrary game that corresponds to the donation-game expression back into Eq. (<ref>).
Applying σ=[(b/c)^⋆+1]/[(b/c)^⋆-1] to the donation game, the condition σ(b-c)-c>b is equivalent to -(σ+1)c+(σ-1)b>0, and comparison with -c(Np^(0,0)+Np^(0,1)-2)+b(Np^(1,0)+Np^(1,1)-2) shows that -(σ+1)c+(σ-1)b equals this expression multiplied by 2k/{(N-2+Nw_R)k-[N(k-1) w_I+N(k+1) w_I w_R+N-2k-Nw_R]}. For an arbitrary game, the corresponding part of the structure coefficient theorem, σℝ+𝕊>𝕋+σℙ⇔σ(ℝ-ℙ)+(𝕊-𝕋)>0, is σ(ℝ-ℙ)+(𝕊-𝕋); dividing it by the same factor 2k/{(N-2+Nw_R)k-[N(k-1) w_I+N(k+1) w_I w_R+N-2k-Nw_R]} and substituting it back into the corresponding position of Eq. (<ref>), we obtain the theoretical cooperation level ρ_C for arbitrary two-player games as follows:
ρ_C= \frac{N_C}{N}+\frac{N_C(N-N_C)}{4N(N-1)}\Big\{ (N-2+Nw_R)(ℝ+𝕊-𝕋-ℙ)
+\frac{N(k-1) w_I+N(k+1) w_I w_R+N-2k-Nw_R}{k}(ℝ-𝕊+𝕋-ℙ)\Big\}δ ,
which is a function of ℝ, 𝕊, 𝕋, and ℙ, with six other parameters as mentioned below Eq. (<ref>).
It is essential to clarify that the above approach for deducing ρ_C for arbitrary two-player games is not rigorous, although, as we will see later, it predicts the Monte Carlo simulations well.
In the following, we apply the general results to three representative social games, which include the prisoner's dilemma game, the stag-hunt game, and the snowdrift game.
§.§ The prisoner's dilemma game
For simplicity, we consider the so-called weak prisoner's dilemma game (PD) <cit.>, where the temptation is the only control parameter. The payoff matrix is:
𝐌=[ 1 0; b_PD 0 ] .
According to Eq. (<ref>), the threshold of cooperation success b_PD^⋆ is
b_PD^⋆ =σ
=\frac{(k+1)-4k/N+(k-1) w_I+(k+1) w_I w_R+(k-1) w_R}{(k-1)-(k-1) w_I-(k+1) w_I w_R+(k+1) w_R}
and evolution favors cooperation if b_PD<b_PD^⋆. Moreover, we have the theoretical cooperation level ρ_C by substituting Eq. (<ref>) into Eq. (<ref>):
ρ_C= \frac{N_C}{N}+\frac{N_C(N-N_C)}{4N(N-1)}\Big\{ (N-2+Nw_R)(1-b_PD)
+\frac{N(k-1) w_I+N(k+1) w_I w_R+N-2k-Nw_R}{k}(1+b_PD)\Big\}δ ,
which is a function of b_PD, with six other parameters.
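The non-monotonic behaviour of this threshold along the w_R=3w_I trajectory, discussed next, can be illustrated with a few lines of Python (the helpers and parameter values are our own choices):

```python
def bc_star(N, k, w_I, w_R):
    num = (N - 2 + N * w_R) * k
    den = N * (k - 1) * w_I + N * (k + 1) * w_I * w_R + N - 2 * k - N * w_R
    return num / den

def b_PD_star(N, k, w_I, w_R):
    # threshold of the weak prisoner's dilemma, b_PD^* = sigma
    bc = bc_star(N, k, w_I, w_R)
    return (bc + 1) / (bc - 1)

# along w_R = 3 w_I the threshold first decreases (inhibiting cooperation), then increases
N, k = 400, 4
t0, t1, t2 = (b_PD_star(N, k, w, 3 * w) for w in (0.0, 0.1, 0.2))
assert t1 < t0 and t1 < t2    # the intermediate weight gives the smallest threshold
```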
Fig. <ref> shows the cooperation level ρ_C as a function of the temptation when w_R=3w_I, which provides the most illustrative example of the conflict between self-interaction and updating passivity. The ρ_C values obtained by Monte Carlo simulations agree with the theoretical prediction of Eq. (<ref>). Meanwhile, moving from w_I=0 to w_I=0.1 and w_I=0.2, the threshold b_PD^⋆ for cooperation success first decreases and then increases, meaning that the increase of the weight factors first inhibits and later promotes cooperation.
§.§ The stag-hunt game
The stag-hunt game (SH), as delineated in previous studies <cit.>, employs a payoff matrix with a single parameter r_SH, presented below.
𝐌=[ 1 -r_SH; r_SH 0 ] .
Drawing from Eq. (<ref>), the cooperation success threshold r_SH^⋆ is given by
r_SH^⋆ =σ/2
=\frac{(k+1)-4k/N+(k-1) w_I+(k+1) w_I w_R+(k-1) w_R}{2[(k-1)-(k-1) w_I-(k+1) w_I w_R+(k+1) w_R]} .
Evolution favors cooperation when r_SH<r_SH^⋆. Furthermore, the theoretical cooperation level, ρ_C, can be expressed as
ρ_C= \frac{N_C}{N}+\frac{N_C(N-N_C)}{4N(N-1)}\Big\{ (N-2+Nw_R)(1-2r_SH)
+\frac{N(k-1) w_I+N(k+1) w_I w_R+N-2k-Nw_R}{k}(1+2r_SH)\Big\}δ .
This formulation of ρ_C is a function of r_SH, incorporating six other parameters.
In a similar vein, Fig. <ref> illustrates the cooperation level ρ_C as a function of r_SH, given w_R=3w_I. The numerical ρ_C derived from Monte Carlo simulations aligns with the theoretical prediction computed via Eq. (<ref>). As before, the non-monotonic variation in the threshold value is observable when altering w_I from 0 to w_I=0.1 and w_I=0.2.
§.§ The snowdrift game
The snowdrift game (SD) <cit.> also features a single-parameter payoff matrix of r_SD:
𝐌=[ 1 1-r_SD; 1+r_SD 0 ] .
In accordance with Eq. (<ref>), the cooperation success threshold, r_SD^⋆, is
r_SD^⋆ =σ/2
=\frac{(k+1)-4k/N+(k-1) w_I+(k+1) w_I w_R+(k-1) w_R}{2[(k-1)-(k-1) w_I-(k+1) w_I w_R+(k+1) w_R]} .
Cooperation is favored by evolution when r_SD<r_SD^⋆. The difference in 𝕊 and 𝕋 between the stag-hunt and snowdrift games cancels within the structure coefficient theorem, Eq. (<ref>), so the critical cooperation success thresholds and the theoretical cooperation levels coincide,
ρ_C= \frac{N_C}{N}+\frac{N_C(N-N_C)}{4N(N-1)}\Big\{ (N-2+Nw_R)(1-2r_SD)
+\frac{N(k-1) w_I+N(k+1) w_I w_R+N-2k-Nw_R}{k}(1+2r_SD)\Big\}δ .
Fig. <ref> displays the cooperation level ρ_C as a function of r_SD, with w_R=3w_I. The cooperation success threshold r_SD^⋆ initially diminishes and subsequently escalates, mirroring the findings in the stag-hunt game context.
Comparing the prisoner's dilemma, the stag-hunt, and the snowdrift games, the most striking effect of w_I can be detected in the prisoner's dilemma.
§ CONCLUSION
Prosocial behavior, such as paying a minor cost to provide a significant benefit to another individual, may appear to contradict human beings' inherently selfish nature. Intuitively, if self-loop interactions are considered, where individuals pay the small cost to themselves and directly receive the large benefit, cooperation should emerge without depending on external conditions, such as network structure and updating rules. At the same time, previous research has demonstrated that increasing the self-loop in the updating process hinders cooperation <cit.>. This poses a potential paradox: does the self-loop in evolutionary game dynamics result in the positive effect of self-interaction or the negative effect of updating passivity?
The answer is nuanced, as the final outcome depends on the relative contributions of the two factors. Consequently, in this study, we introduced separate self-weights for playing games and for updating strategies to characterize self-interaction and updating passivity. We analyzed the basic social dilemma, the donation game, on a square lattice and derived theoretical solutions for the cooperation success condition and the cooperation level under weak selection. Our initial findings confirm that self-interaction consistently fosters cooperation, whereas updating passivity persistently inhibits it. Building upon this, we found that the positive effect of self-interaction on cooperation outweighs the inhibitory influence of updating passivity, indicating that an equal increase of both self-loop weights in evolutionary game dynamics indeed promotes cooperation.
Nevertheless, updating passivity can exert a substantial inhibitory effect on cooperation, although such severe inhibition occurs when updating passivity is large and self-interaction is small. We can derive constant cooperation success threshold contours on the w_R-w_I plane. This suggests that even along a simple trajectory, for example, w_R=3w_I, self-loops may have a non-monotonic impact on cooperation. We observed that under the aforementioned constraint (i.e., when updating passivity is triple the self-interaction), cooperation is initially impeded but subsequently facilitated as self-interaction increases.
Moreover, we generalized our findings to encompass diverse games, examining three classic examples: the prisoner's dilemma, stag-hunt, and snowdrift. Our conclusions remain consistent across these different game scenarios. Future research could extend these conclusions to arbitrary network structures <cit.>, further broadening the understanding of cooperation dynamics on general social structures.
§ ACKNOWLEDGEMENT
A.S. was supported by the National Research, Development and Innovation Office (NKFIH) under Grant No. K142948.
§ PAIR APPROXIMATION
To broaden our theoretical study and evaluate the robustness of our findings, we employ the pair approximation method <cit.>. As a type of mean-field approximation, the pair approximation belongs to a distinct technique family compared to the IBD method. Notably, pair approximation investigates dynamics within an infinite population. We will demonstrate that the same result as N→ +∞, obtained using the IBD method, can also be acquired through pair approximation.
We summarize several useful equations derived from binomial theory for swift application in subsequent calculations. Assuming k is a positive integer, k_C is an integer between 0 and k, 0≤ k_C≤ k, and z is a proportional quantity, 0≤ z≤ 1, we have:
∑_k_C=0^kk!/k_C!(k-k_C)!z^k_C(1-z)^k-k_C=1,
∑_k_C=0^kk!/k_C!(k-k_C)!z^k_C(1-z)^k-k_Ck_C=kz,
∑_k_C=0^kk!/k_C!(k-k_C)!z^k_C(1-z)^k-k_Ck_C^2=kz[1+(k-1)z],
∑_k_C=0^kk!/k_C!(k-k_C)!z^k_C(1-z)^k-k_Ck_C(k-k_C)=k(k-1)z(1-z).
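These identities are elementary consequences of the binomial theorem and can be confirmed numerically in a few lines (a trivial check, with arbitrarily chosen k and z):

```python
from math import comb

k, z = 4, 0.3
p = [comb(k, kc) * z**kc * (1 - z)**(k - kc) for kc in range(k + 1)]   # binomial weights
assert abs(sum(p) - 1.0) < 1e-12
assert abs(sum(kc * p[kc] for kc in range(k + 1)) - k * z) < 1e-12
assert abs(sum(kc**2 * p[kc] for kc in range(k + 1)) - k * z * (1 + (k - 1) * z)) < 1e-12
assert abs(sum(kc * (k - kc) * p[kc] for kc in range(k + 1)) - k * (k - 1) * z * (1 - z)) < 1e-12
```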
§.§ Constructing the system
We denote the proportion of C-players and D-players in the system as p_C and p_D, respectively. The probability of finding a C-player or D-player in the neighborhood of an X-player is denoted by q_C|X and q_D|X, where X represents either C or D. The proportion of edges connecting a pair of X- and Y-players is denoted as p_XY, where X and Y may represent C, D. Due to constraints, their relations are as follows:
p_C+p_D =1,
q_C|X+q_D|X =1,
p_XY =q_X|Y p_Y,
p_CD =p_DC.
In total, we have nine variables: p_C, p_D, q_C|C, q_C|D, q_D|C, q_D|D, p_CC, p_CD, and p_DD. According to Eq. (<ref>), the system can be described using only two independent variables: p_C and q_C|C. The remaining seven variables can be expressed as functions of p_C and q_C|C as follows:
p_CC =q_C|C p_C,
p_D =1-p_C,
q_D|C =1-q_C|C,
p_CD =p_C p_D|C=p_C(1-q_C|C),
q_C|D =p_CD/p_D=p_C(1-q_C|C)/1-p_C,
q_D|D =1-q_C|D=1-2p_C+p_Cq_C|C/1-p_C,
p_DD =p_Dq_D|D=1-2p_C+p_Cq_C|C.
§.§ Updating a D-player
First, we examine the case where the focal agent is a D-player. Let there be k_C cooperators surrounding this D-player. In this scenario, the D-player's payoff is given by:
π_D=1-w_I/kk_C b=(1-w_I)k_C/kb.
The expected payoff for a C-player around this focal D-player is
π_C|D= w_I (-c+b)+1-w_I/k{
-c+∑_k_C'=0^k-1(k-1)!/k_C'!(k-k_C'-1)!q_C|C^k_C' q_D|C^k-k_C'-1[-(k-1)c+k_C' b]}
= -c+w_I b+1-w_I/k(k-1)q_C|Cb,
where the C-player plays the game with itself with weight w_I. Moreover, it pays a cost c to the focal D-player and the remaining k-1 neighbors, receiving b from k_C' cooperators among the k-1 remaining neighbors. The summation is computed using Eqs. (<ref>) and (<ref>).
Likewise, the expected payoff for a D-player surrounding the focal D-player is
π_D|D= 1-w_I/k∑_k_C'=0^k-1(k-1)!/k_C'!(k-k_C'-1)!q_C|D^k_C' q_D|D^k-k_C'-1k_C' b
= 1-w_I/k(k-1)q_C|Db.
We proceed to convert the payoff to fitness F = exp(δπ). According to the death-birth process, the focal D-player becomes a C-player with probability
𝒫(D C)= (1-w_R)/k· k_C F_C|D/w_R F_D+(1-w_R)/k· [k_C F_C|D+(k-k_C)F_D|D]
= (1-w_R)k_C/k+w_R(1-w_R)(π_C|D-π_D)k_C/kδ
+(1-w_R)^2 (π_C|D-π_D|D)k_C (k-k_C)/k^2δ+𝒪(δ^2),
where we conduct a second-order Taylor expansion at δ = 0. We can utilize the first- or second-order terms later based on our requirements.
Summing over all k_C, we derive the probability that the number of C-players in the system increases by 1 as follows:
𝒫(Δ p_C=1/N)= (1-p_C)
∑_k_C=0^kk!/k_C!(k-k_C)!q_C|D^k_C q_D|D^k-k_C𝒫(D C)
= (1-p_C)(1-w_R)q_C|D+(1-p_C)w_R(1-w_R)(
π_C|D-(1-w_I)1+(k-1)q_C|D/kb
)q_C|Dδ
+(1-p_C)(1-w_R)^2 (π_C|D-π_D|D)k-1/kq_C|Dq_D|Dδ +𝒪(δ^2),
which happens when a focal D-player is chosen with probability 1-p_C and adopts the strategy of a neighboring C-player in all possible scenarios involving k_C cooperative neighbors. The summation is calculated using Eqs. (<ref>), (<ref>), and conducting a second-order Taylor expansion, as the δ^0 term will be eliminated later.
Upon the occurrence of the learning event 𝒫(D C), the proportion of CC-edges in the system alters accordingly. Given k_C cooperative neighbors surrounding the focal D-player, k_C edges of CD transform to CC-edges, and the proportion of CC-edges increases by 2k_C/(kN),
𝒫(Δ p_CC=2k_C/kN)=(1-p_C)k!/k_C!(k-k_C)! q_C|D^k_Cq_D|D^k-k_C𝒫(D C) .
Summing over all possible values of k_C yields the expected changes in the proportion of CC-edges:
∑_k_C=0^k2k_C/kN𝒫(Δ p_CC=2k_C/kN)=
(1-p_C)∑_k_C=0^k2k_C/kNk!/k_C!(k-k_C)!q_C|D^k_C q_D|D^k-k_C((1-w_R)k_C/k+𝒪(δ))
= 2(1-p_C)/kN(1-w_R)q_C|D[1+(k-1)q_C|D]+𝒪(δ).
In this case, the summation is computed using Eq. (<ref>), and we perform only a first-order Taylor expansion because, as we will observe later that the δ^0 term will not be eliminated.
§.§ Updating a C-player
Next, we analyze the scenario where the focal agent is a C-player. Similarly, let us assume that there are k_C cooperators surrounding this C-player. In this situation, the payoff for the C-player is given by:
π_C=w_I(-c+b)+1-w_I/k(-kc+k_C b)=-c+w_I b+(1-w_I)k_C/kb.
The expected payoff for a C-player neighboring the focal C-player is
π_C|C= w_I (-c+b)+1-w_I/k{
-c+b+∑_k_C'=0^k-1(k-1)!/k_C'!(k-k_C'-1)!q_C|C^k_C' q_D|C^k-k_C'-1[-(k-1)c+k_C' b]}
= -c+w_I b+1-w_I/k[1+(k-1)q_C|C]b.
In this case, the focal C-player, as well as the remaining k-1 neighbors, leads to a cost c, while the focal C-player and k_C' cooperators among the k-1 remaining neighbors bring a benefit b. The C-player also engages in the game with itself, with weight w_I.
The expected payoff for a D-player in the vicinity of the focal C-player is
π_D|C= 1-w_I/k(b+
∑_k_C'=0^k-1(k-1)!/k_C'!(k-k_C'-1)!q_C|D^k_C' q_D|D^k-k_C'-1k_C' b)
= 1-w_I/k[1+(k-1)q_C|D]b.
Analogous to Eq. (<ref>), the focal C-player transforms into a D-player with probability:
𝒫(C D)= (1-w_R)/k· (k-k_C) F_D|C/w_R F_C+(1-w_R)/k· [k_C F_C|C+(k-k_C)F_D|C]
= (1-w_R)k-k_C/k+w_R(1-w_R)(π_D|C-π_C)k-k_C/kδ
+(1-w_R)^2 (π_D|C-π_C|C)k_C (k-k_C)/k^2δ+𝒪(δ^2),
and the probability of the number of C-players in the system decreasing 1 is
𝒫(Δ p_C=-1/N)= p_C
∑_k_C=0^kk!/k_C!(k-k_C)!q_C|C^k_C q_D|C^k-k_C𝒫(C D)
= p_C(1-w_R)q_D|C+p_Cw_R(1-w_R)[
π_D|C-(-c+w_Ib+(1-w_I)k-1/kq_C|Cb)
]q_D|Cδ
+p_C(1-w_R)^2 (π_D|C-π_C|C)k-1/kq_C|Cq_D|Cδ +𝒪(δ^2),
where a second-order Taylor expansion is performed for the same reason as in Eq. (<ref>).
If the learning event 𝒫(C D) occurs, the proportion of CC-edges in the system decreases by 2k_C/(kN) when k_C cooperative neighbors surround the focal C-player,
𝒫(Δ p_CC=-2k_C/kN)=p_Ck!/k_C!(k-k_C)! q_C|C^k_Cq_D|C^k-k_C𝒫(C D).
Taking into account all possibilities, the expected decrease in the proportion of CC-edges is given by
∑_k_C=0^k(-2k_C/kN)𝒫(Δ p_CC=-2k_C/kN)=
-p_C∑_k_C=0^k2k_C/kNk!/k_C!(k-k_C)!q_C|C^k_C q_D|C^k-k_C((1-w_R)k-k_C/k+𝒪(δ))
= -2p_C/kN(1-w_R)(k-1)q_C|Cq_D|C+𝒪(δ).
§.§ Diffusion approximation
We can now formulate the system dynamics of p_C and p_C|C by employing the previously derived results. Utilizing Eqs. (<ref>) and (<ref>), along with p_CD=p_C q_D|C=(1-p_C) q_C|D, we compute the instantaneous change in p_C as
ṗ_C= 1/N𝒫(Δ p_C=1/N)+(-1/N)𝒫(Δ p_C=-1/N)
= p_CD/Nw_R(1-w_R)[(π_C|D-π_D|C)+(-c+w_Ib
+(1-w_I)-1+(k-1)(q_C|C-q_C|D)/kb)]δ
+p_CD/N(1-w_R)^2 [(π_C|D-π_D|D)q_D|D+(π_C|C-π_D|C)q_C|C]k-1/kδ +𝒪(δ^2).
Observe that the δ^0 terms are eliminated, and only non-zero terms remain from the δ^1 terms. This is why we executed the Taylor expansion to the second-order δ^1 in Eqs. (<ref>) and (<ref>).
Analogously, using Eqs. (<ref>) and (<ref>), we determine the instantaneous change in p_CC as
ṗ_CC= ∑_k_C=0^k2k_C/kN𝒫(Δ p_CC=2k_C/kN)+
∑_k_C=0^k(-2k_C/kN)𝒫(Δ p_CC=-2k_C/kN)
= 2p_CD/kN(1-w_R)[1+(k-1)(q_C|D-q_C|C)]+𝒪(δ).
Here, the δ^0 term is non-zero, which is why we only need to perform the Taylor expansion to the first-order δ^0 in Eqs. (<ref>) and (<ref>).
Based on Eq. (<ref>) and q_C|C=p_CC/p_C, we calculate the instantaneous change in q_C|C:
q̇_C|C= d/dt(p_CC/p_C)
= ṗ_CCp_C-ṗ_C p_CC/p_C^2
= 2/kNp_CD/p_C(1-w_R)[1+(k-1)(q_C|D-q_C|C)]+𝒪(δ),
where we only employed the δ^0 term in both p_CC and p_C, as it is non-zero.
Comparing the governing equations, Eq. (<ref>) and Eq. (<ref>), we observe that the change in q_C|C illustrated by Eq. (<ref>) is substantially faster than the change in p_C portrayed by Eq. (<ref>). This is because the magnitude of q̇_C|C is δ^0, while the magnitude of ṗ_C is only δ^1 in the δ→ 0^+ limit, leading to the emergence of different time scales.
Owing to the distinct time scales, q_C|C relaxes much faster than p_C. In other words, we can first determine the equilibrium of q_C|C and then examine the dynamics of p_C based on this foundation.
To compute the equilibrium of q_C|C, we solve q̇_C|C=0 according to Eq. (<ref>). The solution is q_C|C-q_C|D=1/(k-1). Combining Eq. (<ref>), we can represent the remaining variables of the system by only p_C:
q_C|C =k-2/k-1p_C+1/k-1,
q_D|C =k-2/k-1(1-p_C),
p_CD =k-2/k-1p_C(1-p_C),
q_C|D =k-2/k-1p_C,
q_D|D =1-k-2/k-1p_C.
According to Eq. (<ref>), we still need to calculate π_C|D-π_D|C, (π_C|D-π_D|D)q_D|D, and (π_C|C-π_D|C)q_C|C. Applying Eq. (<ref>) and considering Eqs. (<ref>), (<ref>), (<ref>), and (<ref>), we obtain:
π_C|D-π_D|C= -c+w_I b+1-w_I/k(k-1)q_C|Cb-1-w_I/k[1+(k-1)q_C|D]b
= -c+w_I b,
(π_C|D-π_D|D)q_D|D= [-c+w_I b+1-w_I/k(k-1)q_C|Cb-1-w_I/k(k-1)q_C|Db]
(1-k-2/k-1p_C)
= (-c+w_I b+1-w_I/kb)(1-k-2/k-1p_C),
(π_C|C-π_D|C)q_C|C= {-c+w_I b+1-w_I/k[1+(k-1)q_C|C]b-1-w_I/k[1+(k-1)q_C|D]b}
×(k-2/k-1p_C+1/k-1)
= (-c+w_I b+1-w_I/kb)(k-2/k-1p_C+1/k-1).
Substituting Eqs. (<ref>) and (<ref>) into Eq.(<ref>), we can express ṗ_C solely in terms of p_C:
ṗ_C=\frac{k-2}{(k-1)N}p_C(1-p_C)(1-w_R)\Big\{-(1+w_R)c+\frac{[(k-1)w_I+(k+1)w_I w_R+1-w_R]b}{k}\Big\}δ +𝒪(δ^2),
which bears a resemblance to the replicator dynamics in well-mixed populations. There are two equilibria, p_C^*=0 and p_C^**=1. In accordance with the standard stability analysis <cit.>, when b/c<(b/c)^⋆, the system is stable at p_C^*, and defection prevails. When b/c>(b/c)^⋆, the system is stable at p_C^**, and cooperation prevails. Here,
(b/c)^⋆=\frac{(1+w_R)k}{(k-1)w_I+(k+1)w_I w_R+1-w_R},
which is consistent with the results of the IBD method when N→ +∞. Finally, to rigorously complete the theoretical deduction, we proceed to calculate the fixation probability.
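The claimed consistency can be confirmed numerically by comparing the finite-population threshold with the pair-approximation expression as N grows (an illustrative sketch with arbitrarily chosen parameters):

```python
def bc_star(N, k, w_I, w_R):
    # finite-population threshold derived in the main text
    num = (N - 2 + N * w_R) * k
    den = N * (k - 1) * w_I + N * (k + 1) * w_I * w_R + N - 2 * k - N * w_R
    return num / den

def bc_star_pair_approx(k, w_I, w_R):
    # pair-approximation threshold, valid for an infinite population
    return (1 + w_R) * k / ((k - 1) * w_I + (k + 1) * w_I * w_R + 1 - w_R)

k, w_I, w_R = 4, 0.1, 0.3
limit = bc_star_pair_approx(k, w_I, w_R)
gaps = [abs(bc_star(N, k, w_I, w_R) - limit) for N in (25, 400, 10_000, 1_000_000)]
assert all(g1 > g2 for g1, g2 in zip(gaps, gaps[1:]))   # the gap shrinks monotonically with N
assert gaps[-1] < 1e-3
```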
§.§ Fixation probability
To determine the fixation probability, one must solve the Kolmogorov backward equation <cit.>. We begin by defining the following two quantities:
E(Δ p_C)≃ [1/N𝒫(Δ p_C=1/N)+(-1/N)𝒫(Δ p_C=-1/N)]Δ t
= k-2/(k-1)Np_C(1-p_C)(1-w_R){
-(1+w_R)c+[(k-1)w_I+(k+1)w_I w_R+1-w_R]b/k
}δΔ t
≡ m(p_C)Δ t,
Var(Δ p_C)≃ [(1/N)^2𝒫(Δ p_C=1/N)+(-1/N)^2𝒫(Δ p_C=-1/N)]Δ t
= 2(k-2)/(k-1)N^2p_C(1-p_C)(1-w_R)Δ t
≡ v(p_C)Δ t.
Subsequently, we obtain
-2m(p_C)/v(p_C)=-N/k{
-(1+w_R)kc+[(k-1)w_I+(k+1)w_I w_R+1-w_R]b
}δ,
and
G(p_C)= exp(
-∫2m(p_C)/v(p_C) dp_C
)
= exp(
-N/k{
-(1+w_R)kc+[(k-1)w_I+(k+1)w_I w_R+1-w_R]b
}δ p_C+C_0
)
= (
1-N/k{
-(1+w_R)kc+[(k-1)w_I+(k+1)w_I w_R+1-w_R]b
}δ p_C
)C̃_0 +𝒪(δ^2),
where C_0 and C̃_0=expC_0 are constants arising from integral calculations.
We denote the initial cooperation level at t_0 as p_C(t_0), and the fixation probability of cooperation, starting with the cooperation level p_C(t_0), as ϕ_C[p_C(t_0)]. To determine ϕ_C[p_C(t_0)], we solve the following equation:
0=m[p_C(t_0)]dϕ_C [p_C(t_0)]/dp_C(t_0)+
v[p_C(t_0)]/2d^2ϕ_C [p_C(t_0)]/dp_C^2(t_0)
with boundary conditions ϕ_C(0)=0, ϕ_C(1)=1. The solution is
ϕ_C[p_C(t_0)]= \frac{∫_0^{p_C(t_0)}G(p_C)\, dp_C}{∫_0^{1}G(p_C)\, dp_C}
= \frac{\left.\left(p_C-\frac{N}{2k}\{-(1+w_R)kc+[(k-1)w_I+(k+1)w_I w_R+1-w_R]b\}δ\, p_C^2\right)C̃_0\,\right|_0^{p_C(t_0)}}{\left.\left(p_C-\frac{N}{2k}\{-(1+w_R)kc+[(k-1)w_I+(k+1)w_I w_R+1-w_R]b\}δ\, p_C^2\right)C̃_0\,\right|_0^{1}}
= p_C(t_0)+p_C(t_0)[1-p_C(t_0)]\,\frac{\frac{N}{2k}\{-(1+w_R)kc+[(k-1)w_I+(k+1)w_I w_R+1-w_R]b\}δ}{1-\frac{N}{2k}\{-(1+w_R)kc+[(k-1)w_I+(k+1)w_I w_R+1-w_R]b\}δ}
= p_C(t_0)+p_C(t_0)[1-p_C(t_0)]\,\frac{N}{2}\Big\{-(1+w_R)c+\frac{[(k-1)w_I+(k+1)w_I w_R+1-w_R]b}{k}\Big\}δ +𝒪(δ^2).
Since N→ +∞, we can approximate N_C/N≈ p_C(t_0) and (N-N_C)/(N-1)≈ 1-p_C(t_0). In this manner, the fixation probability calculated by Eq. (<ref>) is equivalent to the cooperation level given by Eq. (<ref>), that is, ϕ_C=ρ_C, in the limit of N→ +∞.
As discussed in the introductory paragraph of Section <ref>, cooperation dominates with probability p_C(t_0) under neutral drift, and evolution favors cooperation under weak selection if the probability of cooperation dominance exceeds p_C(t_0). This implies that ϕ_C[p_C(t_0)] > p_C(t_0). According to Eq. (<ref>), ϕ_C[p_C(t_0)] > p_C(t_0) necessitates
b/c>\frac{(1+w_R)k}{(k-1)w_I+(k+1)w_I w_R+1-w_R},
which constitutes a generalization of the well-known b/c > k rule <cit.> and is consistent with the result obtained by the IBD method.
[1] G. A. Kaiping, G. S. Jacobs, S. J. Cox, T. J. Sluckin, Nonequivalence of updating rules in evolutionary games under high mutation rates, Phys. Rev. E 90 (2014) 042726.
[2] C. P. Roca, J. A. Cuesta, A. Sánchez, Evolutionary game theory: Temporal and spatial effects beyond replicator dynamics, Phys. Life Rev. 6 (2009) 208–249.
[3] H. Takesue, Effects of updating rules on the coevolving prisoner's dilemma, Physica A 513 (2019) 399–408.
[4] M. A. Nowak, R. M. May, Evolutionary games and spatial chaos, Nature 359 (1992) 826–829.
[5] M. Perc, J. J. Jordan, D. G. Rand, Z. Wang, S. Boccaletti, A. Szolnoki, Statistical physics of human cooperation, Phys. Rep. 687 (2017) 1–51.
[6] G. Szabó, G. Fáth, Evolutionary games on graphs, Phys. Rep. 446 (2007) 97–216.
[7] Z. Wang, L. Wang, A. Szolnoki, M. Perc, Evolutionary games on multilayer networks: a colloquium, Eur. Phys. J. B 88 (2015) 124.
[8] M. A. Nowak, Five rules for the evolution of cooperation, Science 314 (2006) 1560–1563.
[9] A. Szolnoki, Z. Danku, Dynamic-sensitive cooperation in the presence of multiple strategy updating rules, Physica A 511 (2018) 371–377.
[10] H. Ohtsuki, M. A. Nowak, The replicator equation on graphs, J. Theor. Biol. 243 (2006) 86–97.
[11] H. Zhu, H. Ding, Q.-Y. Zhao, Y.-P. Xu, X. Jin, Z. Wang, Reputation-based adjustment of fitness promotes the cooperation under heterogeneous strategy updating rules, Phys. Lett. A 384 (2020) 126882.
[12] H. Takesue, Evolutionary prisoner's dilemma games on the network with punishment and opportunistic partner switching, EPL 121 (2018) 48005.
[13] P. Zhu, X. Hou, Y. Guo, J. Xu, J. Liu, Investigating the effects of updating rules on cooperation by incorporating interactive diversity, Eur. Phys. J. B 94 (2021) 58.
[14] A. Traulsen, J. M. Pacheco, M. A. Nowak, Pairwise comparison and selection temperature in evolutionary game dynamics, J. Theor. Biol. 246 (2007) 522–529.
[15] B. Allen, M. A. Nowak, Games on graphs, EMS Surv. Math. Sci. 1 (2014) 113–151.
[16] C. E. Tarnita, T. Antal, H. Ohtsuki, M. A. Nowak, Evolutionary dynamics in set structured populations, Proc. Natl. Acad. Sci. U.S.A. 106 (2009) 8601–8604.
[17] W. Maciejewski, F. Fu, C. Hauert, Evolutionary game dynamics in populations with heterogenous structures, PLoS Comput. Biol. 10 (2014) e1003567.
[18] G. Wild, A. Traulsen, The different limits of weak selection and the evolutionary dynamics of finite populations, J. Theor. Biol. 247 (2007) 382–390.
[19] F. Fu, L. Wang, M. A. Nowak, C. Hauert, Evolutionary dynamics on graphs: Efficient method for weak selection, Phys. Rev. E 79 (2009) 046707.
[20] F. Débarre, C. Hauert, M. Doebeli, Social evolution in structured populations, Nat. Commun. 5 (2014) 3409.
[21] B. Allen, G. Lippner, Y.-T. Chen, B. Fotouhi, N. Momeni, S.-T. Yau, M. A. Nowak, Evolutionary dynamics on any population structure, Nature 544 (2017) 227–230.
[22] Q. Su, L. Wang, H. E. Stanley, Understanding spatial public goods games on three-layer networks, New J. Phys. 20 (2018) 103030.
[23] Q. Su, A. Li, L. Wang, H. Eugene Stanley, Spatial reciprocity in the evolution of cooperation, Proc. R. Soc. B 286 (2019) 20190041.
[24] B. Fotouhi, N. Momeni, B. Allen, M. A. Nowak, Conjoining uncooperative societies facilitates evolution of cooperation, Nat. Human Behav. 2 (2018) 492–499.
[25] B. Allen, G. Lippner, M. A. Nowak, Evolutionary games on isothermal graphs, Nat. Commun. 10 (2019) 5107.
[26] A. McAvoy, B. Allen, M. A. Nowak, Social goods dilemmas in heterogeneous societies, Nat. Human Behav. 4 (2020) 819–831.
[27] Q. Su, A. McAvoy, Y. Mori, J. B. Plotkin, Evolution of prosocial behaviours in multilayer populations, Nat. Human Behav. 6 (2022) 338–348.
[28] Q. Su, B. Allen, J. B. Plotkin, Evolution of cooperation with asymmetric social interactions, Proc. Natl. Acad. Sci. U.S.A. 119 (2022) e2113468118.
[29] C. Wang, A. Szolnoki, Inertia in spatial public goods games under weak selection, Appl. Math. Comput. 449 (2023) 127941.
[30] C. Wang, A. Szolnoki, Evolution of cooperation under a generalized death-birth process, Phys. Rev. E 107 (2023) 024303.
[31] G. Szabó, C. Tőke, Evolutionary prisoner's dilemma game on a square lattice, Phys. Rev. E 58 (1998) 69–73.
[32] H. Ohtsuki, M. A. Nowak, J. M. Pacheco, Breaking the symmetry between interaction and replacement in evolutionary dynamics on graphs, Phys. Rev. Lett. 98 (2007) 108106.
[33] H. Ohtsuki, C. Hauert, E. Lieberman, M. A. Nowak, A simple rule for the evolution of cooperation on graphs and social networks, Nature 441 (2006) 502–505.
[34] K. Sigmund, The Calculus of Selfishness, Princeton University Press, Princeton, NJ, 2010.
[35] J. T. Cox, D. Griffeath, Occupation time limit theorems for the voter model, Annals Prob. (1983) 876–893.
[36] J. T. Cox, D. Griffeath, Diffusive clustering in the two dimensional voter model, Annals Prob. (1986) 347–370.
[37] M. A. Nowak, C. E. Tarnita, E. O. Wilson, The evolution of eusociality, Nature 466 (2010) 1057–1062.
[38] Y.-T. Chen, Sharp benefit-to-cost rules for the evolution of cooperation on regular graphs, Annals Appl. Prob. 23 (2013) 637–664.
[39] C. E. Tarnita, H. Ohtsuki, T. Antal, F. Fu, M. A. Nowak, Strategy selection in structured populations, J. Theor. Biol. 259 (2009) 570–581.
[40] M. A. Nowak, R. M. May, The spatial dilemmas of evolution, Int. J. Bif. Chaos 3 (1993) 35–78.
[41] M. Starnini, A. Sánchez, J. Poncela, Y. Moreno, Coordination and growth: the Stag Hunt game on evolutionary networks, J. Stat. Mech. 2011 (2011) P05008.
[42] L. Wang, C. Xia, L. Wang, Y. Zhang, An evolving Stag-Hunt game with elimination and reproduction on regular lattices, Chaos, Solitons and Fractals 56 (2013) 69–76.
[43] Y. Dong, H. Xu, S. Fan, Memory-based stag hunt game on regular lattices, Physica A 519 (2019) 247–255.
[44] C. Hauert, M. Doebeli, Spatial structure often inhibits the evolution of cooperation in the snowdrift game, Nature 428 (2004) 643–646.
[45] W.-X. Wang, J. Ren, G. Chen, B.-H. Wang, Memory-based snowdrift game on networks, Phys. Rev. E 74 (2006) 056113.
[46] J.-j. Zhang, H.-y. Ning, Z.-y. Yin, S.-w. Sun, L. Wang, J.-q. Sun, C.-y. Xia, A novel snowdrift game model with edge weighting mechanism on the square lattice, Front. Phys. 7 (2012) 366–372.
[47] Q. Su, A. Li, L. Wang, Spatial structure favors cooperative behavior in the snowdrift game with multiple interactive dynamics, Physica A 468 (2017) 299–306.
[48] F. Shu, X. Liu, K. Fang, H. Chen, Memory-based snowdrift game on a square lattice, Physica A 496 (2018) 15–26.
[49] A. Li, B. Wu, L. Wang, Cooperation with both synergistic and local interactions can be worse than each alone, Sci. Rep. 4 (2014) 1–6.
[50] Q. Su, A. McAvoy, L. Wang, M. A. Nowak, Evolutionary dynamics with game transitions, Proc. Natl. Acad. Sci. U.S.A. 116 (2019) 25398–25404.
[51] P. D. Taylor, L. B. Jonker, Evolutionary stable strategies and game dynamics, Math. Biosci. 40 (1978) 145–156.
[52] S. Karlin, H. E. Taylor, A second course in stochastic processes, Elsevier, 1981.
[53] H. Matsuda, N. Ogita, A. Sasaki, K. Satō, Statistical mechanics of population: the lattice Lotka-Volterra model, Prog. Theor. Phys. 88 (1992) 1035–1049.
[54] W. J. Ewens, Mathematical population genetics: theoretical introduction, volume 27, Springer, 2004.
|
http://arxiv.org/abs/2306.05555v2
|
20230608205431
|
Impact of resource distributions on the competition of species in stream environment
|
[
"Tung D. Nguyen",
"Yixiang Wu",
"Tingting Tang",
"Amy Veprauskas",
"Ying Zhou",
"Behzad Djafari Rouhani",
"Zhisheng Shuai"
] |
q-bio.PE
|
[
"q-bio.PE",
"math.DS",
"92D25, 92D40, 34C12, 34D23, 37C65"
] |
|
http://arxiv.org/abs/2306.01653v1
|
20230602162333
|
In-situ enrichment in heavy elements of hot Jupiters
|
[
"A. Morbidelli",
"K. Batygin",
"E. Lega"
] |
astro-ph.EP
|
[
"astro-ph.EP"
] |
Heavy elements of hot Jupiters
^1 Département Lagrange, University of Nice – Sophia Antipolis, CNRS, Observatoire de la Côte d'Azur, Nice, France; ^2 GPS Division, Caltech, Pasadena, California
Radius and mass measurements of short-period giant planets reveal that many of these planets contain a large amount of heavy elements. Although the range of inferred metallicities is broad, planets with more than 100 M_⊕ of heavy elements are not rare. This is in sharp contrast with the expectations of the conventional core-accretion model for the origin of giant planets.
The proposed explanations for the heavy-element enrichment of giant planets fall short of explaining the most enriched planets. We look for additional processes that can explain the full envelope of inferred enrichments.
We revisit the dynamics of pebbles and dust in the vicinity of giant planets using analytic estimates and published results on the profile of a gap opened by a giant planet, on the radial velocity of the gas with respect to the planet, on the Stokes number of particles in the different parts of the disk and on the consequent dust/gas ratio. Although our results are derived in the framework of a viscous α-disk we also discuss the case of disks driven by angular momentum removal in magnetized winds.
When giant planets are far from the star, dust and pebbles are confined in a pressure bump at the outer edge of the planet-induced gap. Instead, when the planets reach the inner part of the disk (r_p≪ 2 au), dust penetrates into the gap together with the gas. The dust/gas ratio can be enhanced by more than an order of magnitude if radial drift of dust is not impeded farther out by other barriers. Thus, hot planets undergoing runaway gas accretion can swallow a large amount of dust, acquiring ∼ 100 M_⊕ of heavy elements by the time they reach Jupiter-mass.
Whereas the gas accreted by giant planets in the outer disk is very dust-poor, that accreted by hot planets can be extremely dust-rich. Thus, provided that a large fraction of the atmosphere of hot-Jupiters is accreted in situ, a large amount of dust can be accreted as well. We draw a distinction between this process and pebble accretion (i.e., the capture of dust without the accretion of gas), which is ineffective at small stellocentric radii, even for super-Earths. Giant planets farther out in the disk are extremely effective barriers against the flow of pebbles and dust across their gap. Saturn and Jupiter, after locking into a mutual mean motion resonance and reversing their migration could have accreted small pebble debris.
In-situ enrichment in heavy elements of hot Jupiters
A. Morbidelli1, K. Batygin2, E. Lega1
[Received / accepted]
====================================================
§ INTRODUCTION
The discovery and characterization of exoplanets over the course of the last thirty years has brought a seismic shift in our comprehension of planetary formation and evolution. Nonetheless, the first objects to be discovered in large numbers – hot-Jupiters – continue to stand out as an enigmatic class of astrophysical bodies. These giants are close to their host stars (with orbital periods of less than 10 days) and despite their unexpected nature, have attracted extensive scrutiny due to their distinctive features and relative ease of observation. In particular, precise mass and radius determinations are substantially more common within the presently known census of hot-Jupiters than other types of extrasolar planets.
Conventional giant planet structure theory holds that the mass-radius relation for degenerate Jovian planets is approximately flat (i.e., mass-independent: <cit.>), meaning that their size is largely dictated by their composition. Within this framework, Jupiter's radius in first approximation corresponds to a roughly solar mixture of hydrogen and helium, meaning that any smaller radius is indicative of a substantially super-solar overall metallicity. The estimate of total mass of heavy elements, here denoted M_h, can be sharpened further through detailed modeling (that accounts for the corrections due to age, total mass, etc), and <cit.> were the first to carry out this analysis for a pool of 9 well-characterized hot-Jupiters. Intriguingly, they found that some objects have M_h on the order of ∼ 100 M_⊕, where M_⊕ denotes the mass of Earth, and pointed out an apparent correlation between the planet's M_h and the metallicity of the central star. This investigation was later extended and confirmed in <cit.>, <cit.> and <cit.>.
An important complication that arises within such analyses is that hot-Jupiters experience substantial radius inflation, such that their interiors are not in a fully degenerate state. Though a number of physical mechanisms – including tidal damping <cit.>, breaking gravity waves <cit.>, impeded cooling due to enhanced atmospheric opacity <cit.>, double-diffusive convection <cit.>, turbulent burial of atmospheric entropy <cit.> and Ohmic dissipation <cit.> – have been proposed to explain this anomalous heating within these planets' envelopes, statistical analyses <cit.> have shown that the strong dependence of the degree of inflation on stellar irradiation predicted by the Ohmic dissipation mechanism is indeed reflected in the data (see also <cit.>). Consequently, application of conventional giant planet evolution models to strongly-irradiated planets can yield negative heavy element masses. In turn, this implies that the values of M_h reported in the aforementioned studies are lower bounds.
To circumvent this problem, <cit.> considered a subset of 47 giant planets that are not strongly irradiated by their central star. Within this subset of objects, anomalous heating of the interior could be reasonably assumed to be negligible, meaning that the computed values of M_h likely represent the actual masses in heavy elements and not their lower bounds (indeed, no negative values of M_h appear in Thorngren et al.'s calculations). We note that, strictly speaking, many of these planets fall into the “warm Jupiter” category because they have periods that exceed the nominal 10 day boundary, but for simplicity we still refer to these planets as hot-Jupiters.
The main result of Thorngren et al. is reproduced in Fig. <ref> and shows that many hot-Jupiters are more enriched in heavy elements than Jupiter or Saturn. Some Jupiter-mass planets exceed 100 Earth masses in heavy elements. On average, the mass in heavy elements M_h is correlated to the total planet mass M_p as:
M_h = (57.9 ± 7.0)\, M_⊕\left(\frac{M_p}{M_{jup}}\right)^β ,
where M_jup is the mass of Jupiter. The exponent β of the M_h(M_p) correlation is 0.61 ± 0.08. Thorngren et al. also confirmed the correlation between M_h/M_p (a.k.a. the planet metallicity) and the stellar metallicity[The metallicity is usually denoted by the letter Z (and the mass in heavy elements by M_z), but we refuse using this notation to protest the Russian invasion of Ukraine, of which Z has become the symbol.]. When this correlation is accounted for, the scatter of the data around the correlation law (<ref>) is significantly reduced.
These results are surprising. According to the core-accretion theory of giant planet formation, giant planets are nucleated by the gradual accumulation of solid material into a ∼ 15 M_⊕ core, which then accretes a massive envelope of gas and small dust with approximately stellar metallicity. This process is expected to result in a range of solid-to-gas ratios for giant planets that is appreciably super-stellar but nonetheless much smaller than that given by (<ref>). For instance, a Jupiter-mass planet would be expected to have ∼ 18 M_⊕ of heavy elements. This mismatch between expectations and observations suggests that the process of hot-Jupiter formation may be more complex than originally thought.
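To make the mismatch concrete, a back-of-the-envelope comparison between the empirical correlation quoted above and the naive core-accretion expectation can be written as follows (an illustrative sketch; the ∼15 M_⊕ core and the ∼1% envelope metallicity are the assumptions stated in the text, and the function names are ours):

```python
M_JUP_IN_EARTH = 317.8                      # Jupiter mass in Earth masses

def M_h_mean(Mp_in_Mjup, beta=0.61):
    # mean heavy-element mass from the empirical correlation quoted above
    return 57.9 * Mp_in_Mjup ** beta

def M_h_core_accretion(Mp_in_Mjup, core=15.0, envelope_metallicity=0.01):
    # naive expectation: a ~15 Earth-mass core plus an envelope of roughly stellar metallicity
    return core + envelope_metallicity * (Mp_in_Mjup * M_JUP_IN_EARTH - core)

print(round(M_h_mean(1.0)))            # ~58 Earth masses: observed mean for a Jupiter-mass planet
print(round(M_h_core_accretion(1.0)))  # ~18 Earth masses: conventional core-accretion estimate
```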
Thorngren et al. proposed an explanation for the surprising heavy element enrichment observed in hot-Jupiters. They conjectured that planets accrete all planetesimals located within their feeding zone, an annulus with a radial width proportional to the planet's Hill radius R_H=a(M_p/3M_star)^{1/3}, where a is the semi-major axis of the accreting planet. The exponent 1/3 is smaller than that of (<ref>) but still not grossly inconsistent with the data. <cit.> proposed a similar explanation for the enrichment in heavy elements in Jupiter and, through a more sophisticated planetesimal accretion model, predicted an exponent of 2/5 when the formation of a planetesimal gap is considered, i.e. a bit closer to the measured value of β than the estimate of Thorngren et al. The combination of accretion of gas with a 1% metallicity and the accretion of planetesimals with the M_p^{2/5} relationship gives the magenta curve in Fig. <ref>, which explains some of the planets, but clearly not the majority of them. <cit.> showed that planet migration can enhance the efficiency of planetesimal accretion, due to a combination of resonant shepherding and gas-drag. This is particularly efficient as the planet migrates through specific locations of the disk (dependent on parameters). This may potentially make hot-Jupiters more metal rich than Jupiter itself.
A radically different mechanism for the heavy element enrichment of giant planets has been proposed by <cit.>, elaborating on an original idea of <cit.>. In their model, inward-drifting dust particles (a.k.a. pebbles) evaporate their volatile elements, each at a specific distance (the sublimation line of the corresponding volatile species). Therefore, the gas in the inner part of the disk gets enriched in the vapor of volatile elements, by a substantial amount for some disk parameters. Because a planet accretes H, He and heavier element vapors indiscriminately, it can be enriched in heavy volatile elements through this process.
It is worth noting that this model is not restricted to hot-Jupiters, and applies to any planet accreting a substantial fraction of its gaseous envelope in the inner part of the protoplanetary disk, where volatile elements are in vapor state. Nevertheless, this model falls a bit short of reproducing correlation (<ref>). The masses M_h are typically below the value predicted by (<ref>) up to planets with M_p=2 M_Jup and can reach the observed mean values only for more massive planets formed in the most metallic disks. Planets with M_h>100 M_⊕ are not expected in the Schneider and Bitsch model unless their total mass is larger than ∼ 3 M_Jup.
Here we propose that the heavy element enrichment of hot-Jupiters can be explained by the accretion of very dust-polluted gas in the inner part of the disk (essentially in-situ; <cit.>). This process is not in contradiction to the picture proposed by Schneider and Bitsch but introduces a previously overlooked effect. It can explain the enrichment in refractory elements, whereas Schneider and Bitsch predict an enrichment only in volatile elements.
It is generally expected that the gas accreted by a giant planet is metal-poor. This is because dust coagulation converts most of the solid mass into pebble-size objects, which are moderately coupled to the gas; pebbles cannot be accreted by planets exceeding ∼ 20–40 M_⊕ (the exact value depending on the disk's viscosity and scale height; <cit.>) because they remain trapped in the pressure bump produced at the outer edge of the planet-induced gap.
In this manuscript we will show that, while this is true for giant planets in the central/outer parts of the disk, at small orbital radii pebbles readily permeate the gap of close-in giant planets due to two important factors: (1) the gas flows through the planet's gap in the outside-in direction if the planet is sufficiently close to the star, whereas it flows through the gap in the inside-out direction if the planet is farther out <cit.> and (2) pebbles have very small Stokes numbers in the inner part of the disk <cit.> and therefore can be entrained into the gap by the radial flow of the gas despite the existence of a pressure bump at the gap's outer edge. Pebbles are expected to fragment while flowing into the gap, thus maintaining a small Stokes number even inside the gap. The strong coupling with the gas then makes the accretion of solids possible only in conjunction with the accretion of gas. Nevertheless, this can increase considerably the planet's heavy element budget because of the very high dust/gas density ratio that can be achieved in the inner disk due to the rapid drop in the dust/gas radial velocity ratio.
For simplicity, we elaborate on this process in section <ref> by adopting a viscous α-disk model, but we then discuss in section <ref> how the process is affected if the radial transport of gas is mostly due to angular momentum removal in disk winds and/or occurs only near the surface of the disk. In doing so, we present a comprehensive revision of the problem of the interaction of a gap-opening planet with the flow of gas and dust (or pebbles), hopefully correcting some misconceptions that are often found in the literature.
This manuscript ends with a discussion of super-Earths and of the case of Jupiter and Saturn in section <ref>, before a summary of the conclusions in section <ref>.
§ PEBBLE DYNAMICS IN THE VICINITY OF A GIANT PLANET IN AN α-DISK MODEL
The dynamics of a pebble in a protoplanetary disk is dictated by its coupling with the gas. Due to gas drag, the pebble's velocity v relative to the gas velocity u is damped following the equation:
dv/dt = - (1/t_f) (v-u)
where t_f is called the friction timescale. The Stokes number S_t of a pebble is the value of its friction timescale in units of the local orbital timescale:
S_t= t_f Ω
where Ω is the local orbital frequency.
The radial velocity of a pebble is given by
v_r = u_r/(1+ S_t^2) - 2 S_t (v_θ - u_θ) ,
<cit.>, where the r and θ subscripts denote the radial and azimuthal components of the velocities. This formula does not account for the back-reaction of the dust on the gas <cit.>.
For a pebble on a circular orbit, the difference in azimuthal velocities v_θ - u_θ is a fraction η (depending on r) of the Keplerian velocity v_K, where
η(r) = - (1/2) (H/r)^2 ∂log P/∂log r ,
H is the scale height of the disk and P= (HΩ)^2 Σ /(√(2π) H) is the internal pressure of the gas. Assuming H∝ r, i.e. neglecting disk flaring, (<ref>) can be approximated by
η(r) = - (1/2) (H/r)^2 [ (r/Σ) ∂Σ/∂r - 2 ] ,
which is the equation we will be using in the rest of this paper.
Eq. (<ref>) reveals that, where Σ monotonically decays with r, η>0. In this case, the azimuthal drag of the gas on the pebble leads to the star-ward radial drift of the pebble. However, where Σ has a sufficiently positive radial gradient, η<0 and the drag is reversed. The location where η=0 is called a pressure bump.
Giant planets open deep gaps in the gas distribution of the disk. Along the outer edge of the gap ∂Σ/∂ r is positive and large, so that η<0, whereas far from the planet's orbit η>0. So, a pressure bump is established whenever a giant planet forms. If the radial velocity of the gas u_r > 0 then pebbles cannot drift into the gap, whatever their Stokes number. If instead u_r < 0 only particles with S_t > min[u_r(r)/2η(r)v_K] don't penetrate the gap, where the minimum is computed for r ranging from one to multiple Hill radii beyond the planet's orbit.
In most hydrodynamical studies on gap opening by giant planets published in the literature, the giant planet is kept on a fixed orbit. The gas is then observed to flow through the gap, from the outer part of the disk to the inner part <cit.>. Thus, Weber et al. computed that only particles with S_t>10^-3 do not penetrate a gap opened by a Jupiter-mass planet, for the disk parameters used in their nominal simulations. Consequently, it is often considered in the literature that the so-called planet barrier against the radial drift of pebbles is effective only for pebbles with a Stokes number larger than this order of magnitude (e.g. <cit.>).
However, the situation is radically different if the planet is allowed to migrate, instead of being kept artificially on a fixed orbit. It has been shown <cit.> that, although the migration speed of giant planets is proportional to disk viscosity, giant planets migrate faster than the unperturbed radial velocity of the gas (which, in a viscous disk is u_r =-3/2 (ν/r), where ν is the disk's viscosity, expressed as ν=α H^2Ω in the so-called α-disks) when Σ(r_p)r_p^2/M_p > 0.2, where Σ(r_p) is the gas' unperturbed surface density at the radial distance of the planet r_p. In this case, the gas flows through the gap in the inside-out direction (even if the radial motion of the gas in an absolute reference frame can remain negative). Instead, the planet migrates significantly slower than the radial motion of the gas, allowing gas to pass through the gap in the outside-in direction, if
Σ(r_p) r_p^2/M_p ≪ 0.2
<cit.>. Because of the r_p^2 dependence of this formula and the typical weak radial decay of Σ (usually proportional to 1/√(r)), this happens only when the planet is in the very inner part of the disk. For a typical disk with surface density 175 g/cm^2 /√(r/5.2 au) (delivering a stellar accretion rate of ∼ 4× 10^-8 M_⊙/y for α=3× 10^-3) and a Jupiter-mass planet, condition (<ref>) translates to r_p ≪ 2.5 au <cit.>, which is well satisfied by the planets studied in <cit.>.
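As a quick numerical check (our own sketch, using only the disk parameters quoted above), one can solve Σ(r_p) r_p^2/M_p = 0.2 for a Jupiter-mass planet and the Σ ∝ 1/√r profile above; the transition indeed falls at r_p of roughly 2.5 au.

M_jup = 1.898e30      # g
au = 1.496e13         # cm
sigma0 = 175.0        # g/cm^2 at 5.2 au, with Sigma ~ (r/5.2 au)^(-1/2)

# Solve Sigma(r_p) r_p^2 / M_Jup = 0.2 analytically for this power-law profile.
r_crit_au = (0.2 * M_jup / (sigma0 * 5.2 ** 0.5 * au ** 2)) ** (2.0 / 3.0)
print(f"outside-in gas flow through the gap requires r_p << {r_crit_au:.1f} au")
# prints ~2.6 au, consistent with the ~2.5 au quoted in the text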
Obviously, in a reference frame co-moving radially with the planet, u_r >0 in the first case (inside-out flow through the gap) and u_r <0 in the second case. This means that the planet barrier is effective for particles of all sizes and Stokes numbers when a giant planet migrates in the central or outer parts of the disk[only turbulent diffusion in principle can allow some particles to pass through the planet barrier but it has to operate against the gas flow so it is expected to be highly inefficient.], whereas the barrier starts to be leaky when the planet reaches the vicinity of the star.
In the following we present some quantitative estimates on the Stokes number of pebbles that can penetrate a gap opened by a giant planet, once the latter is close enough to the star for condition (<ref>) to be true.
§.§ Searching for a dust trap: are particles characterized by a size or a Stokes number?
Before computing the critical Stokes number below which pebbles can penetrate the gap, we need to determine whether particles should be characterized by a given size or a given Stokes number, whatever their position in the disk. In fact, if particles are characterized by a given S_t, we can apply (<ref>) computing u_r and η as functions of r and setting S_t^crit = min[u_r(r)/2η(r) v_K]. If instead particles are characterized by a given size, the Stokes number depends on the gas surface density Σ as S_t= √(2π) ρ_p R/Σ, where ρ_p is the bulk density of the pebble of radius R; thus, we need to set S_t(r)=S_t^0 Σ(r_0)/Σ(r), where S_t^0 is the Stokes number of the pebble at a reference location r_0 and Σ(r) is the actual density of the disk due to the presence of the gap, and solve (<ref>) with respect to S_t^0.
Particles continuously collide with each other, and break or coagulate depending on their collision speed. If the size of particles is limited by the fragmentation barrier, they reach, at collisional equilibrium, a Stokes number given by <cit.>:
S_t^coll.eq= v_frag^2/(3 α c_s^2)
where v_frag is the velocity threshold for fragmentation and c_s is the sound speed. Eq. (<ref>) is independent on the gas surface density Σ, if not indirectly through c_s∝Σ^1/8, a weak dependence that we will neglect in the following for simplicity[c_s∝√(T), where T is the gas temperature. The value of the temperature is dictated by the balance between the energy released by accretion of gas towards the star, which is constant through the gap by conservation of mass flow, and cooling, which is proportional to T^4/(Σ f_dustκ_dust) where f_dust is the dust/gas surface density ratio and κ_dust is the opacity of dust. Thus, T∝Σ^1/4 and c_s∝Σ^1/8.]. Eq. (<ref>) applies if the timescale over which a particle experiences a change in Σ is longer than the collision timescale with other particles.
In the search for a dust trap at some location along the Σ-gradient characterizing the gap, it is correct to assume that the particle's Stokes number is that given by (<ref>). In fact, when a particle is trapped, it has the time to experience collisions and grind down until its Stokes number decreases to S_t^coll.eq. Thus, the trapping can be permanent only if the condition v_r=0 in (<ref>) holds for S_t=S_t^coll.eq. Searching for a dust-trap location assuming a fixed dust size is not a valid approximation.
§.§ Evaluating the velocity of the gas
Having determined that S_t is roughly constant and that the critical Stokes number below which particles can penetrate into the gap is S_t^crit = min[u_r(r)/2η(r) v_K], we now proceed to evaluate η(r) and u_r(r).
We start from the gap profile formula provided in <cit.>, which gives an expression for η as a function of Δ=(r-r_p)/R_H, α and q=M_p/M_star:
η = - (1/2) [ 0.4 q^2 r_p^4 (1/(R_H Δ))^4 / ( (3/2) α + (R_H/r_p)/(8Δ) + 200 (R_H/r_p)/Δ^10 ) - 2 (H/r)^2 ]
where, with respect to formula (14) in Crida et al. we have retained only the term corresponding to (<ref>), assumed r∼ r_p and retained only the dependence on Δ.
Formula (<ref>) gives a minimum of η of -0.024 at Δ=2.2 for q=1× 10^-3, α=3× 10^-3 and H/r=0.05, in good agreement with the nominal hydrodynamical simulation of <cit.> (-0.03; see their Fig. 2). Instead, for q=7.6× 10^-5, α= 10^-3 and H/r=0.05, formula (<ref>) gives η=-0.009, whereas <cit.> found η=0. It is well known that the model of Crida et al. is not very accurate for planets of moderate mass. Thus, we introduce an empirical correction to (<ref>) by dividing the first term in the square brackets by 3.5 (7.6× 10^-5/q + 0.2).
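The following short Python sketch (an illustrative check based on our reading of the expression above, not a substitute for the hydrodynamical simulations) evaluates η(Δ) for the nominal parameters and locates its minimum, which indeed comes out near -0.024/-0.025 at Δ ≈ 2.2.

import numpy as np

def eta(delta, q=1e-3, alpha=3e-3, h_over_r=0.05):
    # Gap-edge eta(Delta) from the Crida et al.-based expression above, with r_p = 1.
    rh = (q / 3.0) ** (1.0 / 3.0)                     # Hill radius in units of r_p
    num = 0.4 * q ** 2 * (1.0 / (rh * delta)) ** 4
    den = 1.5 * alpha + rh / (8.0 * delta) + 200.0 * rh / delta ** 10
    return -0.5 * (num / den - 2.0 * h_over_r ** 2)

deltas = np.linspace(1.5, 5.0, 2000)
vals = eta(deltas)
i = int(np.argmin(vals))
print(f"min eta ~ {vals[i]:.4f} at Delta ~ {deltas[i]:.2f}")  # ~ -0.024 at ~2.2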
Concerning the radial velocity of the gas u_r we remark that, due to the conservation of radial mass flow, one has u_r=u_r^0 Σ^0(r)/Σ(r), where u_r^0=-3/2 α (H/r)^2 v_K is the gas radial velocity in the unperturbed α-disk. Notice that this implicitly assumes that the radial migration of the planet is much slower than -3/2 α (H/r)^2 v_K, i.e. that condition (<ref>) holds true. Otherwise we should use the relative velocity u_r^0=-3/2 α (H/r)^2 v_K -v_r^P, where v_r^P is the radial migration velocity of the planet. Again, if u_r^0>0 there is no possibility for particles with any S_t to penetrate into the gap.
To evaluate Σ(r), and then u_r, we turn again to the analytic gap model of <cit.>. One has from (<ref>):
Σ(r) = Σ^0(r) + ∫_r^∞ (2 Σ(r')/r') [ 1 - η(r')/(H/r')^2 ] dr' .
Eq. (<ref>) is implicit, but it can be solved iteratively, starting from r=∞ and setting Σ(∞)=Σ^0(∞). Of course a physical approximation of infinity is 10R_H or so, i.e. well beyond the planet's gap.
Fig. <ref> shows S_t(r), solution of v_r=0 in (<ref>) in the nominal case considered by <cit.>. The critical Stokes number S_t^crit is the minimum of S_t(r) and is ∼ 10^-3, in good agreement with Weber et al. numerical results.
Fig. <ref> shows a map of the critical Stokes number as a function of q and α for H/r=5% (top) and 3% (bottom). We remark that the two panels are quite similar, revealing a weak dependence on H/r. It is also worth noticing that, for a given value of α, the critical Stokes number does not monotonically decrease with increasing planet mass, as one could naively expect. This is due to the fact that, although the pressure bump becomes stronger with increasing q, -u_r(r) increases at any value of r because the gap becomes wider and deeper. Consequently, the value S_t^crit is achieved farther away from the planet and can turn out to be larger than that computed for a smaller planet. According to Fig. <ref>, for H/r=0.05 the planets that oppose the most severe barriers to pebble drift are those of approximately Saturn's mass (log q ∼ -3.5).
It is clear from Fig. <ref> that, even for the least viscous (α∼ 10^-4) and shallow (H/r=0.03) disks, giant planets' gaps are leaky for pebbles with S_t≲ 10^-4. This value is small, but it is the characteristic Stokes number of pebbles that reach the fragmentation limit (<ref>) in the inner part of the protoplanetary disk, where condition (<ref>) is fulfilled (Fig. <ref>). In particular, when planets approach the very inner part of the disk where the magneto-rotational instability (MRI) is active, the α parameter is expected to increase from 10^-4 towards several times 10^-3 <cit.>. If α∼ 10^-3, Fig. <ref> shows that pebbles have S_t<10^-4, which is significantly smaller than S_t^crit (Fig. <ref>).
Thus, we conclude that when giant planets migrate near the central star, eventually pebble isolation disappears and planets can potentially feed from the full radial flow of solid (refractory) material. The efficiency of the accretion process is discussed next.
§.§ Once in the gap: accretion on the planet
Particles with S_t<S_t^crit penetrate into the gap. For these particles the evolution of the Stokes number is not obvious. In fact, it is important to remember that Eq. (<ref>) only applies if the timescale over which a particle experiences a change in Σ is longer than the collision timescale with other particles. Because the particles' radial speed increases as they go into the gap as Σ^0(r)/Σ(r), where Σ^0(r) is the unperturbed surface density and Σ(r) the actual density, it is possible that the migration timescale becomes shorter, so that particles preserve their sizes and increase in Stokes number. By comparing the timescale of unperturbed radial motion of particles coupled to the gas, i.e. T_drift=r/u_r=2/[3 α (H/r)^2 Ω], with the particle collision timescale T_coll=1/[f_dustΩ] and adopting nominal parameters (α=10^-3, (H/r)=0.05 and f_dust=0.01), we find that the drift timescale becomes shorter than the collision timescale only in gaps that are at least three orders of magnitude deep, namely for planets significantly more massive than Jupiter.
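A one-line numerical check of this comparison (our own sketch with the nominal parameters quoted above) illustrates why a gap roughly three orders of magnitude deep is needed before radial drift outpaces collisions.

alpha, h_over_r, f_dust = 1e-3, 0.05, 0.01   # nominal parameters from the text

# Both timescales in units of the local orbital time 1/Omega.
t_drift = 2.0 / (3.0 * alpha * h_over_r ** 2)   # unperturbed r/|u_r|
t_coll = 1.0 / f_dust                           # particle collision time

print(f"T_drift / T_coll ~ {t_drift / t_coll:.0f}")
# ~2700: since the drift time shortens as Sigma/Sigma^0 inside the gap, drift
# outpaces collisions only for gaps about three orders of magnitude deep.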
If the gas is very dust-rich, as argued below (Sect. <ref>), the particle collision timescale becomes even shorter. However, for large values of f_dust the stirring of dust by turbulent diffusion in the gas is reduced by their collective inertia (see <cit.> for laboratory experiments and Sect. 4 of <cit.> for an analytic derivation). This lowers the impact velocity and raises the value of the Stokes number at the fragmentation threshold.
Nevertheless we consider it unlikely that the Stokes number of particles can increase by orders of magnitude. Thus, particles should remain tightly coupled to the gas (e.g. S_t ∼ 10^-4 – 10^-3). The usual formulæ for pebble accretion cannot be applied in our case because they are derived for small planets that do not accrete (nor perturb) the surrounding gas. Instead, the dynamics of gas in the vicinity of a giant planet is very perturbed. Hydrodynamical simulations (e.g. <cit.>) show that part of the gas entering the planet's Hill sphere is accreted into a bound atmosphere, while some merely passes through the Hill sphere with a residence timescale typically shorter than the keplerian orbital period around the planet itself. Given the small Stokes number, particles coming with the accreted flow of gas will also be accreted in the envelope, whereas those carried by the unbound flow will not have enough time to decouple from the background flow and will eventually be transported away. Although solid particles can be accreted in this regime, they do so along with the gas. For this reason, it would be misleading to refer to this process as pebble-accretion, since this term specifically refers to the selective accretion of dust over gas.
The accretion of dust together with gas may suggest that the overall planet metallicity cannot increase in this process. However, we show below that the gas in the inner part of a planetary disk can be very dust-rich, with f_dust that can approach unity.
§.§ Metallicity of the inner part of the disk
In steady state, if dust drifts radially in the disk at the speed v_r and gas at the speed u_r, the dust/gas ratio f_dust is given by:
f_dust(r) = f_dust(r_0) (u_r/v_r)(r) (v_r/u_r)(r_0)
where r_0 is a reference distance in the disk. Correspondingly, from (<ref>) we have:
v_r/u_r = 1 + 2 S_t η v_K/|u_r| = 1 + (4/3) S_t η α^-1 (H/r)^-2
where for u_r we have assumed the usual formula for an unflared α-disk.
To fix ideas, we assume H/r=0.05 and η=0.003 <cit.>.
Following <cit.>, we also assume
α(r) = [(10^-2 - 10^-4)/2] [1 - tanh(10 - 1/r)] + 10^-4 ,
where we have adopted a 1/r dependence of the temperature and a critical temperature of T_MRI∼ 1,000 K to activate the MRI at 0.1 au. Finally, for S_t we use the value derived from a self-consistent description of an α-disk in <cit.>:
S_t(r) = 10^-4 r^9/10 (10^-3/α)^4/5 .
Using (<ref>) and (<ref>), the resulting dust/gas radial velocity ratio (<ref>) is illustrated in Fig. <ref>. Recall from (<ref>) that the dust/gas ratio, a.k.a. gas metallicity, is inversely proportional to (<ref>). Thus, in the inner disk, the metallicity can be ∼ 2.5 times higher than at 1 au and ∼ 6 times higher than at 3 au.
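For orientation, the short sketch below (our own illustration, using our reconstruction of the α(r) prescription above, with r in au) evaluates α(r) and the fragmentation-limited S_t(r) across the inner disk; it shows how steeply the Stokes number drops once the MRI-active region is reached, consistent with the values discussed earlier.

import numpy as np

def alpha_of_r(r_au):
    # Reconstructed alpha(r): MRI-active (alpha ~ 1e-2) inside ~0.1 au, ~1e-4 outside.
    return (1e-2 - 1e-4) / 2.0 * (1.0 - np.tanh(10.0 - 1.0 / r_au)) + 1e-4

def stokes(r_au):
    # Fragmentation-limited Stokes number prescription quoted in the text.
    return 1e-4 * r_au ** 0.9 * (1e-3 / alpha_of_r(r_au)) ** 0.8

for r in (1.0, 0.3, 0.1, 0.05):
    print(f"r = {r:4.2f} au: alpha ~ {alpha_of_r(r):.1e}, S_t ~ {stokes(r):.1e}")
# S_t falls from ~6e-4 at 1 au to ~1e-6-1e-5 inside ~0.1 au, well below S_t^crit.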
To use this result, we need to anchor the dust/gas ratio somewhere in the disk. In a steady-state scenario, where the dust radial velocity is dominated by the -2 S_t η v_K term in (<ref>) the dust to gas ratio is (<cit.>, formula 46):
f_dust = 2× 10^-3 (S_t/0.1)^-1 (t/1 My)^-1/3 ,
where t is the age of the disk. We recognize the inverse dependence on S_t (itself proportional to r^9/10 - see eq. <ref>) beyond ∼ 0.3 au in Fig. <ref>. Formula (<ref>) is valid only until the pebble formation front r_pf∼ 50 au (t/1 My)^7/3 <cit.> reaches the outer edge of the disk. After this event, the dust flux drops, and so does the dust/gas ratio. Even before that time, there may be obstacles to dust drift, due to gaps opened by distant giant planets <cit.> or the formation of dust-trapping rings due to non-ideal MHD effects <cit.>. Until any of these limitations applies, formula (<ref>) predicts f_dust=0.2 at 1 au, where S_t ∼ 10^-3, for t∼ 1 My, i.e. an enhancement by a factor of 50 with respect to solar metallicity. However, the enhancement has to be limited by planetesimal formation. For S_t≳ 10^-3 the streaming instability converts dust into planetesimals when the dust/gas volume density ratio on the midplane is of order unity <cit.>. Depending on the disk viscosity and its ability to stir the vertical distribution of dust (usually encapsulated in the so-called Schmidt number S_c), the vertical scale height of the dust layer can be ∼ 1/10 to ∼ 1/3 that of the gas disk (i.e. for α=10^-4 and S_c=10 and 1, respectively); consequently, when f_dust exceeds ∼ 0.1–0.3, planetesimal formation is expected to convert the dust excess into macroscopic bodies. For this reason, we assume that f_dust at 1 au cannot exceed these values. Therefore we predict that, in the best-case scenario, f_dust in the regions of warm and hot-Jupiters, where planetesimal formation is inhibited by the Stokes number being ≪ 10^-3, can be 0.3 to 1, respectively, i.e. enriched in metallicity by a factor of 30 to 100. The dust/gas ratio nevertheless remains smaller than one, which justifies the use of Eq. (<ref>) for the dust velocity, even if it neglects the back-reaction of dust on gas.
The green curve in Fig. <ref> shows the mass in heavy elements expected for giant planets accreting most of their atmospheres in the inner disk, from gas enhanced in metallicity by a factor of 30. Notice that we don't expect all planets to lie on this curve; the curve illustrates how metal-rich hot-Jupiters can become under the most favorable conditions. Correspondingly, a large scatter of values is expected, depending on the actual metallicity of the gas and the fraction of the planet's atmosphere accreted in the inner disk.
§ WIND-DRIVEN DISKS AND LAYERED ACCRETION
One-dimensional α-disk models – of the type we have employed thus far – are routinely adopted for their simplicity and the ability to obtain analytical estimates. Their realism, however, is diminished in part by the fact that they do not account for other modes of angular momentum transfer within the system. In this section, we address this drawback by describing what we expect to happen if the flow of gas towards the central star ensues due to angular momentum removal in disk winds, and if the transport of gas occurs predominantly on the surface layer of the disk. This is typical of disks where Ohmic diffusion dominates <cit.>, i.e. in the inner disk region (0.1 < r < 1 au, <cit.>), but can also occur in viscous disks, due to the meridional circulation unveiled in <cit.> or if there is a deadzone near the midplane <cit.>.
Let's start with wind-driven disks. A main difference with the α-disk considered in the previous section is the radial dependence of the gas radial velocity u_r. As we have seen above, in an α-disk u_r scales with v_K. Thus, the balance between radial drag and pressure bump (see eq. <ref>) is independent of the distance from the star and only depends on the particles' Stokes number. Particles penetrate into the gap in the inner part of the disk because their Stokes number is smaller there. In a wind-driven disk, instead, u_r scales as 1/r, i.e. increases faster than v_K as the gas approaches the star <cit.>. This is because the wind removes angular momentum only from a sufficiently ionized layer and ionization depends on the amount of gas encountered by radiation as it penetrates from the surface of the disk towards the mid-plane. Thus, the vertically-integrated column density of gas of the “active” layer of the disk, where the gas flows towards the central star, is independent of r. Then, conservation of mass-flow implies u_r∝ 1/r. In this case, it is even easier for particles to penetrate into a planet-carved gap in the inner disk, because the radial entrainment due to the gas radial velocity is stronger. For the same reason, the condition for outside-in flow of gas across the planet gap (a necessary condition for particles of any size to reach the planet) is no longer given by (<ref>) and in general can be fulfilled farther out in the disk <cit.>. To this end, <cit.> reported an acceleration of the gas radial velocity near the gap due to a concentration of magnetic field lines there. In other words, the scenario envisioned above appears even more favorable for efficient accretion of dust by hot-Jupiters if the disk's evolution is driven by magnetized winds.
The fact that the flow of gas towards the star occurs near the surface in wind-driven disks (and also in some 2D α-disks models, with or without dead zone) is not an obstacle to dust accretion. The small Stokes number of particles previously considered (S_t∼ 10^-4) ensures that the dust is well coupled to the gas and uniformly distributed in the vertical direction of the disk, unless the turbulent vertical stirring is pathologically low (α/S_c ≪ 10^-4), which is unlikely, particularly in the inner disk. Under these conditions, the flow of dust from a given disk layer into the gap prompts the vertical redistribution of the dust. This prevents the dust from accumulating indefinitely on the midplane, even if the pressure bump operates there.
This argument can be formalized. To fix ideas, let's imagine a disk of gas where u_r=0 everywhere but in a near–surface layer, where u_r is large enough to transport the dust in that layer into the gap. The vertically integrated radial mass-flow of dust towards the pressure bump is
Ṁ_r= 4π r η v_K S_t Σ_d ,
where η∼ 3× 10^-3 is the value of η beyond the pressure bump and Σ_d=f_dustΣ_g is the surface density of the dust. The vertical flow of dust to restore a vertically uniform distribution of the dust/gas ratio is
Ṁ_z = 2 π r Δr D ρ_g ∂/∂z (ρ_d/ρ_g) ,
where ρ_d(z) and ρ_g(z) are the volume densities of dust and gas at height z and Δ r is the width of the ring where the dust tends to be concentrated due to the pressure bump on the mid-plane. The latter is Δ r= rΔ w_0 √(α/S_t) <cit.>, where Δ w_0 is the radial width of the pressure bump in normalized units. In (<ref>) D=α H^2 Ω is the diffusion coefficient. Because on the surface layer of the disk the dust flows into the gap, ρ_d is reset to f_dustρ_g there and therefore we can approximate ∂/∂ z(ρ_d/ ρ_g)∼ 1/H [(ρ_d/ρ_g)_z=0-f_dust].
In a steady state (<ref>) and (<ref>) have to be equal. Approximating ρ_g∼Σ_g/(2H), this gives:
(ρ_d/ρ_g)_z=0 = [ 1 + 4 (S_t/α)^3/2 (η/Δw_0) ] f_dust .
The term S_t/α is of order unity, as required for a uniform vertical dust distribution. The ratio η/Δ w_0 is typically much smaller than 1/4, Δ w_0 being ∼ 0.1 for a pressure bump induced by a Jupiter-mass planet <cit.>. Thus, the dust/gas ratio on the midplane at the pressure bump is only moderately increased, by a factor (1+4 η/Δ w_0)<2 with respect to the local, vertically integrated disk metallicity f_dust. Once this moderate enrichment is achieved, a steady state dust flux is set and all the net dust flow is carried into the gap, preventing any further accumulation of dust at the pressure bump in the midplane. In particular, it is unlikely that planetesimals would start to form in the mid-plane near a giant planet gap if they could not form in absence of the planet.
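A two-line numerical check of this enhancement factor (our own sketch, using the representative numbers quoted above) makes the "<2" bound explicit.

s_t_over_alpha = 1.0     # S_t/alpha of order unity, as argued in the text
eta, dw0 = 3e-3, 0.1     # eta beyond the bump and normalized bump width

factor = 1.0 + 4.0 * s_t_over_alpha ** 1.5 * eta / dw0
print(f"midplane (rho_d/rho_g) / f_dust ~ {factor:.2f}")   # ~1.12, indeed < 2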
§ THE CASE OF SUPER-EARTHS AND OF JUPITER AND SATURN IN THE SOLAR SYSTEM
If giant planets cannot block the flux of dust in the inner part of the disk, the case is considerably more hopeless for super-Earths, as they open much shallower gaps. This does not imply that close-in super-Earths grew efficiently in situ by pebble accretion. The small Stokes number of the dust and its uniform vertical distribution in the disk make pebble accretion an inefficient 3D process <cit.>. Indeed, as we have seen above, even for giant planets the accretion of dust has to occur together with the accretion of gas. Given that super-Earths, by definition, accreted only moderate quantities of gas, we don't expect that the accretion of dust delivered a substantial fraction of a super-Earth's solid mass. Planets with M_p∼ M_h∼ 70 M_⊕, as visible in Fig. <ref>, are not reproduced in our model starting from a 15 M_⊕ core, as the green curve in the figure shows. Indeed, within the framework of our picture, these objects require the accretion of a large amount of planetesimals or the mutual merging of multiple super-Earths of smaller masses.
Jupiter and Saturn are also enriched in heavy elements relative to solar metallicity. As individual planets in the outer disk, condition (<ref>) would not be satisfied because the gas would flow through their gaps in the inside-out direction. However, Jupiter and Saturn have the tendency to lock in a mean motion resonance within the nebula, which halts or reverses their migration direction <cit.>. Once this happens, the gas flows in the outside-in direction through their common gap, carrying small-enough dust with it. The typical Stokes number of particles at ∼ 5 au should be much larger than the threshold of 10^-4– 10^-3 for transport into the gap (Fig. <ref> and <cit.>), but particle fragmentation at the pressure bump can produce a small-end tail in the particle size distribution with Stokes numbers smaller than this threshold <cit.>. These particles would be accreted by the planets with the same efficiency as gas (i.e. up to 90%; <cit.>)[This implies that only a minority of these small particles reached the inner part of the disk, unlikely to contaminate the so-called solar system isotopic dichotomy <cit.>.]. However, it is unlikely that the presence of these particles would have made the metallicity of the accreted gas super-solar given that, as fragments, they don't represent the bulk of the solid mass and the outer disk metallicity is not expected to be substantially enriched, unlike in the inner disk (Sect. <ref>). Thus we conclude that the enrichment in heavy elements of Jupiter and Saturn is likely due to accretion of planetesimals <cit.> and volatile-element vapors <cit.>.
§ CONCLUSIONS
In this work we have analyzed the dynamics of particles near the outer edge of a gap opened by a giant planet in the gas radial distribution. The edge of a gap is a pressure bump but it also enhances the radial velocity of the gas. Giant planet Type-II migration towards the star is typically faster than the radial velocity of gas in the outer part of the disk, and slower in the inner part. Thus, in the outer disk, giant planets are effective barriers against the flow of dust of any size, because both the positive radial motion of the gas relative to the planet and the pressure bump prevent particle drift into the gap. Instead, in the inner part of the disk the radial flow of gas relative to the planet is in the outside-in direction and can entrain particles with small Stokes number into the gap. We find that for the radial entrainment to ensue despite the existence of a pressure bump, the particles' Stokes number has to be smaller than 10^-4–10^-3, depending on planet mass, disk viscosity and scale height. For particles whose size is limited by the velocity fragmentation threshold, the Stokes number scales inversely with the square of the gas sound-speed and therefore it decreases rapidly in the inner disk. Thus, in the inner disk all conditions are met for typical particles to flow into the gaps opened by giant planets.
As an aside, this implies that the so-called inside-out planet formation model <cit.> is unlikely to be operational. In fact that model relies on the ability of the first planet (either formed at or migrated to the inner edge of the disk) to create a pressure bump where drifting dust particles accumulate until forming a second planet and so on. If even a giant planet is not able to block the drift of particles in the innermost part of the disk, this process should not promote the formation of super-Earth systems.
Returning to giant planet metal-enrichment, we showed that particles, once in the gap, can be accreted by a planet only together with the gas, because of the smallness of the Stokes number. This does not, however, mean that the accreted material has stellar metallicity. In fact, if the dust is allowed to freely drift towards the star (i.e. it is not blocked farther out by a dynamical barrier), it naturally piles up in the inner disk enhancing the local metallicity by an order of magnitude or more. For this reason, if hot-Jupiters accreted a substantial fraction of their envelope in-situ (see <cit.>, but these works should be updated to account for the high opacity of the dust-rich gas demonstrated in this paper), the large enrichments in heavy elements deduced from measurements of the planet's mass and radius could have been achieved by this process.
Although we have based our analysis on an α-disk model, we showed that it also holds if the flow of gas in the disk is dominated by angular momentum removal in magnetized winds. We also showed that, because of the small Stokes number, the transport of gas and dust into the gap in a surface layer of the disk is sufficient to prevent dust pile-up at the outer edge of the gap and ensures that, in a steady state, the full dust flux crosses the orbit of the planet. To sum up, this study has explored the multifaceted relationship between particles, gas flow and giant planet migration near gap edges, contributing to a better understanding of the conditions under which particles can enter these gaps. Our findings may help inform future research on the heavy element enrichment observed in hot-Jupiters.
§ ACKNOWLEDGMENTS
A.M. is grateful for support from the ERC advanced grant HolyEarth N. 101019380 and to Caltech for the visiting professor program that he could benefit from. K. B. is grateful to Caltech's center for comparative planetology, the David and Lucile Packard Foundation, and the National Science Foundation (grant number: AST 2109276) for their generous support.
[Armitage(2011)]2011ARA A..49..195A Armitage, P. J. 2011, , 49, 195. doi:10.1146/annurev-astro-081710-102521
[Aoyama & Bai(2023)]2023ApJ...946....5A Aoyama, Y. & Bai, X.-N. 2023, , 946, 5. doi:10.3847/1538-4357/acb81f
[Batygin & Stevenson(2010)]2010ApJ...714L.238B Batygin, K. & Stevenson, D. J. 2010, , 714, L238. doi:10.1088/2041-8205/714/2/L238
[Batygin et al.(2016)]2016ApJ...829..114B Batygin, K., Bodenheimer, P. H., & Laughlin, G. P. 2016, , 829, 114. doi:10.3847/0004-637X/829/2/114
[Batygin & Morbidelli(2020)]2020ApJ...894..143B Batygin, K. & Morbidelli, A. 2020, , 894, 143. doi:10.3847/1538-4357/ab8937
[Batygin & Morbidelli(2022)]2022A A...666A..19B Batygin, K. & Morbidelli, A. 2022, , 666, A19. doi:10.1051/0004-6361/202243196
[Birnstiel et al.(2009)]2009A A...503L...5B Birnstiel, T., Dullemond, C. P., & Brauer, F. 2009, , 503, L5. doi:10.1051/0004-6361/200912452
[Bitsch et al.(2014)]2014A A...570A..75B Bitsch, B., Morbidelli, A., Lega, E., et al. 2014, , 570, A75. doi:10.1051/0004-6361/201424015
[Bitsch et al.(2015)]2015A A...575A..28B Bitsch, B., Johansen, A., Lambrechts, M., et al. 2015, , 575, A28. doi:10.1051/0004-6361/201424964
[Bitsch et al.(2018)]2018A A...612A..30B Bitsch, B., Morbidelli, A., Johansen, A., et al. 2018, , 612, A30. doi:10.1051/0004-6361/201731931
[Bodenheimer et al.(2000)]2000Icar..143....2B Bodenheimer, P., Hubickyj, O., & Lissauer, J. J. 2000, , 143, 2. doi:10.1006/icar.1999.6246
[Bodenheimer et al.(2003)]2003ApJ...592..555B Bodenheimer, P., Laughlin, G., & Lin, D. N. C. 2003, , 592, 555. doi:10.1086/375565
[Bryden et al.(1999)]1999ApJ...514..344B Bryden, G., Chen, X., Lin, D. N. C., et al. 1999, , 514, 344. doi:10.1086/306917
[Burrows et al.(2007)]2007ApJ...661..502B Burrows, A., Hubeny, I., Budaj, J., et al. 2007, , 661, 502. doi:10.1086/514326
[Chabrier & Baraffe(2007)]2007ApJ...661L..81C Chabrier, G. & Baraffe, I. 2007, , 661, L81. doi:10.1086/518473
[Chatterjee & Tan(2014)]2014ApJ...780...53C Chatterjee, S. & Tan, J. C. 2014, , 780, 53. doi:10.1088/0004-637X/780/1/53
[Crida et al.(2006)]2006Icar..181..587C Crida, A., Morbidelli, A., & Masset, F. 2006, , 181, 587. doi:10.1016/j.icarus.2005.10.007
[Dipierro et al.(2018)]2018MNRAS.479.4187D Dipierro, G., Laibe, G., Alexander, R., Hutchison, M. 2018. Gas and multispecies dust dynamics in viscous protoplanetary discs: the importance of the dust back-reaction. Monthly Notices of the Royal Astronomical Society 479, 4187–4206. doi:10.1093/mnras/sty1701
[Drażkowska et al.(2016)]2016A A...594A.105D Drażkowska, J., Alibert, Y., & Moore, B. 2016, , 594, A105. doi:10.1051/0004-6361/201628983
[Duffell et al.(2014)]2014ApJ...792L..10D Duffell, P. C., Haiman, Z., MacFadyen, A. I., et al. 2014, , 792, L10. doi:10.1088/2041-8205/792/1/L10
[Dullemond et al.(2018)]2018ApJ...869L..46D Dullemond, C. P., Birnstiel, T., Huang, J., et al. 2018, , 869, L46. doi:10.3847/2041-8213/aaf742
[Dürmann & Kley(2015)]2015A A...574A..52D Dürmann, C. & Kley, W. 2015, , 574, A52. doi:10.1051/0004-6361/201424837
[Flock et al.(2016)]2016ApJ...827..144F Flock, M., Fromang, S., Turner, N. J., et al. 2016, , 827, 144. doi:10.3847/0004-637X/827/2/144
[Griveaud et al.(2023)]2023arXiv230304652G Griveaud, P., Crida, A., & Lega, E. 2023, arXiv:2303.04652. doi:10.48550/arXiv.2303.04652
[Guillot & Showman(2002)]2002A A...385..156G Guillot, T. & Showman, A. P. 2002, , 385, 156. doi:10.1051/0004-6361:20011624
[Guillot et al.(2006)]2006A A...453L..21G Guillot, T., Santos, N. C., Pont, F., et al. 2006, , 453, L21. doi:10.1051/0004-6361:20065476
[Guillot & Hueso(2006)]2006MNRAS.367L..47G Guillot, T. & Hueso, R. 2006, , 367, L47. doi:10.1111/j.1745-3933.2006.00137.x
[Guillot(2008)]2008PhST..130a4023G Guillot, T. 2008, Physica Scripta Volume T, 130, 014023. doi:10.1088/0031-8949/2008/T130/014023
[Ida et al.(2016)]2016A A...591A..72I Ida, S., Guillot, T., & Morbidelli, A. 2016, , 591, A72. doi:10.1051/0004-6361/201628099
[Kley(1999)]1999MNRAS.303..696K Kley, W. 1999, , 303, 696. doi:10.1046/j.1365-8711.1999.02198.x
[Knierim et al.(2022)]2022A A...658L...7K Knierim, H., Batygin, K., & Bitsch, B. 2022, , 658, L7. doi:10.1051/0004-6361/202142588
[Kruijer et al.(2017)]2017PNAS..114.6712K Kruijer, T. S., Burkhardt, C., Budde, G., et al. 2017, Proceedings of the National Academy of Science, 114, 6712. doi:10.1073/pnas.1704461114
[Lambrechts et al.(2014)]2014A A...572A..35L Lambrechts, M., Johansen, A., & Morbidelli, A. 2014, , 572, A35. doi:10.1051/0004-6361/201423814
[Lambrechts et al.(2019)]2019A A...630A..82L Lambrechts, M., Lega, E., Nelson, R. P., et al. 2019, , 630, A82. doi:10.1051/0004-6361/201834413
[Laughlin et al.(2011)]2011ApJ...729L...7L Laughlin, G., Crismani, M., & Adams, F. C. 2011, , 729, L7. doi:10.1088/2041-8205/729/1/L7
[Lega et al.(2022)]2022A A...658A..32L Lega, E., Morbidelli, A., Nelson, R. P., et al. 2022, , 658, A32. doi:10.1051/0004-6361/202141675
[Lesur(2021)]2021A A...650A..35L Lesur, G. R. J. 2021, , 650, A35. doi:10.1051/0004-6361/202040109
[Levrard et al.(2007)]2007A A...462L...5L Levrard, B., Correia, A. C. M., Chabrier, G., et al. 2007, , 462, L5. doi:10.1051/0004-6361:20066487
[Li & Youdin(2021)]2021ApJ...919..107L Li, R. & Youdin, A. N. 2021, , 919, 107. doi:10.3847/1538-4357/ac0e9f
[Lubow et al.(1999)]1999ApJ...526.1001L Lubow, S. H., Seibert, M., & Artymowicz, P. 1999, , 526, 1001. doi:10.1086/308045
[Masset & Snellgrove(2001)]2001MNRAS.320L..55M Masset, F. & Snellgrove, M. 2001, , 320, L55. doi:10.1046/j.1365-8711.2001.04159.x
[Morbidelli & Crida(2007)]2007Icar..191..158M Morbidelli, A. & Crida, A. 2007, , 191, 158. doi:10.1016/j.icarus.2007.04.001
[Moutou et al.(2013)]2013Icar..226.1625M Moutou, C., Deleuil, M., Guillot, T., et al. 2013, , 226, 1625. doi:10.1016/j.icarus.2013.03.022
[Nakagawa et al.(1986)]1986Icar...67..375N Nakagawa, Y., Sekiya, M., Hayashi, C. 1986. Settling and growth of dust particles in a laminar phase of a low-mass solar nebula. Icarus 67, 375–390. doi:10.1016/0019-1035(86)90121-1
[Riols et al.(2020)]2020A A...639A..95R Riols, A., Lesur, G., & Menard, F. 2020, , 639, A95. doi:10.1051/0004-6361/201937418
[Robert et al.(2018)]2018A A...617A..98R Robert, C. M. T., Crida, A., Lega, E., et al. 2018, , 617, A98. doi:10.1051/0004-6361/201833539
[Schneider & Wurm(2019)]2019ApJ...886L..36S Schneider, N. & Wurm, G. 2019, , 886, L36. doi:10.3847/2041-8213/ab55e0
[Schneider & Bitsch(2021)]2021A A...654A..71S Schneider, A. D. & Bitsch, B. 2021, , 654, A71. doi:10.1051/0004-6361/202039640
[Shibata et al.(2020)]2020A A...633A..33S Shibata, S., Helled, R., & Ikoma, M. 2020, , 633, A33. doi:10.1051/0004-6361/201936700
[Shibata et al.(2022)]2022A A...659A..28S Shibata, S., Helled, R., & Ikoma, M. 2022, , 659, A28. doi:10.1051/0004-6361/202142180
[Stammler et al.(2023)]2023A A...670L...5S Stammler, S. M., Lichtenberg, T., Drażkowska, J., et al. 2023, , 670, L5. doi:10.1051/0004-6361/202245512
[Stevenson(1982)]1982AREPS..10..257S Stevenson, D. J. 1982, Annual Review of Earth and Planetary Sciences, 10, 257. doi:10.1146/annurev.ea.10.050182.001353
[Takeuchi & Lin(2002)]2002ApJ...581.1344T Takeuchi, T. & Lin, D. N. C. 2002, , 581, 1344. doi:10.1086/344437
[Thorngren et al.(2016)]2016ApJ...831...64T Thorngren, D. P., Fortney, J. J., Murray-Clay, R. A., et al. 2016, , 831, 64. doi:10.3847/0004-637X/831/1/64
[Thorngren & Fortney(2018)]2018AJ....155..214T Thorngren, D. P. & Fortney, J. J. 2018, , 155, 214. doi:10.3847/1538-3881/aaba13
[Venturini & Helled(2020)]2020A A...634A..31V Venturini, J. & Helled, R. 2020, , 634, A31. doi:10.1051/0004-6361/201936591
[Weber et al.(2018)]2018ApJ...854..153W Weber, P., Benítez-Llambay, P., Gressel, O., et al. 2018, , 854, 153. doi:10.3847/1538-4357/aaab63
[Youdin & Mitchell(2010)]2010ApJ...721.1113Y Youdin, A. N. & Mitchell, J. L. 2010, , 721, 1113. doi:10.1088/0004-637X/721/2/1113
|
http://arxiv.org/abs/2306.03607v1
|
20230606115009
|
Buying Information for Stochastic Optimization
|
[
"Mingchen Ma",
"Christos Tzamos"
] |
cs.DS
|
[
"cs.DS",
"cs.LG"
] |
Buying Information for Stochastic Optimization

Mingchen Ma, Christos Tzamos
Department of Computer Sciences, University of Wisconsin-Madison, Madison, WI, USA
Correspondence to: Mingchen Ma.
Stochastic optimization is one of the central problems in Machine Learning and Theoretical Computer Science. In the standard model, the algorithm is given a fixed distribution known in advance. In practice though, one may acquire extra information at a cost to make better decisions.
In this paper, we study how to buy information for stochastic optimization and formulate this question as an online learning problem. Assuming the learner has an oracle for the original optimization problem, we design a 2-competitive deterministic algorithm and an e/(e-1)-competitive randomized algorithm for buying information. We show that these ratios are tight, as
the problem is equivalent to a robust generalization of the ski-rental problem, which we call super-martingale stopping.
We also consider an adaptive setting where the learner can choose to buy information after taking some actions for the underlying optimization problem. We focus on the classic optimization problem, Min-Sum Set Cover, where the goal is to quickly find an action that covers a given request drawn from a known distribution. We provide an 8-competitive algorithm running in polynomial time that chooses actions and decides when to buy information about the underlying request.
§ INTRODUCTION
§.§ Offline and Adaptive Stochastic Optimization
Stochastic optimization is one of the core problems in machine learning and theoretical computer science. In stochastic optimization, the input parameters of the problem are random variables drawn from a known distribution. Given the distribution of the parameters, a learner constructs a feasible solution in advance (offline stochastic optimization) or adaptively (adaptive stochastic optimization) to optimize the objective function in expectation. Formally, the two types of stochastic optimization problems can be defined in the following way.
Let 𝒮 be a set of scenarios and 𝒜 be a set of actions. Let ℓ(A,s): 2^𝒜×𝒮→ R_+ be a loss function. An offline stochastic optimization problem (𝒜,𝒮,ℓ,𝒟) is to find a set of actions A ⊆𝒜 that minimizes 𝔼_s ∼𝒟 ℓ(A,s), where 𝒟 is a distribution over 𝒮.
Let 𝒮 be a set of scenarios and 𝒜 be a set of actions.
Initially, a random scenario s is drawn according to a distribution 𝒟.
Then, the learner sequentially chooses actions a_1,a_2,… and after the t-th action observes a (possibly randomized) outcome r( (a_1,a_2,…,a_t), s) in an outcome space 𝒪. The goal of the learner is to take a sequence of actions A that minimizes 𝔼_s ∼𝒟 ℓ(A,s) for a given loss function ℓ(A,s), possibly exploiting the information gained about s along the way.
A huge body of work among different communities such as machine learning, theoretical computer science, statistics, and operations research has studied stochastic optimization problems given their numerous applications.
For example, methods of offline stochastic optimization have been widely applied to problems such as training machine learning models <cit.> and mechanism design <cit.>. On the other hand, many adaptive stochastic optimization problems such as Pandora's Box problem <cit.>, active learning <cit.> and optimal decision tree <cit.> have also been applied to areas like artificial intelligence, microeconomics, and operations research.
A common assumption in these works is that the distribution 𝒟 of the scenario s is taken as given. However, such an assumption is not realistic in practice. A learner in practice has many ways to gain extra knowledge about the optimization problem he is going to solve. With this extra knowledge, it is reasonable that the learner updates the prior distribution 𝒟 to some posterior distribution 𝒟' and uses a better strategy to solve the problem.
As a concrete example, consider n bidders that compete over an item in an auction.
Classic auction theory assumes that the auctioneer only knows a prior distribution over the buyer values and wants to design an auction to optimize a target objective such as welfare or revenue. In practice though, there is a number of information sources available to the auctioneer that provide information about the bidders such as their demographics, their preferences or their purchase history. Such information can be very useful.
However, this information does not come for free. It may cost significant amounts of money or time and it is not clear in advance, how helpful this information will be. In the example, the auctioneer may pay an information provider only to receive irrelevant pieces of information or information already known.
§.§ Our Contribution and Techniques
In this paper, we study the problem of buying information for stochastic optimization. We consider a learner that wants to minimize the total cost spent on solving the optimization problem and the cost of acquiring information.
We model the information acquisition process using a signaling scheme <cit.>. A signaling scheme is a (randomized) function f from the set of scenarios 𝒮 to a signal space 𝒴. If a learner asks for feedback from f, he will receive a signal y and the prior distribution can be updated as 𝒟|_f(s)=y. In our model, we assume there is a sequence of signaling schemes ℱ={f_t}_t=0^∞ arriving online.
At any timestep t, based on the signals received so far, the learner has the choice to continue purchasing the next signal given by f_t or stop.
Our goal is to construct a learner whose cost is competitive with that of a prophet who knows the structure of ℱ in advance and can take optimal actions.
For offline optimization, all signals must be purchased before taking any actions in the underlying stochastic optimization problem.
We assume the learner is able to compute an (approximate) optimal solution for the underlying problem given the available information at any point in time. The goal of the learner is then to adaptively decide when to stop buying feedback. Our main results in this setting are summarized below:
There exist a 2-competitive deterministic learner and an e/(e-1)-competitive randomized learner for buying information for offline stochastic optimization.
We show that both learners can be implemented efficiently and have competitive ratios that are information theoretically optimal. Thus, we give a comprehensive understanding of buying feedback for offline stochastic optimization.
To solve the problem, we formulate it as a super-martingale stopping problem:
There is an unknown sequence of random variables (X_0, X_1,…) satisfying 𝔼(X_i+1| X_i) ≤ X_i. The realizations of the random variables arrive online and an algorithm outputs a stopping index i^* adaptively to minimize 𝔼(i^*+X_i^*). The super-martingale stopping problem can be seen as a generalization of the classic ski-rental problem introduced in <cit.>, where all X_i ∈{0,B}, and of its variant introduced in <cit.>, where (X_0,X_1,…) are monotone decreasing constants. In the more general setting of super-martingale stopping though, the values of X_i may not be monotone; they are only monotone in expectation. This makes the problem significantly more challenging and, as we show in Appendix <ref>, natural algorithms for ski-rental problems are not competitive for our problem.
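To make the objective concrete, the following sketch is purely illustrative: the stopping rule below is a naive ski-rental-style baseline of our own, not the algorithm analyzed in this paper, and the toy sequence is a made-up super-martingale. It simulates realizations, applies the baseline rule, and compares its cost to the per-realization hindsight optimum min_i (i + X_i), which lower-bounds the prophet's expected cost.

import random

def sample_sequence(x0=20.0, horizon=60, seed=0):
    # Toy super-martingale: X_{i+1} = X_i * U with U uniform on [0, 1.9],
    # so E[X_{i+1} | X_i] = 0.95 X_i <= X_i while single steps may go up or down.
    rng = random.Random(seed)
    xs = [x0]
    for _ in range(horizon):
        xs.append(xs[-1] * rng.uniform(0.0, 1.9))
    return xs

def naive_rule(xs):
    # Illustrative baseline only: stop once the index paid exceeds the current value.
    for i, x in enumerate(xs):
        if i >= x:
            return i
    return len(xs) - 1

costs, opts = [], []
for seed in range(1000):
    xs = sample_sequence(seed=seed)
    i = naive_rule(xs)
    costs.append(i + xs[i])
    opts.append(min(j + x for j, x in enumerate(xs)))

print(f"avg cost of naive rule: {sum(costs) / len(costs):.2f}")
print(f"avg hindsight optimum:  {sum(opts) / len(opts):.2f}")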
For adaptive stochastic optimization,
it is also natural to intertwine purchasing information with taking actions. For example, several actions may be taken first in the problem and then information may be purchased conditional on their outcome. As this setting is more problem dependent, we focus on a paradigmatic case of adaptive stochastic optimization, where there is a random set of good actions, and the learner takes actions in each round until a good action is chosen. Such a problem is called Min Sum Set Cover (MSSC), a well-studied adaptive stochastic optimization problem <cit.>.
In our model, the learner has an extra action at each round to buy information getting a better estimate of the probability that an action is good.
We provide an algorithm for this problem competitive to a prophet that knows the sequence of signaling schemes in advance:
There is a poly-time learner that is 8-competitive for buying information for Min Sum Set Cover.
We achieve this in two steps. In the first step, we show we can shrink the action space so that we don't need to consider when to buy feedback. We introduce a simpler model called adaptive stochastic optimization with time dependent feedback, where a learner takes an action in each round, and feedback arrives for free after an action is taken. We show in Theorem <ref> that if there is a learner that is α-competitive for adaptive stochastic optimization with time dependent feedback, then we can use it to construct a 2α-competitive learner to buy information for adaptive stochastic optimization. Our second step is to prove the following technical theorem, which is of independent interest.
The greedy algorithm is 4-competitive for MSSC with time dependent feedback.
There is a large body of work on the analysis of greedy algorithms for the min-sum coverage objective under different settings <cit.>. The analysis is usually based on an elegant histogram approach proposed in <cit.>. However, in our model, the decisions made by the learner are fully adaptive and it is hard to adapt such an analysis directly. Instead, we bypass this difficulty and use an interesting linear programming dual approach to analyze the greedy algorithm.
Besides algorithmic results, we also present hard instances to establish information-theoretic lower bounds for MSSC under our models.
§.§ Applications of our Model
Buying information is very common in practice. In fact, our model fits well in both theory and practical applications. In this section, we give several applications of our model. We first give a typical example of buying information for offline stochastic optimization.
Selling One Item with Feedback
There is a seller who wants to sell an item to a buyer. The seller sets a price p for the item. The buyer has a value v for the item and is willing to pay the price p for the item if p ≤ v. However, if p>v, the buyer will not buy the item. Given a pair (v,p), denote by P(v,p)=p 1_p ≤ v the payment of the buyer. The value of the buyer may depend on his nationality, education, or other factors. This information can be collected from historical trades, and thus the seller has a prior distribution 𝒟 over the value v. The goal of the seller is to set the price p to minimize 𝔼_v ∼𝒟(v-P(v,p)). However, instead of setting the price immediately, the seller may pay some money to collect more information about the buyer. This can help the seller update the prior distribution of the value v. In practice, it is hard to predict the quality of the information. The question for the seller is how much information is sufficient for him to set a good price.
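To illustrate the underlying optimization the seller faces at any point (before or after buying signals), here is a small sketch of our own, with a made-up discrete prior, that computes the price minimizing the expected loss 𝔼_v∼𝒟(v - P(v,p)); after each purchased signal, the same routine would simply be re-run on the posterior.

def best_price(prior):
    # prior: dict mapping value -> probability. Loss of price p is E[v - p*1{p<=v}].
    # Only prices equal to some support point need to be considered.
    best_p, best_loss = None, float("inf")
    for p in prior:
        loss = sum(prob * (v - (p if p <= v else 0.0)) for v, prob in prior.items())
        if loss < best_loss:
            best_p, best_loss = p, loss
    return best_p, best_loss

prior = {10: 0.5, 30: 0.3, 100: 0.2}     # hypothetical prior over buyer values
print(best_price(prior))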
Our second example is on buying information for adaptive stochastic optimization.
Optimal Decision Tree with Feedback
A doctor wants to diagnose the disease of a patient. There are n different tests 𝒜=[n] that can be performed by the doctor and m different possible diseases 𝒮=[m]. If the patient has a disease s ∈𝒮 and a test a∈𝒜 is performed, then the doctor will receive an outcome r(s,a). The doctor has a prior distribution 𝒟 over the disease s based on the symptoms of the patient. In the standard optimal decision tree problem, based on the knowledge of 𝒟, the goal of the doctor is to perform a sequence of tests adaptively to identify the disease while minimizing the expected cost of the tests.
In practice, the doctor may choose not to run tests but instead send the patient home to see whether the symptoms worsen.
However, this is also costly and it may be challenging to predict what symptoms will appear and how much time it will take for them to appear. Combined with an algorithm for computing approximately optimal decision trees, our work shows how to incorporate the symptom monitoring component to efficiently identify the disease.
Beyond these applications, our model fits well with many existing theoretical frameworks in learning theory. Here we take adaptive submodular optimization, a recently popular research direction in the field of machine learning as our example.
Adaptive Submodularity with Feedback
Motivated by applications in artificial intelligence, <cit.> introduces the notion of adaptive submodularity, which was a popular research topic in the last decade. A function f(A,s) of a set of actions A and a random scenario s is adaptive submodular if 𝔼_s f(A,s) is a submodular function. After an action a is taken, the learner will see an outcome s(a). Given the distribution of s, the learner constructs the action set A adaptively to optimize classic objectives for submodular functions <cit.> such as submodular maximization, min submodular coverage, and min sum submodular coverage. Many natural questions arise when feedback is involved in this framework. For example, if feedback is costly, how can we buy feedback to help us make adaptive decisions? If the feedback is free and time dependent, are existing policies still competitive?
§.§ Organization of paper
In Section <ref>, we formally introduce the model studied by the paper. In Section <ref>, we introduce the super-martingale stopping problem to study buying information for offline stochastic optimization. We give a tight deterministic algorithm and a tight randomized algorithm for the super-martingale stopping problem. Furthermore, we will discuss the robustness of these algorithms. In Section <ref>, we focus on buying information for adaptive stochastic optimization.
We introduce the model of time dependent feedback and build a connection between adaptive stochastic optimization with time dependent feedback and buying information for adaptive stochastic optimization in Section <ref>. In Section <ref>, we show a simple greedy learner is 4-competitive for Min Sum Set Cover with time dependent feedback. And in Section <ref>, we design an 8-competitive algorithm for buying information for Min Sum Set Cover. Furthermore, we discuss the information theoretic lower bound for Min Sum Set Cover under both settings.
§ STOCHASTIC OPTIMIZATION WITH FEEDBACK
§.§ Feedback Signals for Stochastic Optimization
Let 𝒮 be a set of scenarios and let 𝒟 be a distribution over 𝒮. A randomized signaling scheme f maps each scenario s ∈ 𝒮 to a random variable f(s) over the set of signals. Let s be a scenario drawn from 𝒟. A signal received from f is a realization y of the random variable f(s).
Similarly, a deterministic signaling scheme f maps each scenario in 𝒮 directly to a signal: when a scenario s is drawn, the signal received from f is y=f(s). In particular, any deterministic signaling scheme induces a partition of 𝒮. Given the definition of a signaling scheme, we are able to define feedback for stochastic optimization problems.
Let (𝒜,𝒮,ℓ,𝒟) be a stochastic optimization problem. A sequence of feedback ℱ={f_t}_{t=0}^∞ is a sequence of unknown randomized (deterministic) signaling schemes. The t-th feedback received by a learner is the pair (y_t, 𝒟|_{f_t(s)=y_t, f_{t-1}(s)=y_{t-1},…,f_0(s)=y_0}), where y_t is the signal received from f_t.
For convenience, we assume throughout the paper that f_0 is constant on every scenario. This assumption reflects the fact that the learner has no extra knowledge at time 0.
In fact, for our model, randomized signaling schemes are equivalent to deterministic ones; we defer a discussion of this to Appendix <ref>.
In this paper, we therefore consider deterministic signaling schemes, which simplify our analysis and provide more intuition.
In particular, if each signaling scheme f ∈ ℱ is deterministic, then ℱ can be represented as a tree. For such feedback ℱ, we define a feedback tree T(ℱ) as follows.
Let ℱ={f_t}_{t=0}^∞ be a sequence of deterministic signaling schemes. The feedback tree T(ℱ) for ℱ is defined as follows. Each node v ∈ T(ℱ) contains a set of scenarios, and the children of v form a partition of the set of scenarios contained in v. The root of T(ℱ) contains all scenarios. For every s ∈ 𝒮, let P(s)=(v_0,v_1,…,v_n) be the longest path in T(ℱ) such that every node in P(s) contains s. Then the set of scenarios contained in v_i is defined by {s' ∈ v_{i-1} | f_i(s')=f_i(s)}.
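For intuition, the following sketch builds the feedback tree defined above by recursive partitioning. The scenarios and the signaling schemes are hypothetical (bit strings whose t-th bit is revealed by the t-th scheme; the constant scheme f_0 is omitted); the example is illustrative only.

from collections import defaultdict

def feedback_tree(scenarios, schemes, depth=0):
    """Node = (set of scenarios, list of children); children partition the node."""
    if depth == len(schemes) or len(scenarios) <= 1:
        return (scenarios, [])
    groups = defaultdict(list)
    for s in scenarios:
        groups[schemes[depth](s)].append(s)       # split by the next signal
    return (scenarios, [feedback_tree(g, schemes, depth + 1)
                        for g in groups.values()])

# Hypothetical example: scenarios are bit strings and the t-th scheme reveals bit t.
scenarios = ["000", "011", "101", "110"]
schemes = [lambda s, t=t: s[t] for t in range(3)]
root = feedback_tree(scenarios, schemes)
print(root)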
§.§ Problem Formulation
Although feedback is helpful for a learner to make better decisions for stochastic optimization problems, obtaining feedback always requires some cost. The cost can be either time or money. Thus, it is natural for a learner to consider how to balance the cost of asking for feedback and the cost of solving the optimization problem. We consider formulating this problem in an online fashion for offline and adaptive stochastic optimization problems.
Let (𝒜,𝒮_0,ℓ,𝒟_0) be an offline stochastic optimization problem and let ℱ={f_t}_{t=0}^∞ be a sequence of unknown feedback. Let 𝒞={c_t}_{t=0}^∞ be a sequence of costs for receiving a signal from f_{t+1} ∈ ℱ; here, c_t is a nonnegative function that depends on the last received signal. In each time round t ≥ 0, a learner receives an offline stochastic optimization problem (𝒜,𝒮_t,ℓ,𝒟_t) and a cost c_t(y_t) to obtain a signal from f_{t+1}, where y_t is the signal received from f_t. Here, 𝒮_t is the set of scenarios in 𝒮_{t-1} consistent with the signal y_t, and 𝒟_t=𝒟_{t-1}|_{f_t(s)=y_t} for t ≥ 1. The learner can either stop and pay ∑_{j=0}^{t-1} c_j(y_j) + min_{A ⊆ 𝒜} 𝔼_{s ∼ 𝒟_t} ℓ(A,s), or enter the next time round. An offline stochastic optimization problem with feedback (𝒜,𝒮,ℓ,𝒟,ℱ,𝒞) is to decide a stopping time T adaptively to minimize 𝔼_T(∑_{j=0}^{T-1} c_j(y_j) + min_{A ⊆ 𝒜} 𝔼_{s ∼ 𝒟_T} ℓ(A,s)).
Let I=(𝒜,𝒮,ℓ,𝒟,ℱ,𝒞) be an instance of offline stochastic optimization with feedback, and denote by cost(L,I) the expected cost of the stopping time output by a learner L on the given instance.
A learner L is α-competitive if for every instance I, cost(L,I) ≤ α·OPT(I) = α·min_{L'} cost(L',I).
We can describe the problem more intuitively in terms of the feedback tree. Let (𝒜,𝒮,ℓ,𝒟) be a stochastic optimization problem and T(ℱ) a feedback tree. Each node v of T(ℱ) represents a new stochastic optimization problem (𝒜,𝒮_v,ℓ,𝒟_v), where 𝒮_v is the set of scenarios contained in v and 𝒟_v=𝒟|_{s ∈ 𝒮_v}. Solving this optimization problem incurs a cost min_{A ⊆ 𝒜} 𝔼_{s ∼ 𝒟_v} ℓ(A,s).
Each node also has a cost c_v to move down one step. The stochastic optimization problem and the cost are revealed to the learner when the learner reaches v. T(ℱ) is unknown to the learner, and a path of T(ℱ) is selected according to 𝒟 initially. The learner keeps moving along the path by paying the costs c_v and decides when to stop and solve the optimization problem. The benchmark we compare against is a learner who knows the whole feedback tree in advance and can therefore compute the optimal stopping time.
Let (𝒜,𝒮,ℓ,𝒟) be an adaptive stochastic optimization problem, let ℱ={f_t}_{t=0}^∞ be a sequence of unknown feedback, and let 𝒞={c_t}_{t=0}^∞ be a sequence of costs for receiving a signal from f_{t+1} ∈ ℱ. Here, c_t is a nonnegative function that depends on the last received signal.
Initially, a scenario s is drawn according to 𝒟. In each time round t, a learner first adaptively receives an arbitrary number of signals y(s) from the sequence by paying the corresponding costs, and then selects an action a_t ∈ 𝒜. Let T(s) be the number of signals received by the learner if s is drawn. An adaptive stochastic optimization problem with feedback is to decide, adaptively in each time round, when to ask for feedback and which actions to take so as to minimize 𝔼_{s ∼ 𝒟}(ℓ(A,s) + ∑_{j=0}^{T(s)-1} c_j(y_j(s))).
Let I=(𝒜,𝒮,ℓ,𝒟,ℱ,𝒞) be an instance of adaptive stochastic optimization with feedback, and denote by cost(L,I) the expected cost of the decisions made by a learner L on the given instance.
A learner L is α-competitive if for every instance I, cost(L,I) ≤ α·OPT(I) = α·min_{L'} cost(L',I).
§ BUYING INFORMATION FOR OFFLINE STOCHASTIC OPTIMIZATION AND SUPER-MARTINGALE STOPPING PROBLEM
Let (𝒜,𝒮,ℓ,𝒟) be an offline stochastic optimization problem and f a signaling scheme. Denote by 𝒟_y the posterior distribution of 𝒟 after receiving signal y from f. Although it is possible that min_{A ⊆ 𝒜} 𝔼_{s ∼ 𝒟} ℓ(A,s) < min_{A ⊆ 𝒜} 𝔼_{s ∼ 𝒟_y} ℓ(A,s), it is always true that
𝔼_y min_{A ⊆ 𝒜} 𝔼_{s ∼ 𝒟_y} ℓ(A,s) ≤ min_{A ⊆ 𝒜} 𝔼_{s ∼ 𝒟} ℓ(A,s).
That is to say, feedback is always helpful in expectation. This implies that the sequence of minimum values of the stochastic optimization problems forms a super-martingale. Formally,
given a sequence of feedback ℱ, denote by 𝒟_i the posterior distribution after receiving signals from f_0,f_1,…,f_i. Let the random variable X_i = min_{A ⊆ 𝒜} 𝔼_{s ∼ 𝒟_i} ℓ(A,s). Then for every i ≥ 0, we have 𝔼(X_{i+1} | X_i) ≤ X_i. This motivates us to formulate the problem of buying information as the following super-martingale stopping problem. As we discuss in Appendix <ref>, the super-martingale stopping problem is equivalent to buying information for offline stochastic optimization.
§.§ Super-Martingale Stopping Problem
Let X_0,X_1,…,X_n be a sequence of nonnegative random variables unknown to the learner. Assume that for every i, 𝔼(X_{i+1} | X_i) ≤ X_i. The problem has n+1 rounds. In the i-th round, given the observed realization of X_0,…,X_i, a learner decides either to stop and pay i+X_i, or to obtain the realization of X_{i+1} and move to the next round. The goal of the learner is to compute a decision rule that yields a stopping time i^*, based only on the observed realization of the sequence, minimizing 𝔼(i^*+X_{i^*}).
For convenience, we assume X_0 is a constant throughout the paper.
Suppose each random variable X_i has finite support; then the sequence can be represented by a tree T, where a node v at depth i stores a realization of X_i. To simplify notation, we use v to denote both the node and the value stored at the node.
When we make a single move from node v, we reach a child v' of v with probability ℙ(X_{i+1}=v' | X_i=v). An optimal learner knows the tree T in advance and can decide in advance at which nodes to stop so as to optimize the expected cost. Formally, a set of stopping nodes S is feasible for T if every path of T of length n contains one and only one stopping node. The cost of S is ∑_{v ∈ S} ℙ(v)(dep(v)+v), where ℙ(v) is the probability of reaching v and dep(v) is the depth of v. We denote by OPT(T) the minimum cost among all feasible sets of stopping nodes of T. An algorithm is α-competitive if for every instance of the super-martingale stopping problem with representation T, the expected cost of the algorithm ALG(T)=𝔼(i^*+X_{i^*}) satisfies ALG(T) ≤ α·OPT(T).
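Since the benchmark is defined over a known tree, it can be computed by a straightforward dynamic program. The following minimal sketch (with a hypothetical two-step instance) is purely illustrative.

def opt(node):
    """node = (value, [(prob, child), ...]); leaves have no children."""
    value, children = node
    if not children:
        return value                                  # must stop at a leaf
    go_on = 1.0 + sum(p * opt(child) for p, child in children)
    return min(value, go_on)                          # stop now, or pay 1 and continue

# Hypothetical instance: X_0 = 3, then X_1 is 0 or 6 with probability 1/2 each.
tree = (3.0, [(0.5, (0.0, [])), (0.5, (6.0, []))])
print(opt(tree))   # min(3, 1 + 0.5*0 + 0.5*6) = 3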
In the ski-rental problem studied in <cit.>, there is a pair of positive numbers (B,T) such that X_i=B if i<T and X_i=0 if i ≥ T.
This implies that the ski-rental problem is a special case of the super-martingale stopping problem. Thus, we have the following information theoretic lower bounds for the super-martingale stopping problem.
For every ϵ>0, no randomized algorithm is (e/(e-1)-ϵ)-competitive for the super-martingale stopping problem.
For every ϵ>0, no deterministic algorithm is (2-ϵ)-competitive for the super-martingale stopping problem.
Recall that the key idea in the design of algorithms for the ski-rental problem is to balance the payment X_i and the index i. However, this idea cannot simply be applied to the super-martingale stopping problem.
There are two difficulties in the super-martingale stopping problem. First, since any algorithm only receives information from one path of the tree, it is hard to estimate the expected stopping time for the whole tree. Second, unlike most ski-rental type problems, the value X_i is not necessarily decreasing: it is possible that an algorithm moves one step and then sees an X_i with a very large value. We show in Appendix <ref> that some natural algorithms that work for ski-rental problems are not competitive for the super-martingale stopping problem. On the other hand, in Appendix <ref>, we establish a simple randomized 2-competitive algorithm for the super-martingale stopping problem using a completely novel idea. Although the algorithm we present in Appendix <ref> shows that competitive algorithms do exist for the super-martingale stopping problem, its competitive ratio does not match the information theoretic lower bounds in Theorem <ref> and Theorem <ref>.
In the following sections, we give a tight deterministic algorithm and a tight randomized algorithm for the super-martingale stopping problem. Furthermore, we also discuss the robustness of these algorithms when the input is not a super-martingale.
The key idea in designing our algorithms is to maintain the following estimator Q_p(t) throughout their execution.
Let (X_0,…,X_n) be an instance of super-martingale stopping and let T be its tree representation.
Initially, a path p=(v_0,v_1,…,v_n) of T is drawn randomly according to the joint distribution of (X_0,…,X_n). We define a function v_p(t)=v_i if t ∈ [i,i+1). Furthermore, we define Q_p(t)=∫_0^t 1/v_p(w) dw. In particular, Q_p(t) only depends on the observed realization and does not depend on the realizations of the random variables we have not yet seen. Note that Q_p(t) is strictly increasing in t, so for every s ≥ 0 we can define its inverse Q^{-1}_p(s)=t, where Q_p(t)=s. The power of Q is that it can be used to upper and lower bound the optimal stopping time, as summarized by the following two lemmas, which we use frequently in our proofs. The proof of Lemma <ref> can be found in Appendix <ref> due to space constraints.
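The estimator and its inverse are easy to evaluate for a piecewise constant path. The following sketch (purely illustrative, with a hypothetical realized path) mirrors the definitions of v_p, Q_p and Q_p^{-1}.

def Q(t, values):
    """Q_p(t) = integral of 1/v_p over [0, t]; values[i] is v_i on [i, i+1)."""
    total, i = 0.0, 0
    while i + 1 <= t:
        total += 1.0 / values[i]
        i += 1
    return total + (t - i) / values[i] if i < len(values) else total

def Q_inv(s, values):
    """Smallest t with Q_p(t) = s (infinity if s is never reached)."""
    total = 0.0
    for i, v in enumerate(values):
        if total + 1.0 / v >= s:
            return i + (s - total) * v
        total += 1.0 / v
    return float("inf")

values = [4.0, 2.0, 1.0]                     # a hypothetical realized path
print(Q(2.5, values), Q_inv(1.0, values))    # 1.25 and 2.25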
Let T be a tree representation of an instance of the super-martingale stopping problem and let p be a path of T. Then for every s>r>0, Q^-1_p(s)-Q^-1_p(r) = ∫_r^s v_p(Q^-1_p(w))dw.
The proof follows by a change of variables. Writing w=Q_p(t), we have
∫_r^s v_p(Q^-1_p(w))dw = ∫_Q^-1_p(r)^Q^-1_p(s) v_p(t)dQ_p(t)
= ∫_Q^-1_p(r)^Q^-1_p(s)v_p(t)/v_p(t)dt = Q^-1_p(s)-Q^-1_p(r).
Let v ∈ T be a node with depth i and let {p_j}_{j=1}^k be the set of paths that pass through v. For every ρ^* ∈ [Q_{p_j}(i),Q_{p_j}(i+1)] and for every ρ ≥ ρ^*, ∑_{j=1}^k ℙ(p_j)(Q_{p_j}^{-1}(ρ)-Q_{p_j}^{-1}(ρ^*)) ≤ ℙ(v)(ρ-ρ^*)v.
§.§ A Tight Deterministic Algorithm for Martingale Stopping
In this section, we propose a simple deterministic 2-competitive algorithm for the super-martingale stopping problem. The competitive ratio is tight according to Theorem <ref>. We defer the proof to Appendix <ref> due to space constraints.
There is a deterministic poly-time algorithm that is 2-competitive for the super-martingale stopping problem.
In particular, if the sequence of random variables is monotone decreasing, then our algorithm can even compete against a prophet who knows the realization of the sequence in advance.
Let I be an instance of the super-martingale stopping problem and (X_0,X_1,…) the input sequence. Denote by ALG(I) the expected cost of Algorithm <ref> on instance I. If (X_0,X_1,…) is monotone decreasing, then ALG(I) ≤ 2·𝔼 min_i (i+X_i).
Let x=(x_0,x_1,…) be a realization of (X_0,X_1,…) and denote by ALG(x) the cost of Algorithm <ref> when the realization is x. Since x is monotone decreasing, we have ALG(x) ≤ 2 min_i (i+x_i). Thus,
ALG(I) ≤ 𝔼_x 2 min_i (i+x_i) = 2·𝔼 min_i (i+X_i).
§.§ A Tight Randomized Algorithm for Martingale Stopping
In this section, we extend the idea of Theorem <ref> to obtain an (e/(e-1))-competitive randomized algorithm for the super-martingale stopping problem. Notice that according to Theorem <ref>, this competitive ratio is tight. Recall that in Algorithm <ref>, we maintain an estimator Q_P(t) throughout the execution of the algorithm and stop when Q_P(t)=1. To obtain a better randomized algorithm, we select a random threshold ρ initially and stop when Q_P(t) exceeds this threshold. The proof of Theorem <ref> can be found in Appendix <ref>.
There is a randomized poly-time algorithm for the super-martingale stopping problem that is e/e-1-competitive.
Similarly, we have the following corollary when the input sequence is monotone decreasing.
Let I be an instance of the super-martingale stopping problem and (X_0,X_1,…) the input sequence. Denote by ALG(I) the expected cost of Algorithm <ref> on instance I. If (X_0,X_1,…) is monotone decreasing, then ALG(I) ≤ (e/(e-1))·𝔼 min_i (i+X_i).
Let x=(x_0,x_1,…) be a realization of (X_0,X_1,…) and denote by ALG(x) the cost of Algorithm <ref> when the realization is x. Since x is monotone decreasing, we have ALG(x) ≤ (e/(e-1)) min_i (i+x_i). Thus,
ALG(I) ≤ 𝔼_x (e/(e-1)) min_i (i+x_i) = (e/(e-1))·𝔼 min_i (i+X_i).
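For concreteness, the two stopping rules can be sketched as follows. This is an integer-round simplification of the continuous-time rule based on Q_P(t) used in the analysis, and the threshold density e^ρ/(e-1) on [0,1] is taken from the analysis in the appendix; treat the sketch as an illustration rather than a verbatim restatement of the algorithms.

import math, random

def threshold_stop(observe, threshold=1.0, max_rounds=10**6):
    """observe(i) returns the realized X_i; stop once Q reaches the threshold."""
    q = 0.0
    for i in range(max_rounds):
        x = observe(i)
        if q >= threshold or x == 0:
            return i + x                  # pay index + current value
        q += 1.0 / x                      # Q grows at rate 1/X_i on [i, i+1)
    raise RuntimeError("no stopping point found")

def random_threshold():
    u = random.random()                   # invert the CDF (e^rho - 1)/(e - 1)
    return math.log(1.0 + u * (math.e - 1.0))

# threshold_stop(observe) is the deterministic rule;
# threshold_stop(observe, random_threshold()) is the randomized rule.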
§.§ A Discussion on Benchmark
In this section, we discuss the benchmark of the super-martingale stopping problem. According to Corollary <ref> and Corollary <ref>, if the input sequence is monotone decreasing, then our algorithms can compete with a prophet who knows the realization of the sequence in advance. In general, however, it is not possible to compete against such a strong benchmark, since the gap between the two benchmarks can be arbitrarily large. Thus, it is only reasonable to compete with an algorithm that knows the structure of the feedback in advance. We formalize this discussion in the following theorem, whose proof is in Appendix <ref>.
No algorithm is competitive against the benchmark 𝔼 min_i (i+X_i) for the super-martingale stopping problem.
§.§ On the Robustness of Algorithm <ref> and Algorithm <ref>
In this part, we consider the robustness of Algorithm <ref> and Algorithm <ref>. Recall our motivation of buying information for offline stochastic optimization. In that model, we assume that, given a stochastic optimization problem, the learner can solve it exactly. However, since most stochastic optimization problems are NP-hard, the learner may only have an α-approximation algorithm for solving them. If X_i is the optimal value of the stochastic optimization problem after receiving the i-th feedback, then the cost the learner pays to solve the problem is instead some X̃_i ∈ [X_i, αX_i]. That is to say, if the learner stops at X̃_i, he pays i+X̃_i. We remark that in this case X̃_0,…,X̃_n may no longer satisfy the super-martingale property, so we cannot apply the analyses of Algorithm <ref> and Algorithm <ref> directly. However, we show that the two algorithms are robust under such perturbations. In other words, Algorithm <ref> is 2α-competitive and Algorithm <ref> is (e/(e-1))α-competitive. Formally, we have the following theorem, whose proof is deferred to Appendix <ref>.
Let T be a tree representation of an instance of the super-martingale stopping problem. Let T̃ be any tree constructed by changing the value of every leaf v ∈ T to some value ṽ ∈ [v, αv]. If we run Algorithm <ref> on T̃, then ALG(T̃) ≤ 2α·OPT(T), and
if we run Algorithm <ref> on T̃, then ALG(T̃) ≤ (e/(e-1))α·OPT(T).
§ BUYING INFORMATION FOR ADAPTIVE STOCHASTIC OPTIMIZATION AND PROPHET INEQUALITY
Unlike offline stochastic optimization with feedback, buying information for adaptive stochastic optimization is much more problem dependent. For this reason, we consider designing competitive learners that buy information for specific problems. We choose Min Sum Set Cover, an extreme case of adaptive stochastic optimization, as the first problem studied under the feedback setting.
Let ℬ=[n] be a set of boxes, where each box i contains an unknown number b_i ∈ {0,1}. A learner can learn b_i by querying box i, i.e., the action space is 𝒜=ℬ. A scenario s ∈ {0,1}^n is a binary vector that represents the numbers contained in the boxes: if scenario s is realized, then for every box i ∈ ℬ, s_i=b_i. A scenario s is covered once a box i with s_i=1 is queried.
Let 𝒮 be a set of scenarios and let 𝒟 be a probability distribution over 𝒮. Let ℱ be a sequence of feedback. A scenario s^* is drawn from 𝒟 initially. In each round t, a learner takes an action a_t ∈ 𝒜 to query box a_t and observes the number contained in that box. Given an instance (ℬ,𝒮,𝒟) of Min Sum Set Cover, the goal of a learner is to construct the sequence of boxes A to query so as to minimize 𝔼_{s ∼ 𝒟} ℓ(A,s), where ℓ(A,s) is the number of boxes in A queried until the drawn scenario s is covered.
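The objective can be made concrete with a small sketch; the scenarios, probabilities and query order below are hypothetical and only illustrate how ℓ(A,s) is evaluated.

def cover_time(order, scenario):
    """Number of queries until a box containing 1 is opened (ell(A, s))."""
    for t, box in enumerate(order, start=1):
        if scenario[box] == 1:
            return t
    return float("inf")                   # the scenario is never covered

def expected_cost(order, scenarios, probs):
    return sum(p * cover_time(order, s) for s, p in zip(scenarios, probs))

scenarios = [(1, 0, 0), (0, 1, 0), (0, 1, 1)]       # hypothetical scenarios
probs = [0.2, 0.5, 0.3]
print(expected_cost([1, 0, 2], scenarios, probs))   # query box 1, then 0, then 2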
The main contribution of this section can be broken down into two parts. In the first part, we give a general strategy to shrink the action space of buying information for a broad class of stochastic optimization problems. For such problems, we show that if an α-prophet inequality holds for an adaptive stochastic optimization problem with time dependent feedback, which we define later, then there is a 2α-competitive learner for buying information for that problem. In the second part, using this idea, we construct an 8-competitive learner for buying information for Min Sum Set Cover (MSSC) by showing a 4-prophet inequality for MSSC with time dependent feedback. Furthermore, we establish information theoretic lower bounds for MSSC under both settings.
§.§ Time Dependent Feedback and Prophet Inequality
A prophet inequality for an adaptive stochastic optimization problem is established when, in each round, a signal from f_t arrives for free. Formally, we have the following model.
Let (𝒜,𝒮,ℓ,𝒟) be an adaptive stochastic optimization problem and let ℱ={f_t}_{t=0}^∞ be a sequence of feedback.
Initially, a scenario s is drawn according to 𝒟. In each time round t, a learner receives a signal y_t(s) from f_t(s) and then takes an action a_t ∈ 𝒜.
An adaptive stochastic optimization problem with time dependent feedback (𝒜,𝒮,ℓ,𝒟,ℱ) is to make decisions that construct a sequence of actions A adaptively so as to minimize 𝔼_{s ∼ 𝒟} ℓ(A,s).
If we denote by cost(L,I) the expected cost of a learner L on a given instance I, then a learner L is α-competitive if for every instance I, cost(L,I) ≤ α·min_{L'} cost(L',I). In particular, here we compete with a learner who knows ℱ in advance. We say a stochastic optimization problem (𝒜,𝒮,ℓ,𝒟) satisfies an α-prophet inequality if there is an α-competitive learner for the corresponding stochastic optimization problem with time dependent feedback.
We have the following theorem to establish the relation between the two problems.
If there is an α-competitive learner for Min Sum Set Cover with Time Dependent Feedback, then there is a 2α-competitive learner for Buying Information for Min Sum Set Cover.
Although Theorem <ref> is stated here for MSSC, the same result holds for a broader class of problems whose loss function can be written as a covering function. Due to space limitations, we leave the general statement and the proof of Theorem <ref> to Appendix <ref>.
§.§ Min Sum Set Cover with Time Dependent Feedback
In this part, we establish a 4-prophet inequality for MSSC with time dependent feedback via the following theorem.
Algorithm <ref>, a simple greedy learner, is 4-competitive for Min Sum Set Cover with time dependent feedback.
Here we give an overview of our proof; the full proof is deferred to Appendix <ref>. Our proof is based on a linear programming approach. Assume the feedback ℱ is known in advance; then the problem becomes assigning a box to each node of the feedback tree T(ℱ) so as to minimize the average number of boxes used to cover the drawn scenario. This problem is naturally lower bounded by a linear program, and every feasible solution to the dual of this linear program gives a lower bound on OPT. We show that a simple greedy algorithm with no knowledge of T(ℱ) can be used to construct a feasible solution to the dual program such that the cost of the greedy algorithm is at most four times the dual objective of the solution it constructs.
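The greedy rule itself is simple to state. The sketch below is an illustration (written with probability weights, which reduces to counting scenarios under the uniform-distribution assumption used in the analysis): restrict attention to the scenarios still consistent with everything observed so far and query the box covering the largest remaining mass.

def greedy_box(boxes, alive, probs):
    """alive: uncovered scenarios (0/1 tuples) consistent with all observations."""
    def covered_mass(box):
        return sum(probs[s] for s in alive if s[box] == 1)
    return max(boxes, key=covered_mass)

# Hypothetical round with three boxes and two surviving scenarios.
probs = {(1, 0, 0): 0.3, (0, 1, 1): 0.7}
print(greedy_box(range(3), list(probs), probs))     # box 1, covering mass 0.7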
By Theorem 13 in <cit.>, we know that for every ϵ>0, it is NP-hard to approximate MSSC within a ratio of 4-ϵ. MSSC is a special case of MSSC with time dependent feedback; thus, the guarantee of Theorem <ref> is tight if we only consider learners that run in polynomial time. However, in classic MSSC, if we allow a learner to run in super-polynomial time, we can simply compute the optimal order of boxes to query by brute force. This raises a natural question: is the knowledge of ℱ useful? We show that such knowledge is indeed useful by giving the following information theoretic lower bound for MSSC with time dependent feedback, which holds for all learners regardless of their running time. The proof is deferred to Appendix <ref>.
For every ϵ>0, there is no deterministic learner that is (2-ϵ)-competitive for Min Sum Set Cover with time dependent feedback.
§.§ Buying Information for Min Sum Set Cover
In the previous section, we established a prophet inequality for MSSC.
In this section, we return to the original motivation of buying feedback for adaptive stochastic optimization and discuss the upper bound and the information theoretic lower bound for MSSC when asking for feedback requires some cost; the formal model of the problem is recalled in Appendix <ref>.
Combining Theorem <ref> and Theorem <ref>, we immediately obtain an efficient competitive learner that buys feedback for Min Sum Set Cover, described in Algorithm <ref>.
There is a poly-time learner that is 8-competitive for buying information for Min Sum Set Cover.
The main goal of this section is to obtain an information theoretic lower bound for buying information for MSSC. We establish it via the following theorem, whose proof is in Appendix <ref>.
For every ϵ>0, there is no deterministic algorithm that is (2-ϵ)-competitive for buying information for Min Sum Set Cover.
§ ACKNOWLEDGEMENTS
This work was supported by the
NSF Award CCF-2144298 (CAREER).
§ EQUIVALENCE OF RANDOMIZED AND DETERMINISTIC SIGNALING SCHEMES
In our model, it is sufficient to study the case when each signaling scheme is deterministic. In this part, we give a brief discussion on the equivalence of randomized and deterministic signaling schemes.
Given a set of scenarios , a distribution over , and a randomized signaling scheme f. We show we can construct a modified triple (',',f') such that f' is a deterministic signaling scheme and (',',f') is equivalent to (,,f). The triple is constructed in the following way. ' contains multiple copies for each s ∈. ' is a uniform distribution over '. For every s ∈, assume the range of f(s) is {y_1(s),…,y_k(s)} and the set of copies is {Y_1(s),…,Y_k(s)} accordingly. The sizes of the copies are made such that if we draw a scenario according to ', the probability that it is a copy of s is equal to the probability of obtaining s from . Furthermore, if we uniformly draw a copy from
{Y_1(s),…,Y_k(s)} the probability that we obtain a copy from Y_i(s) is equal to the probability that we receive y_i(s) from f(s). In this way, we define f'(s')=y_i(s) if s' ∈ Y_i(s). Thus, we obtain an equivalent triple (',',f') with a deterministic signaling scheme.
§ EQUIVALENCE OF SUPER-MARTINGALE STOPPING AND BUYING INFORMATION FOR OFFLINE STOCHASTIC OPTIMIZATION
In this part, we briefly discuss the equivalence of the super-martingale stopping problem and buying information for offline stochastic optimization problems.
We have seen that the super-martingale stopping problem is a special case of buying information for offline stochastic optimization. For the other direction, it remains to show that given an instance I=(𝒜,𝒮,ℓ,𝒟,ℱ,𝒞) of buying information for offline stochastic optimization, we may assume each cost equals 1. We give the intuition via the feedback tree. Let T(ℱ) be a feedback tree. Assume a learner arrives at a node v of T(ℱ); the posterior distribution of the stochastic optimization problem at v is 𝒟_v and the cost to move to the next node v' is c_v. We can add c_v-1 virtual nodes between v and v' such that the posterior distribution at each virtual node is 𝒟_v and the cost to move to the next node is 1. After this modification, we can run any algorithm for the super-martingale stopping problem on the modified instance: we pay c_v to move to v' if and only if we reach v' in the modified instance. In this way, any α-competitive algorithm for the super-martingale stopping problem can be used to construct an α-competitive learner to buy information for offline stochastic optimization problems.
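The reduction can be illustrated with a tiny sketch (hypothetical numbers): an edge of integer cost c is unrolled into c unit-cost steps along which the observed value stays unchanged.

def unroll(path):
    """path: list of (value X at the node, integer cost c to reach the next node)."""
    unit_path = []
    for value, cost in path:
        unit_path.append(value)                    # the real node
        unit_path.extend([value] * (cost - 1))     # c - 1 virtual copies
    return unit_path

print(unroll([(5.0, 3), (2.0, 1), (0.5, 2)]))
# [5.0, 5.0, 5.0, 2.0, 0.5, 0.5]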
§ NATURAL ALGORITHMS FAIL FOR MARTINGALE STOPPING PROBLEM
In this section, we show some natural algorithms that work for ski-rental problems but fail for the super-martingale stopping problem. According to <cit.>, it is well-known that the following algorithm is 2-competitive for the ski-rental problem.
Algorithm <ref> is not competitive for the super-martingale stopping problem.
We construct a sequence of instances I_n of the super-martingale stopping problem. Denote by ALG(I_n) the cost of Algorithm <ref> on instance I_n and by OPT(I_n) the optimal cost of I_n. We will show that ALG(I_n) ≥ H_n·OPT(I_n), where H_n is the n-th harmonic number.
Let (X_0,…,X_n) be the sequence of random variables for instance I_n. Define X_0=1 to be a constant. For every i ≥ 1, X_i takes two possible values: given X_{i-1}, X_i=0 with probability 1/(i+1), and X_i=((i+1)/i)·X_{i-1} with probability i/(i+1). That is to say, X_i is either 0 or i+1, and 𝔼X_i=1.
Assume we run Algorithm <ref> on instance I_n and suppose we have just observed X_i. If X_i=0, then the algorithm stops and pays i right away. If X_i=i+1, then Algorithm <ref> keeps querying X_{i+1}. Denote by i^* the random variable of the stopping time of Algorithm <ref>. Then we have
𝔼 i^* = ∑_{i=1}^n (i/(i+1)) ∏_{j=1}^{i-1} j/(j+1) = ∑_{i=1}^n 1/(i+1) = H_n-1.
On the other hand, we know from the construction of the instance that 𝔼 X_{i^*}=1, since 𝔼 X_i=1 for every i. Thus the total cost of the algorithm is ALG(I_n) = 𝔼 i^* + 𝔼 X_{i^*} = H_n. Meanwhile, OPT(I_n) ≤ 1, since the optimal learner can simply stop at the beginning. This gives ALG(I_n) ≥ H_n·OPT(I_n), which implies that Algorithm <ref> is not competitive.
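The behaviour of the algorithm on the hard instance I_n is easy to simulate. The sketch below is illustrative only: it samples the stopping index, whose average grows logarithmically in n while the optimal cost stays at most 1.

import random

def stopping_index(n):
    """Sample the index at which the algorithm stops on instance I_n."""
    i = 0
    while i < n:
        i += 1
        if random.random() < 1.0 / (i + 1):   # X_i = 0 with probability 1/(i+1)
            return i                          # the algorithm stops here
    return n                                  # survived all n rounds

n, trials = 50, 20000
avg = sum(stopping_index(n) for _ in range(trials)) / trials
harmonic = sum(1.0 / k for k in range(1, n + 1))
print(avg, harmonic)                          # both are Theta(log n); OPT <= 1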
The reason Algorithm <ref> fails is that (X_1,…,X_n) might be an increasing sequence, which forces the algorithm to keep querying forever. To avoid this, a natural idea is to change the stopping rule to take into account the smallest value seen so far. However, it turns out that such a stopping rule still fails. We consider the following algorithm.
Algorithm <ref> is not competitive for the super-martingale stopping problem.
We construct a sequence of instance I_n of the super-martingale stopping problem. Denote by (I_n) the cost of Algorithm <ref> over instance I_n and denote by (I_n) the optimal cost of I_n. We will show that (I_n) ≥Ω(n) (I_n).
Let (X_0,…,X_n,X_n+1) be the sequence of random variables of instance I_n of the super-martingale stopping problem. Define X_0=n and X_n+1=0. For i ∈ [n], X_i can take two possible values. Given X_i-1, X_i=e^nX_i-1 with probability e^-n and X_i=0 with probability 1-e^-n. That is to say for i ∈ [n], X_n = n. Notice that according to the stopping rule of Algorithm <ref>, X_n+1 will never be queried by the algorithm. Thus, we have (I_n) ≥ X_i =n.
On the other hand, we consider an algorithm that keeps querying X_i+1 if X_i ≠ 0. Denote by i^* the stopping time of this algorithm. We know that X_i^* =0. Furthermore, we have
i^* = ∑_i=1^n+1i(1-e^-n)∏_j=1^i-1e^-n≤ e^-n∑_i=1^n+1i ∈ O(1).
This implies that (I_n) ≤ i^*+X_i^*∈ O(1), while (I_n) ∈Ω(n). Thus, Algorithm <ref> is not competitive.
§ A SIMPLE RANDOMIZED ALGORITHM FOR MARTINGALE STOPPING PROBLEM
In this section, we give a simple randomized 2-competitive algorithm for the super-martingale stopping problem.
Algorithm <ref> is 2-competitive for the super-martingale stopping problem.
Let (X_0,…,X_n) be a sequence of random variables, and let T be the tree representation of the sequence. Denote by (T) the optimal cost of the instance and denote by (T) the cost of Algorithm <ref> over the instance. We prove the theorem using inductions on the number of random variables, which is also the depth of T.
If (T)=0, which means there is only one random variable X_0 in the sequence, the cost of any algorithm is X_0 and the theorem holds trivially. Assuming the theorem holds for any tree with depth n-k, we show the theorem holds for any tree with depth n-k-1. Let T be a tree of an instance of super-martingale stopping problem such that (T)=n-k-1. Let v be the root of T and let v^1,…,v^k be the children of v. Denote by T^i the subtree rooted at v^i. By a dynamic programming approach, we know that
(T) = min{v,1+∑_i=1^k (v^i)(T^i)}.
We consider two cases. In the first case, (T)=v. Without loss of generality, we assume v>1, otherwise, the algorithm will simply stop at v. The cost of Algorithm <ref> is
(T) = 1/vv+(1-1/v)(1+∑_i=1^k(v^i)(T^i))
≤1/vv+(1-1/v)(1+2∑_i=1^k(v^i)(T^i))
≤1/vv+(1-1/v)(1+2∑_i=1^k(v^i)v^i)
≤ 1+1-1/v+2v-2 ≤ 2v.
Here, in the first inequality, we use the induction hypothesis, in the third inequality, we use the super-martingale property.
In the second case, (T) = 1+∑_i=1^k (v^i)(T^i). Similarly, we have
(T) = 1/vv+(1-1/v)(1+∑_i=1^k(v^i)(T^i))
≤1/vv+(1-1/v)(1+2∑_i=1^k(v^i)(T^i))
≤ 2( 1+∑_i=1^k (v^i)(T^i) ).
This shows that for every instance with a tree representation T, (T) ≤ 2(T). This implies Algorithm <ref> is 2-competitive.
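For concreteness, the stopping rule analyzed above can be sketched as follows. The rule (stop immediately when the observed value is at most 1, otherwise stop with probability 1/v) is read off from the recursion in the proof, so the sketch should be taken as an illustration rather than a verbatim restatement of Algorithm <ref>.

import random

def randomized_stop(observe, max_rounds=10**6):
    """observe(i) returns the realized X_i; returns the cost i + X_i paid."""
    for i in range(max_rounds):
        x = observe(i)
        if x <= 1.0 or random.random() < 1.0 / x:
            return i + x          # stop with probability 1/x (surely if x <= 1)
        # otherwise pay one more round and query the next value
    raise RuntimeError("no stopping point found")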
§ MISSING PROOFS IN SECTION <REF>
§.§ Proof of Lemma <ref>
We prove this lemma using induction on the depth of v. If v has a depth of n (v is a leaf), then Lemma <ref> follows directly by Lemma <ref>, since Q_p_j^-1(ρ)-Q_p_j^-1(ρ^*) = ∫_ρ^*^ρvdw=(ρ-ρ^*)v. Assume Lemma <ref> holds for every node v' with depth n-k, we show this for a node v with depth n-k-1. We notice that if ρ≤ Q_p_j(n-k), then this is correct by Lemma <ref>. So in the rest of the proof, we assume ρ > Q_p_j(n-k).
Let {u_j}_j=1^ℓ be the set of children of v and let D(u_j) ⊆{p_j}_j=1^k be the set of paths that passes u_j.
then we have
∑_j=1^k(p_j)(Q_p_j^-1(ρ)-Q_p_j^-1(ρ^*))
= ∑_j=1^k(p_j)(n-k-Q_p_j^-1(ρ^*))+∑_j=1^ℓ∑_p ∈ D(u_j)(p)(Q_p_j^-1(ρ)-(n-k))
≤∑_j=1^k(p_j)(Q_p_j(n-k)-ρ^*)v+∑_j=1^ℓ(u_j)(ρ-(Q_p(n-k)))u_j
= (v)(Q_p(n-k)-ρ^*)v+∑_j=1^ℓ(u_j)(ρ-(Q_p(n-k)))u_j
≤(v)(Q_p(n-k)-ρ^*)v+(v)(ρ-(Q_p(n-k)))v
= (v)(ρ-ρ^*)v.
Here, in the first inequality, we use the assumption of induction and in the second inequality, we use the fact that ∑_j=1^ℓ(u_j | v)u_j ≤ u_j.
§.§ Proof of Theorem <ref>
We show Algorithm <ref> is 2-competitive.
Let T be a tree representation of an instance of the super-martingale stopping problem. We maintain two sets of nodes O and A in the following way. For each path p ⊆ T. We travel down p from the root of T and stop traveling at a node v of p if either v is a stopping node of (T) or it is a stopping node of Algorithm <ref>. In the first case, we add v to O, otherwise, we add it to A. We denote by T the subtree of T with the set of leaves A∪O. Furthermore, let P(v) be the path of T that ends at v ∈O∪A.
Then we have the following lower bound for (T).
(T) ≥∑_v ∈O(v)(v+(v) )+∑_v∈A(v)(v)
To upper bound (T), we will need to establish the following inequality and claim.
Let P be a path such that there is some v ∈ P ∩O, then there must be some stopping node f_P(v) ∈ P of (T) that has v as its ancestor. Let D(v) be the set of paths that passes v. We know from the stopping rule of Algorithm <ref> that for every P ∈ D(v), Q_P((f_P(v))) ≤ 1.
By Lemma <ref>, we have
∑_P ∈ D(v)(P)((f_P(v))-(v))≤∑_P ∈ D(v)(P)(Q_P^-1(1)-(v)) ≤(v)(1-Q_P(v))v ≤(v)v.
Furthermore, we next prove the following claim.
Let T' be a subtree of T with the same root of T. Let S be the set of leaves of T'. For each v∈ S, denote by P(v) the path from the root to v. If every path of T has a node in S, then
∑_v ∈ S(v)vQ_P(v)((v)) ≤∑_v ∈ S(v)(v).
Let v ∈ S be a leave of T'. Assume that (v)=i and P(v)=(v_0,…,v_i). Then
vQ_P(v)(i)=v_i∑_j=0^i-11/v_j.
Now we prove this claim by induction on the depth of T'. If T' has a depth of 0, then the claim holds trivially. Now we assume the claim for any tree with depth k, we show this holds for a tree T' with depth k+1. We remove the nodes with depth k+1 in T' and denote by the remaining tree T. Denote by S the leaves of T and denote by K the set of leaves of T with depth k. For every node v, let N(v) be the set of children of v. Then we have
∑_v ∈ Sp(v)vQ_P(v)((v)) = ∑_v ∈S(v) vQ_P(v)((v))
+ ∑_v ∈ K(v)∑_u ∈ N(v)((u | v)(uQ_P(u)(k+1)-vQ_P(v)(k)))
≤∑_v ∈S(v)(v) + ∑_v ∈ K(v)∑_u ∈ N(v)((u | v)(uQ_P(u)(k+1)-vQ_P(v)(k)))
≤∑_v ∈S(v)(v) + ∑_v ∈ K(v)∑_u ∈ N(v)((u | v)u(Q_P(u)(k+1) - Q_P(v)(k)))
= ∑_v ∈S(v)(v) + ∑_v ∈ K(v) = ∑_v ∈ S(v)(v).
Here, the first inequality follows by our induction, the second inequality follows by the super-martingale property and the second equality follows by (<ref>).
This gives the following upper bound for (T).
(T) ≤∑_v ∈O((v)(v+(v))+ ∑_P ∈ D(v)(P)((f_P(v))-(v))) + ∑_v ∈A(v)(v+(v))
≤∑_v ∈O(v) ((v)+2v) + ∑_v ∈A(v)(v+(v))
≤∑_v ∈O(v) ((v)+2v) + ∑_v ∈A(v)(v(Q_P(v)((v))+1)+(v))
≤∑_v ∈O(v) ((v)+2v) + ∑_v ∈A(v)(v(Q_P(v)((v))+1)+ (v) )
+ ∑_v ∈O(v)(vQ_P(v)((v)) )
≤ 2∑_v ∈O(v)v+2∑_v ∈A(v)v + 2 ∑_v∈A(v)(v)+2∑_v∈A(v)(v)
≤ 2(T).
Here, in the first inequality, we used the super-martingale property of T. In the second inequality, we use (<ref>). In the third inequality, we use the stopping rule of Algorithm <ref>. The second last inequality follows by Claim <ref>.
§.§ Proof of Theorem <ref>
We show Algorithm <ref> is e/(e-1)-competitive.
Let T be a representation of an instance of the super-martingale stopping problem. Let S={v^*_j}_j=1^k be the set of stopping nodes of (T). Let u ∈ S and let P(u) ⊆ T be the path from the root to u.
We notice that we can assume the depth of u is at most Q^-1_P(u)(1). Since the cost of Algorithm <ref> only depends on the value of nodes with depth strictly less than Q^-1_P(u)(1), we can assume every node with a depth larger than Q^-1_P(u)(1) has a value of 0. This assumption doesn't affect the cost of Algorithm <ref> but will force every u∈ S has depth at most ⌈ Q^-1_P(u)(1)⌉. Under this assumption, if a node u has depth exactly ⌈ Q^-1_P(u)(1)⌉, we can furthermore assume the contribution of u to the cost of (T) is Q^-1_P(u)(1). This will only decrease the cost of (T). So in the rest of the proof, every u in S has a depth at most Q^-1_P(u)(1). In particular, this implies for every u ∈ S, there exists some ρ_u∈ [0,1] such that Q_P(u)((u)) = ρ_u. Thus, we can write
(T) = ∑_u ∈ S(u) ( u + Q^-1_P(u)(ρ_u) ).
On the other hand, we can decompose the cost of the algorithm according to S. For every u ∈ S, we define D(u) to be the set of paths in T from the root to a leaf that passes u. Then we can write the cost of the algorithm
(T) ≤∑_u ∈ S(u)∑_p'∈ D(u)(p'| u)∫_0^1(v_p'(Q_p'^-1(ρ))+Q_p'^-1(ρ))p(ρ)dρ,
where we use the fact that when the algorithm stops at time t, the depth of the stopping node is at most t. This implies
(T)-(T) ≤∑_u ∈ S(u) ∫_0^1∑_p'∈ D(u)(p'| u)[(v_p'(Q_p'^-1(ρ))+Q_p'^-1(ρ))-( u + Q^-1_P(u)(ρ_u) )]p(ρ)dρ
= ∑_u ∈ S(u) ∫_0^ρ_u∑_p'∈ D(u)(p'| u)[(v_p'(Q_p'^-1(ρ))+Q_p'^-1(ρ))-( u + Q^-1_P(u)(ρ_u) )]p(ρ)dρ
- ∑_u ∈ S(u) ∫_ρ_u^1∑_p'∈ D(u)(p'| u)[( u + Q^-1_P(u)(ρ_u) )-(v_p'(Q_p'^-1(ρ))+Q_p'^-1(ρ))]p(ρ)dρ
≤∑_u ∈ S(u) ∫_0^ρ_u∑_p'∈ D(u)(p'| u)[(v_p'(Q_p'^-1(ρ))+Q_p'^-1(ρ))-( u + Q^-1_P(u)(ρ_u) )]p(ρ)dρ
+∑_u ∈ S(u) ∫_ρ_u^1∑_p'∈ D(u)(p'| u)( Q_p'^-1(ρ) - Q^-1_P(u)(ρ_u))p(ρ)dρ
≤∑_u ∈ S(u) ∫_0^ρ_u∑_p'∈ D(u)(p'| u)[(v_p'(Q_p'^-1(ρ))+Q_p'^-1(ρ))-( u + Q^-1_P(u)(ρ_u) )]p(ρ)dρ
+∑_u ∈ S(u) ∫_ρ_u^1(ρ-ρ_u)up(ρ)dρ
= ∑_u ∈ S(u) ∫_0^ρ_u(v_P(u)(Q_P(u)^-1(ρ))+Q_P(u)^-1(ρ)- Q^-1_P(u)(ρ_u) )p(ρ)dρ
+ ∑_u ∈ S(u) ( ∫_ρ_u^1(ρ-ρ_u)up(ρ)dρ -∫_0^ρ_uup(ρ)dρ).
Here, the second inequality follows the super-martingale property of the sequence of random variables starting from node u. The second inequality follows by Lemma <ref>.
Recall our goal is to show that (T)-e/e-1(T) = (T)-(T)-1/e-1(T) ≤ 0. For every u ∈ S, we define two functions F_u(s) and G_u(τ) as follows. Let
F_u(s) := ∫_0^s(v_P(u)(Q_P(u)^-1(ρ))+Q_P(u)^-1(ρ)- Q^-1_P(u)(s) )p(ρ)dρ - 1/e-1Q^-1_P(u)(s)
and
G_u(τ) := ∫_ρ_u^1(ρ-ρ_u)τ p(ρ)dρ -∫_0^ρ_uτ p(ρ)dρ - 1/e-1τ.
From our above discussion, we know that
(T)-e/e-1(T) = (T)-(T)-1/e-1(T) ≤∑_u ∈ S(u)(F_u(ρ_u)+G_u(u)).
It is sufficient to show for every u, F_u(s) ≤ 0, ∀ s ∈ [0,ρ_u] and G_u(τ) ≤ 0, ∀τ≥ 0. We first look at F_u(s). Recall the definition of the density function is p(ρ)= e^ρ/e-1. We know from Lemma <ref> that
F_u(s) = ∫_0^s (v_P(u)(Q_P(u)^-1(ρ)) - ∫_ρ^sv_P(u)(Q_P(u)^-1(w)dw )p(ρ)dρ - 1/e-1∫_0^sv_P(u)(Q_P(u)^-1(w)dw
= ∫_0^rv_P(u)(Q_P(u)^-1(w)dwe^r/e-1|_r=0^r=s-∫_0^s∫_0^ρ v_P(u)(Q_P(u)^-1(w)dwp(ρ)dρ
-∫_0^s∫_ρ^sv_P(u)(Q_P(u)^-1(w)dwp(ρ)dρ- 1/e-1∫_0^sv_P(u)(Q_P(u)^-1(w)dw
=e^s/e-1∫_0^sv_P(u)(Q_P(u)^-1(w)dw-∫_0^s∫_0^sv_P(u)(Q_P(u)^-1(w)dwp(ρ)dρ - 1/e-1∫_0^sv_P(u)(Q_P(u)^-1(w)dw
= ( e^s/e-1-e^s-1/e-1-1/e-1)∫_0^sv_P(u)(Q_P(u)^-1(w)dw =0.
Then we look at G_u(τ). We have
dG_u(τ)/dτ = ∫_ρ_u^1(ρ-ρ_u)p(ρ)dρ-∫_0^ρ_up(ρ)dρ-1/e-1
= ρ e^ρ/e-1|_ρ=ρ_u^ρ=1-∫_ρ_u^1p(ρ)dρ -ρ_u(e-e^ρ_u)/e-1-∫_0^ρ_up(ρ)dρ-1/e-1
= -ρ_ue/e-1≤ 0.
This implies that G_u(τ) ≤ G_u(0)=0. Put the above arguments together, we obtain (T) ≤e/e-1(T).
§.§ Proof of Theorem <ref>
Let I be an instance of super-martingale stopping problem. Let (I)=min(i^*+X_i^*) and '= min_i (i+X_i). We will construct a sequence of instance I_N such that (I_N) ≥Ω(N) '(I_N), showing that the gap between the two benchmarks can be arbitrarily large.
Let (X_0,X_1,…) be the sequence of random variables of instance I_N. Define X_0=N. For every i ≥ 1, X_i can take two possible values. Given X_i, X_i+1=e^NX_i with probability e^-N and X_i+1=0 with probability 1-e^-N. That is to say, the sequence of random variables is a super-martingale with a mean equal to N. Thus, the optimal stopping rule is to simply stop at X_0 and (I_N)=N. On the other hand, consider any realization (x_0,x_1,…) of the sequence. We notice from the construction that if x_i=0 then for every j>i, x_j=0. Denote by i' the smallest index such that x_i'=0. Then we have min i+x_i = i' if i' ≤ N and min i+x_i = N if i'>N. Since (i'=i)=(1-e^-N)e^-(i-1)N, we have
'(I_N) = ∑_i=1^N i(1-e^-N)e^-(i-1)N + ∑_i=N+1^∞ N(1-e^-N)e^-(i-1)N∈ O(1).
This implies (I_N) ≥Ω(N) '(I_N).
§.§ Proof of Theorem <ref>
It is sufficient to show that if we run Algorithm <ref> or Algorithm <ref> then (T) ≤α(T). Recall that the only difference between Algorithm <ref> and Algorithm <ref> is that they use different threshold ρ. Algorithm <ref> uses ρ=1 and Algorithm <ref> uses a random threshold.
Let ρ∈ [0,1] be a realization of the random threshold used in Algorithm <ref> and Algorithm <ref>. We denote by _ρ(T) and _ρ(T) the cost of the Algorithm on the corresponding instances with a threshold ρ. In the rest of the proof, we will show _ρ(T) ≤α_ρ(T) for every ρ∈ [0,1]. This will directly imply that (T) ≤α(T).
Since the only difference between T and T is the value of each node, let p=(v_0,v_1,…,v_n) be a path in T, we define v_p(t) = v_i if t ∈ [i,i+1). We can also define Q_p(t) and Q_p^-1(t) in the similar way. Using these notations, we have
_ρ(T) = _p (Q^-1_p(ρ) + v_p(Q^-1_p(ρ)))
= ∫_0^ρ_p v_p(Q^-1_p(w))dw + _p v_p(Q^-1_p(ρ)
≥ρ_p v_p(Q^-1_p(ρ))+_p v_p(Q^-1_p(ρ)).
Here, the second equality follows by Lemma <ref> and the inequality follows by the super-martingale property of T.
On the other hand, for every path p, since for every v∈ T, v≥ v, we know that Q_p^-1(ρ) ≥ Q^-1_p(ρ). If we denote by t'=Q^-1_p(ρ), then this implies there exists some ρ' ≤ρ such that Q_p(t')=ρ'. In particular, since v_p(t) ≤α v_p(t) for every t, it follows that ρ' ≥1/αρ.
Thus, we can write
_ρ(T) = _p ( Q^-1_p(ρ)+Q^-1_p(ρ)-Q^-1_p(ρ') +v_p(Q^-1_p(ρ)) )
= _p Q^-1_p(ρ)+ _p ∫_ρ'^ρv_p(Q^-1_p(w))dw + _pv_p(Q^-1_p(ρ))
≤_p Q^-1_p(ρ) + α (ρ-ρ') _p v_p(Q^-1_p(ρ))+α_p v_p(Q^-1_p(ρ))
≤_p Q^-1_p(ρ) + (α -1)ρ_p v_p(Q^-1_p(ρ))+α_p v_p(Q^-1_p(ρ)).
Here, the equality follows Lemma <ref>, the first inequality follows by the super-martingale property of the T and the second inequality follows by the fact that ρ' ≥1/αρ.
Thus, we obtain
_ρ(T)/_ρ(T)≤_p Q^-1_p(ρ) + (α -1)ρ_p v_p(Q^-1_p(ρ))+α_p v_p(Q^-1_p(ρ))/_p Q^-1_p(ρ) + _p v_p(Q^-1_p(ρ))≤max{α,1+(α-1)ρ_p v_p(Q^-1_p(ρ))/_p Q^-1_p(ρ)}.
Here we use the fact that if a,b,c,d ≥ 0, then a+b/c+d≤max{a/c,b/d}. By Lemma <ref> and the super-martingale property, we know that
_p Q^-1_p(ρ) = ∫_0^ρ_p v_p(Q^-1_p(w))dw ≥ρ_p v_p(Q^-1_p(ρ)).
This implies
1+(α-1)ρ_p v_p(Q^-1_p(ρ))/_p Q^-1_p(ρ)≤ 1+ α-1 = α.
Thus, _ρ(T) ≤α_ρ(T) for every ρ∈ [0,1].
§ MISSING PROOFS IN SECTION <REF>
§.§ Proof of Theorem <ref>
As we mentioned in the main body of the paper, Theorem <ref> not only holds for MSSC but also holds for a broader class of stochastic optimization problems.
In this part, we give the general statement and the proof for Theorem <ref>. To begin with, we define a broad class of adaptive stochastic optimization problems for which Theorem <ref> holds.
Let (,,ℓ,) be an adaptive stochastic optimization problem. For every s ∈, we define a family of sets of actions C(s) ⊆ 2^. We say a scenario s is covered if a set of actions A ∈ C(s) are taken. We say the loss function ℓ is a covering loss if for every scenario s ∈ and every sequence of actions a⃗ = (a_1,a_2,…), ℓ(a⃗,s) = min{t |∃ A ∈ C(s), A ⊆{a_1,…,a_t}}, which is the time for a⃗ to cover s.
Many adaptive stochastic optimization problems such as MSSC and optimal decision tree problems have covering objective functions. Next, we give a general statement of Theorem <ref>, which builds a connection between adaptive stochastic optimization with time dependent feedback and buying information for adaptive stochastic optimization.
Let (,,ℓ,) be an adaptive stochastic optimization problem with a covering loss function. If (,,ℓ,) satisfies an α-prophet inequality, then there is a 2α-competitive learner for the adaptive stochastic optimization problem with feedback (,,ℓ,,,).
Denote by I=(,,ℓ,,,) the instance of buying information for stochastic optimization and (I) be the optimal value of the instance. Since (,,ℓ,) satisfies an α-prophet inequality, let be an α-competitive learner for the stochastic optimization problem with feedback. At time round t, denote by R_t the set of outcomes received after taking a set of actions and denote by Y_t a set of signals received from the signaling schemes. Notice that R_t,Y_t are random sets that depend on the random scenarios s. Then a=(R_t,Y_t) is the next action taken by the learner . Based on the notations, we design the following algorithm, which will be shown as 2α-competitive for I=(,,ℓ,,,).
We notice that during the execution of Algorithm <ref>, we count the time round in a different way for convenience. This doesn't affect the final cost of the algorithm. We now decompose the cost of Algorithm <ref> into two parts. Denote by ' Algorithm <ref>. For each scenario s ∈, let (s) be the total cost of ' when s is drawn. We write
(s) = _f(s)+_c(s),
where _f(s) is the feedback cost, the total cost ' spends on buying signals when s is drawn and _c(s) is the coverage cost, the number of actions taken by ' to cover s. That is to say
(',I) = _s ∼(s)= _s ∼_f(s)+ _s ∼_c(s)
≤ 2 _s ∼_c(s),
since the feedback cost is always less than the coverage cost.
In the rest of the proof, we will construct an instance I=(,,ℓ,,') of stochastic optimization with time dependent feedback based on I such that _s ∼_c(s) ≤α(I) and (I) ≤(I). We construct the feedback ' by constructing every possible sequence of signals received from '. Let (y_0,y_1,…) be a sequence of signals received from the signaling schemes (f_0,f_1,…). Let c_t(y_t) be the cost to obtain signal y_t+1 for the sequence. Then for every t ≥ 0, we make c_t(y_t) copies for y_t. Thus, the corresponding signals sequence in ' is (y_0,…,y_0,y_1,…,y_1,…), where y_t appears c_t(y_t) times.
Now we consider Algorithm <ref>. If we ignore the step where we pay c_t to get y_t+1, then the remaining algorithm is exactly running over I=(,,ℓ,,'). Since is an α-competitive learner for I, we know that
_s ∼_c(s) = (,I) ≤α(I).
It remains to show that (I) ≤(I). Consider instance I. Assume s is the drawn scenario and the corresponding sequence of signals is (y_0,y_1,…). Assume the sequence of actions taken in (I) for this sequence of signals is (a_0,a_1,…). We construct a sequence of actions taken for instance I with sequence of signals (y_0,…,y_0,y_1,…,y_1,…) by modifying (a_0,a_1,…). Assume that in (I), the learner pays a cost c to obtain the next signal after taking actions a_i. Then in the modified sequence, we take c arbitrary actions after a_i. For every drawn scenario s, the modified sequences can take less cost to cover s. This implies that (I) ≤(I). Putting things together, we have
(',I) ≤ 2 _s ∼_c(s) ≤ 2α(I) ≤ 2α(I),
which means Algorithm <ref> is 2α-competitive for adaptive stochastic optimization with feedback.
§.§ Proof of Theorem <ref>
Before presenting the proof, we recall the definition of the model of MSSC with time dependent feedback as a reminder.
Let =[n] be a set of boxes, each box i contains an unknown number b_i ∈{0,1}
A learner can know b_i by querying box i, i.e. the action space =. A scenario s ∈{0,1}^n is a binary vector that represents the number contained in each box. If scenario s is realized, then for every box i ∈, s_i=b_i. A scenario s is covered if a box i such that s_i=1 is queried.
Let be a set of scenarios and be a probability distribution over . Let f be a sequence of feedback. A scenario s^* is drawn from initially. In each round t, a learner receives signal y_t(s^*) from signaling scheme f_t and takes an action a_t ∈ to query the box a_t and observed the number contained in that box. Given an instance (,,,) of Min Sum Set Cover with Time Dependent Feedback, the goal of a learner is to construct the sequence of boxes A to query to minimize _s∼ℓ(A,s), where ℓ(A,s) is the number of boxes to query to cover the drawn scenario s.
As a reminder, we restate Algorithm <ref>, the simple greedy algorithm that we analyze here.
Without loss of generality, we can assume is a uniform distribution over and contains only deterministic signaling schemes.
This is because given a distribution , we can modify by making multiple copies of each scenario and uniformly draw a scenario from the modified set of scenarios according to our discussion in Appendix <ref>.
Under this assumption, we will write a linear program to lower bound (I). We say a scenario s is covered by a box i if s_i=1. For every scenario, s, denote by L_s the set of boxes that cover s.
Fix a sequence of feedback , let T() be the feedback tree induced by . Let P(s) be the longest path in T() such that s is contained in every node in P(s).
Any learner will assign a box to each node in T() such that for every scenario s, there is some node v ∈ P(s) such that the box assigned to v by covers s.
We derive the following integer program to capture the cost of a learner. For every node v ∈ T() and for every box i ∈, let x_vi∈{0,1} be the indicator if assigns box i to node v. For every node, v ∈ T() and for every scenario s ∈ v, let y_vs∈{0,1} be the indicator if s is not covered by any box assigned to an ancestor of v. Here, we use the notation v'<v to denote that v' is an ancestor of v. For every scenario s, let (s):= ∑_v ∈ P(s)y_vs, which is the time when s is first covered by an assigned box. Then, any learner gives a feasible solution to the following integer program.
IPmin_x,y ∑_s ∈(s)
∑_i ∈x_vi≤ 1 ∀ v ∈ T()
y_vs+∑_i ∈ L_s∑_v':v'<vx_v'i≥ 1 ∀ v ∈ T(), ∀ s ∈ v
x_vi∈{0,1} ∀ v∈ T(), ∀ i ∈
y_vs∈{0,1} ∀ v ∈ T(), ∀ s ∈.
Here, the first set of constraints implies that for any node v, any learner can assign at most 1 box. The second set of constraints implies that for every node v and every s ∈ v, either s has not been covered so far or there is an ancestor v' of v that is assigned a box i ∈ L_s by learner .
In particular, since is uniform over , (,I) = ∑_s ∈(s). Thus, the following linear programming relaxation gives a natural lower bound for S(I).
LPmin_x,y ∑_s ∈(s)
∑_i ∈x_vi≤ 1 ∀ v ∈ T()
y_vs+∑_i ∈ L_s∑_v':v'<vx_v'i≥ 1 ∀ v ∈ T(), ∀ s ∈ v
x_vi≥ 0 ∀ v∈ T(), ∀ i ∈
y_vs≥ 0 ∀ v ∈ T(), ∀ s ∈ v.
Let {A_v}_v ∈ T() be the set of dual variables for the first set of constraints in (<ref>) and let {B_vs}_v ∈ T(),s ∈ v be the set of dual variables for the second set of constraints in (<ref>). Then we derive the following dual linear program for (<ref>).
DUALmax_A,B ∑_v ∈ T()∑_s ∈ v B_vs - ∑_v ∈ T()A_v
B_vs≤ 1 ∀ v ∈ T()
∑_s:s∈ v, i ∈ L_s∑_v':v<v'B_v's≤ A_v ∀ v ∈ T(), ∀ i ∈
B_vs≥ 0 ∀ v∈ T(), ∀ s ∈ v
A_v≥ 0 ∀ v ∈ T().
Since (<ref>) is feasible and bounded, we know from linear programming dual theory that (<ref>) is feasible, furthermore, (<ref>) and (<ref>) have the same optimal value. Denote by D the optimal value of (<ref>), then we know that S(I) ≥ D.
Next, we will show that there is an optimal solution to (<ref>) that has a special structure. We have the following observations.
Let (A,B) be any feasible solution to (<ref>). For every v ∈ T(), let A'_v = max_i ∑_s: s∈ v, i ∈ L_s∑_v':v<v'B_v's. For every v ∈ T(), s ∈ v, let B'_vs=B_vs. Then (A',B') is feasible to (<ref>), furthermore, (A',B') has a larger objective value than (A,B).
The proof of Observation <ref> follows directly by the second set of constraints in (<ref>).
Let (A,B) be any feasible solution to (<ref>). For every s ∈, let C_s:=∑_v ∈ P(s)B_vs. For every s∈ and for every v ∈ P(s), define
B'_vs = 1 if (v) < ⌊ C_s⌋,
C_s - ⌊ C_s ⌋ if (v) = ⌊ C_s ⌋,
0 otherwise.
For every v ∈ T(), define A'_v = max_i ∑_s: s∈ v, i ∈ L_s∑_v':v<v'B'_v's. Then (A',B') is feasible to (<ref>) and (A',B') has a larger objective value than (A,B).
The feasibility of (A',B') follows by Observation <ref>. Thus, we only need to show (A',B') has a larger objective value. We notice that
∑_v ∈ T()∑_s ∈ v B_vs = ∑_s ∈∑_v ∈ P(s)B_vs = ∑_s ∈C_s = ∑_s ∈∑_v ∈ P(s)B'_vs= ∑_v ∈ T()∑_s ∈ v B'_vs.
It remains to show that ∑_v ∈ T()A_v ≥∑_v ∈ T()A'_v. By Observation <ref>, we may assume A_v = max_i ∑_s: s∈ v, i ∈ L_s∑_v':v<v'B_v's for every v ∈ T(). It is sufficient to show for every v and every s ∈ v, ∑_v' ∈ P(s):v<v'B_v's≥∑_v' ∈ P(s):v<v'B'_v's. We have
∑_v' ∈ P(s):v<v'B_v's = C_s - ∑_v' ∈ P(s):v'≤ vB_v's≥ C_s - ∑_v' ∈ P(s):v'≤ vB_v's = ∑_v' ∈ P(s):v<v'B'_v's.
Observation <ref> and Observation <ref> imply that an optimal solution to (<ref>) can be constructed in the following way. For each s∈, assign C_s ≥ 0 to s. For every s∈ and for every v ∈ P(s), define
B_vs = 1 if (v) < ⌊ C_s⌋,
C_s - ⌊ C_s ⌋ if (v) = ⌊ C_s ⌋,
0 otherwise.
For every v ∈ T(), define A_v = max_i ∑_s: s∈ v, i ∈ L_s∑_v':v<v'B_v's. Let (A,B) be such a solution constructed in the way we discussed above using a vector C ∈ R_+^. Denote by D(C) the objective value of (A,B). Then we have
D(C) = ∑_v ∈ T()∑_s ∈ v B_vs - ∑_v ∈ T()max_i ∈∑_s: s∈ v, i ∈ L_s∑_v':v<v'B_v's
= ∑_s ∈C_s - ∑_v ∈ T()∑_i ∈∑_s: s∈ v, i ∈ L_s∑_v':v<v'B_v's1_vi
= ∑_s ∈C_s - ∑_v ∈ T()∑_i ∈∑_v':v<v'∑_s: s∈ v, i ∈ L_sB_v's1_vi
= ∑_s ∈C_s - ∑_v ∈ T()∑_v':v<v'∑_i ∈∑_s: s∈ v, i ∈ L_sB_v's1_vi
= ∑_s ∈C_s - ∑_v' ∈ T()∑_v:v<v'∑_i ∈∑_s: s∈ v, i ∈ L_sB_v's1_vi,
where 1_vi is the indicator function if i ∈max_j ∑_s: s∈ v, j ∈ L_s∑_v':v<v'B_v's. For convenience, we assume there is only one box that achieves the max.
We interpret D(C) via the following physical process. For each scenario s ∈, we generate a particle 𝐏_s. 𝐏_s moves along the path P(s) with a rate of 1 and stops at time t=C_s. The length of an edge in T() is 1. Let V_s(t) be the speed of 𝐏_s at time t. That is to say C_s = ∫_0^∞ V_s(t) dt for every s ∈.
From this point of view, we can write the first term in D(C) as
∑_s ∈ C_s = ∫_0^∞∑_s ∈V_s(t) dt.
On the other hand, for each node v ∈ T(), there is a box i(v) such that 1_vi(v)=1. At a given time t, we will charge each moving particle G_s(t):={v | v ∈ P(s), (v) ≤ t, i(v) ∈ L_s}. In other words, for every moving particle, we will charge it the number of visited nodes v such that box i(v) covers the corresponding scenario. Next, we build a connection between G_s(t) and D(C). For every s ∈, write P(s) = (v_0,v_1,…,v_n). Then
we have
G_s(t) = ∑_i ∈ L_s∑_v ∈ P(s): (v) ≤ tV_s(t)1_vi = ∑_i ∈ L_s∑_j=0^⌊ t⌋V_s(t)1_v_ji,
which implies
∫_0^∞ G_s(t) dt = ∫_0^∞∑_i ∈ L_s∑_j=0^⌊ t⌋V_s(t)1_v_jidt = ∑_v' ∈ P(s)∑_v: v ≤ v'∑_i ∈ L_sB_v's1_vi,
according to the construction of B. Thus, we can write the second term in D(C) as
∑_v' ∈ T()∑_v:v<v'∑_i ∈∑_s: s∈ v, i ∈ L_sB_v's1_vi≤∫_0^∞∑_s ∈G_s(t)dt.
Combine (<ref>) and (<ref>), we get
D(C) ≥∫_0^∞∑_s∈ V_s(t) - ∑_s ∈G_s(t) dt.
In the rest of the proof, instead of constructing the optimal solution to (<ref>), we will construct a vector C_g based on Algorithm <ref> such that S(I) ≤ 4D(C^g) ≤ 4D ≤ 4S(I), which implies that Algorithm <ref> is 4-competitive.
Consider the implementation of Algorithm <ref>, the greedy algorithm. We notice that if we arrive at some node v ∈ T(), the set of scenarios S_t we received is exactly R_v, the set of scenarios s ∈ v that has not been covered so far. Since is uniform, the box (v) queried by the algorithm at node v is the box that can cover most scenarios in R_v.
Denote by X_v={s ∈ R_v |(v) ∈ L_s}, which is the scenarios in R_v covered by the box that Algorithm <ref> queries in this round. Now we define C_s for each scenario s. Let P=(v_0,v_1,…,v_n) be a path of T() from the root to some leaf. For each v_i ∈ P, define C_v_i=max{C_v_i-1,R_v_i/cX_v_i}, where c>0 is a constant that we will determine later and C_v_0 = R_v_0/cX_v_0.
Notice that {X_v}_v ∈ T() forms a partition of , so each s belongs to a unique X_v. For every node v ∈ T() and for every s ∈ v, we set C_s = C_v. Denote by C^g the vector we just constructed. We next show that S(I) ≤ 4D(C^g).
Notice that
(I) = ∑_v∈ T()((v)+1)X_v = ∑_v∈ T()R_v.
Based on this observation, we first derive the following lower bound for ∫_0^∞∑_s ∈V_s(t) dt. We have
∫_0^∞∑_s ∈V_s(t) dt = ∑_v ∈ T()∑_s ∈ X_v∫_0^∞ V_s(s)dt=∑_v ∈ T()∑_s ∈ X_vC_s ≥1/c∑_v ∈ T()R_v=1/c(I).
Next, we will show that ∫_0^∞∑_s ∈G_s(t)dt ≤1/c∫_0^∞∑_s ∈V_s(t) dt. To do this, we upper bound ∑_s G_s(t) for every t ≥ 0. For every s ∈, let P^t(s)=(v_0(s),v_1(s),…,v_⌈ t ⌉(s)) be the truncation of path P(s) with length of ⌈ t ⌉. We know that for every t, P^t, the set of such truncated paths, forms a partition of . So we can write ∑_s ∈G_s(t) = ∑_P ∈ P^t∑_s ∈ PG_s(t).
Let P=(v_0,…,v_⌈ t⌉) ∈ P^t be such a truncated path. The set of particles that are moving along P corresponds to scenarios in v_⌈ t ⌉ with C_s ≥ t. We observe that along the path P, C_v_i is a step function with respect to the index i. Based on the definition of C_s, for every v and every s ∈ R_v, we have C_s ≥ C_v. This implies that along the path P, there must be some i^* ≤⌊ t ⌋ such that the set of particles that are moving along P at time t corresponds to scenarios exactly in R_v_i^*∩ v_⌈ t⌉. In particular, if we consider the set P^t(i^*) of all paths in P^t that passes v_i^*, then at time t, the set of particles moving along these paths is exactly R_v_i^*.
By the greedy property of Algorithm <ref>, every box can cover at most X_v_i^* scenarios from R_v_i^*. Since each path P ∈ P^t(i^*) contains at most t nodes and each node is charged by at most X_v_i^* moving particles at time t, we have
∑_P ∈ P^t(i^*)∑_s ∈ PG_s(t) ≤ t X_v_i^*≤R_v_i^*/cX_v_i^*X_v_i^* = 1/cR_v_i^* = 1/c∑_P ∈ P^t(i^*)∑_s ∈ PV_s(t).
Here, the second inequality follows by C_v_i^* = R_v_i^*/cX_v_i^*≥ t. The last equality holds because ∑_P ∈ P^t(i^*)∑_s ∈ PV_s(t) is the number of moving particles along paths in P^t(i^*), which is R_v_i^*. Thus we have
∫_0^∞∑_s ∈G_s(t)dt ≤1/c∫_0^∞∑_s ∈V_s(t) dt.
Put the above discussions together, we have
D(C^g) ≥∫_0^∞∑_s∈ V_s(t) - ∑_s ∈G_s(t) dt ≥ (1-1/c)∫_0^∞∑_s ∈V_s(t) dt ≥1/c(1-1/c)(I) = 1/4(I),
by setting c=2 to maximize the ratio. This shows Algorithm <ref> is 4-competitive.
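For intuition, the greedy rule analyzed above can be made concrete with a short simulation. The sketch below is our own illustration (the encoding of scenarios, boxes, and the feedback partition is hypothetical and not taken from the paper): in every round the learner receives the current feedback cell, discards scenarios that are inconsistent with it or already ruled out, and queries the box covering the most remaining scenarios.

```python
def greedy_box(boxes, covers, alive):
    """Return the box covering the most scenarios that are still 'alive',
    i.e. not yet covered and consistent with all feedback so far (the set R_v)."""
    return max(boxes, key=lambda b: sum(b in covers[s] for s in alive))


def run_greedy(drawn, boxes, covers, scenarios, feedback):
    """Simulate one run; returns the number of boxes queried to cover `drawn`.

    drawn:     id of the realized scenario.
    covers:    dict scenario id -> set of boxes covering it.
    scenarios: all scenario ids (uniform prior).
    feedback:  feedback(t, drawn) -> ids in the same cell as `drawn` of the
               time-t partition (one node of the feedback tree T(F))."""
    alive = set(scenarios)
    t = 0
    while True:
        t += 1
        alive &= feedback(t, drawn)          # time-dependent feedback signal
        b = greedy_box(boxes, covers, alive)
        if b in covers[drawn]:               # realized scenario covered: stop
            return t
        # b did not cover the target, so scenarios covered by b are ruled out
        alive = {s for s in alive if b not in covers[s]}


# Tiny usage example: n singleton scenarios, completely uninformative feedback.
n = 5
covers = {s: {s} for s in range(n)}
no_info = lambda t, drawn: set(range(n))
costs = [run_greedy(s, range(n), covers, range(n), no_info) for s in range(n)]
print(sum(costs) / n)                        # expected cover time (n + 1) / 2 = 3.0
```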
§.§ Proof of Theorem <ref>
We consider the following instance of min sum set cover with time-dependent feedback. Let be the set of n boxes. The set of scenarios is = {s^i}_i=1^n, where s^i_j=1 if i=j and 0 otherwise, and the prior is the uniform distribution over . Let be any deterministic learner. We design a set of feedback _ such that (,I) = n/2+O(1), while there is a learner ' such that (',I)=n/4+O(1). Here, I=(,,,_) and (,I) is the cost of a learner over instance I.
We describe _A via its feedback tree representation T(_). We first fix the structure of T(_) and then define the scenarios contained in each node of T(_). Let T(_) be a binary tree. For a node v in T(_), we denote by L(v) its left child and by R(v) its right child. Let {v_i}_i=1^n be a path of T(_) such that v_i+1=R(v_i) and v_1 is the root of T(_). We now define the set of scenarios contained in each node of T(_). We know that v_1 contains all scenarios. Let _i=(v_i) be the box queried by the learner at node v_i. We define L(v_i)={s__i} and R(v_i) = v_i ∖ L(v_i). This gives the definition of _.
Intuitively, every time the learner queries a box, the feedback only reveals whether it has queried the unique box that contains 1. That is to say, the feedback is useless to the learner, and its cost is
(,I) = 1/n∑_j=1^n j = (n+1)/2.
On the other hand, let ' be the following learner. Let '(v_i) = _n+1-i for i ∈ [n], and '(L(v_i))=_i. That is, along the path {v_i}_i=1^n, the order of the boxes queried by ' is the reverse of that of , and at every node L(v_i), ' queries the box corresponding to the unique scenario contained in L(v_i). This implies
(',I) = 1/n+ ∑_j=2^n-1/22j/n = n^2-5/4n.
Thus, we have (,I)/(',I)→ 2, which implies that no deterministic learner is (2-ϵ)-competitive for any ϵ>0.
§.§ Proof of Theorem <ref>
Before presenting the proof, we recall the definition of buying information for MSSC.
Let (,,) be an instance of Min Sum Set Cover, ={f_t}_t=0^∞ be a sequence of feedback and
={c_t}_t=0^∞ be a sequence of costs, where c_t is the cost of receiving a signal from f_t+1. Initially, a scenario s is drawn from the prior. In each round t, before s is covered, a learner adaptively receives an arbitrary number of signals from the sequence by paying the corresponding costs and then selects a box to query. The goal in an instance (,,,,) of Buying Information for Min Sum Set Cover is to make these decisions adaptively so as to minimize the expected number of queried boxes plus the expected cost paid for feedback until the random scenario is covered.
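To make the interaction protocol above concrete, here is a minimal simulator sketch under our own (hypothetical) interface; it is not part of the paper, only an illustration of the round structure: in every round the learner may first buy any number of signals at their listed prices and then opens one box, and the objective is the number of opened boxes plus the total amount paid.

```python
def run_buying_information(drawn, covers, learner, signals, prices):
    """Simulate one run of Buying Information for Min Sum Set Cover.

    drawn:    id of the realized scenario (hidden from the learner).
    covers:   dict scenario id -> set of boxes covering it.
    learner:  object with buy(t, history) -> indices of signals to purchase now,
              and query(t, history) -> the box to open this round.
    signals:  signals[k](drawn) -> the observation revealed by signal k.
    prices:   prices[k] = cost c_k of purchasing signal k.
    Returns the total cost: number of opened boxes plus feedback cost paid."""
    cost, history, t = 0.0, [], 0
    while True:
        t += 1
        for k in learner.buy(t, history):          # optionally buy signals first
            cost += prices[k]
            history.append(("signal", k, signals[k](drawn)))
        b = learner.query(t, history)              # then open exactly one box
        cost += 1
        covered = b in covers[drawn]
        history.append(("query", b, covered))
        if covered:                                # stop once the scenario is covered
            return cost
```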
We consider the following instance of buying information for min sum set cover. Let be the set of n boxes. The set of scenarios = {s^i}_i=1^n, where s^i_j=1 if i=j and 0 otherwise. is a uniform distribution over . We assume the cost of obtaining any single feedback is 1.
Let be any deterministic learner. We design a set of feedback _ for .
We describe _A via its feedback tree representation T(_). To do this, we will first fix the structure of T(_) and then describe the scenarios contained in each node. The structure of T(_) is defined in the following way. There are n_i+1 nodes in T(_) that have depth i. Here n_0 =0 and, for i ≥ 1, n_i ≥ 0 is a number that depends on the learner. Furthermore, at each level of T(_), only the rightmost node has children. In particular, for i ≥ 1, let v_i+1=R(v_i) be the rightmost child of v_i, where v_1 is the root of T(_).
We notice that, given the structure of T(_), any deterministic learner can be described in the following way using T(_). At every node v ∈ T(_), the learner will query a set of boxes ^v in some order. Denote by ^i the set of boxes queried by the learner at node v_i. Let n_i=|^i| ≥ 0; then the set of scenarios contained in R(v_i) is defined as {s^j ∈ v_i | j ∉^i}.
Recall that there are n_i+1 nodes in T(_) that have depth i and that we have defined the set of scenarios contained in one of these nodes. To each of the remaining n_i nodes, we assign a unique scenario covered by ^i. This gives the definition of _.
In particular, _ is useless for the learner, since every time it asks for feedback, the feedback only reveals which scenarios have not been covered so far.
Now we compute the cost of the learner. Consider the path (v_1,v_2,…,v_k) in T(_) such that ∑_i=1^k|^i|=n. That is to say, all scenarios are covered before the kth feedback is asked. Notice that (^1,…,^k) forms a partition of the scenario set. Let s ∈^i be the jth scenario in ^i covered by the learner; then the cost when scenario s is drawn is
(,s)=∑_ℓ=1^i-1n_ℓ+i-1+j,
which implies
(,I) = _s (,s) = 1/n∑_i=1^k∑_j=1^n_i(∑_ℓ=1^i-1n_ℓ+i-1+j)
= 1/n(∑_i=1^n i + ∑_i=1^k∑_j=1^n_i i-n).
We consider the two different cases. In the first case, ∑_i=1^n i ≤∑_i=1^k∑_j=1^n_i i. We notice that any deterministic learner ^* that asks for no feedback has a cost _s(^*,s)=1/n∑_i=1^n i. This means
(,I)/(^*,I)≥2/n∑_i=1^n i-1/1/n∑_i=1^n i=2-o_n(1).
In the second case, we assume ∑_i=1^n i > ∑_i=1^k∑_j=1^n_i i. Here, we define a deterministic learner ^* in the following way: ^* keeps asking for feedback until the feedback reveals the drawn scenario, and then ^* covers the drawn scenario via the unique box. It is not hard to see that any scenario in ^i costs ^* exactly i+1. Thus, _s(^*,s) =1+ 1/n∑_i=1^k∑_j=1^n_i i. In this case, we have
(,I)/(^*,I)≥1/n(∑_i=1^n i + ∑_i=1^k∑_j=1^n_i i)-1/1+ 1/n∑_i=1^k∑_j=1^n_i i= 1/n(∑_i=1^n i + ∑_i=1^k∑_j=1^n_i i)/1/n∑_i=1^k∑_j=1^n_i i-o_n(1) ≥ 2-o_n(1).
Thus, for every ϵ>0, there is no deterministic learner that is (2-ϵ)-competitive.
|
http://arxiv.org/abs/2306.02542v1
|
20230605022546
|
The influence of supersymmetric quirk particles on the W mass increment and the muon g-2 anomaly
|
[
"Guo-Li Liu"
] |
hep-ph
|
[
"hep-ph"
] |
School of Physics, Zhengzhou University, Zhengzhou 450000, P. R. China
In the quirk-assisted Standard Model, the couplings between the exotic "quirk" particles and the gauge bosons
may contribute to the W mass and muon g-2 anomalies reported by Fermilab.
We calculate the contributions from supersymmetric quirk particles as an example.
Imposing the theoretical constraints, we find that the CDF II W-boson mass increment
strictly constrains the mixing and coupling parameters and the quirk mass m_F,
while the muon g-2 anomaly cannot be explained by the exotic particles, given their large masses.
The influence of supersymmetric quirk particles on the W mass increment and the muon g-2 anomaly
Guo-Li Liu
July 31, 2023
================================================================================================
§ INTRODUCTION
Since it encodes key information about electroweak symmetry breaking (EWSB), the precision measurement of the W boson mass
can provide a stringent test of the SM and constrain various new physics models.
Recently, the CDF II collaboration at the Fermilab Tevatron collider <cit.>,
using data corresponding to 8.8 fb^-1 of integrated luminosity collected in proton-antiproton
collisions at a 1.96 TeV center-of-mass energy, obtained a new value of the W boson mass,
M_W=80,433.5 ± 6.4( stat) ± 6.9 ( syst)=80,433.5± 9.4 MeV/c^2 ,
which is in significant tension with the standard model (SM) expectation <cit.>,
M_W=80,357± 4( inputs)± 4 ( theory) MeV/c^2 ,
and the discrepancy is <cit.>[This estimate of the discrepancy can change, with all the variations at the level of 10%. For instance, in Refs. <cit.>, the updated global-fit central values are M_W^(exp) = 80.413 GeV and M^(SM)_W = 80.350 GeV. In any case, the anomaly in the W-boson mass is clearly present.] Δ M_W= 70 ± 11 MeV/c^2.
Such a deviation, if confirmed by other experiments, would strongly indicate the existence
of new physics beyond the SM <cit.>. It is therefore interesting to survey
which new constraints the CDF II data impose on new physics models,
in addition to those from the 125 GeV Higgs.
This W mass increment departs considerably from the SM prediction,
which may imply the existence of new physics beyond the SM.
Many attempts have been made within new physics frameworks, in which the anomaly is usually attributed
to shifts of the oblique parameters, especially Δ T <cit.>.
On the other hand, the precision measurement of a_μ=(g-2)/2 has been performed by the E821 experiment at
Brookhaven National Laboratory <cit.>,
with the current world-averaged result given by <cit.>,
a_μ^exp= 116592091(±54)(±33)× 10^-11.
While the Standard Model (SM) prediction from the Particle Data Group gives<cit.>,
a_μ^SM= 116591803(±1)(±42)(±26)× 10^-11,
the difference between theory and experiment shows a 4.2σ discrepancy, hinting at tantalizing new
physics beyond the SM.
In models beyond the SM, there may exist a new confining unbroken non-abelian gauge interaction
<cit.>,
in analogy to quantum chromodynamics (QCD) of strong interaction.
In general, one can assume the new color group confinement scale Λ_X is smaller than the
QCD scale Λ_QCD such that the new color degree
of freedom bears the name infracolor (IC).
We call these infracolor gluon fields igluons, and the fermions quirks[Quirks
can have scalar partners called squirks, and the two can exist in the
same supermultiplets.].
This setup is called the quirk model, which can also be regarded as a certain limit of QCD with some heavy quarks
called quirks[This particle was also called "thetons"<cit.> or "iquark"<cit.>]:
the scale Λ_X at which the new interaction becomes strong is much smaller than the quirk masses <cit.>,
and the light quarks are removed from the particle list.
Unlike the real world with light quarks, and without having to worry about spontaneous chiral symmetry breaking,
this hypothetical QCD can have drastically different phenomenology.
All kinds of possible new physics beyond the SM have been searched at the Large Hadron Collider
(LHC), after the discovery of the SM Higgs boson.
Solutions to the gauge hierarchy problem of the SM of particle physics
such as supersymmetry and composite Higgs models usually predict a colored top partner
with mass around TeV scale. They have been challenged by the null results of LHC searches
so far. Theories of neutral naturalness <cit.> aim to address the gauge hierarchy problem
without introducing colored states, thus relieving the tension with the LHC searches. This
class of models includes folded supersymmetry <cit.>, quirky little Higgs <cit.>,
twin Higgs <cit.>,
minimal neutral naturalness model <cit.> and so on. In those models, some new SU(N) gauge
symmetries are introduced in addition to the SM gauge group.
The quirk particle is charged under both the SM electroweak gauge group and the new confining G_X gauge group and
has mass much larger than the confinement scale (Λ_X) of the G_X.
At colliders, the quirk can only be produced in pairs due to the conserved G_X symmetry.
The interaction between two quirks induced by the G_X gauge bosons, the infracolor force F_s,
will lead to non-conventional signals in the detector.
The manifestation of the quirk signal is strongly dependent on Λ_X due to F_s ∝Λ_X^2<cit.>.
The quirk particles may have sizeable effects in view of their electroweak couplings to the gauge bosons and their couplings to the leptons,
so in this paper we consider the quirk contributions to the W mass increment and the g-2 anomaly;
the paper is organized as follows.
In Sec <ref>, we introduce the supersymmetric quirk particles and the relevant couplings.
In Sec <ref>, we discuss the constraints of the CDF II W boson mass data on the parameters within the quirk models.
In Sec <ref>, the quirk contribution to the g-2 anomaly will be calculated.
Sec <ref> gives our conclusions.
§ THE QUIRK PARTICLES AND THE RELEVANT COUPLINGS
The infracolor dynamics can allow the QQ̅ bound state to survive over
distances of order a centimeter, preventing the pair from annihilating.
In some cases, a "squirk-antisquirk"
pair produced at the LHC would quickly lose its excitation energy
by bremsstrahlung and relax to the ground state of the scalar quirkonium<cit.>.
In this work, we consider vector-like quirks with respect to the electroweak gauge group,
together with their scalar partners in the same supermultiplets under the framework of supersymmetry.
The new color group G_X can be SU(2)_X or SO(3)_X or SU(3)_X,
and the new fields are taken to transform in the N=2, 3, or 3 dimensional representations
respectively for these three cases.
Thus the new quirk chiral
supermultiplets containing fermion multiplet D, L, S and their partners D̃, L̃, S̃ transform under G_X × SU(3)_c × SU(2)_L × U(1)_Y<cit.>.
D, L, S and their partners D̃, L̃, S̃ may be assumed to obtain their masses
in the same way as the superpotential term μ H_uH_d of the minimal SUSY model (MSSM), where H_u and H_d
are the vector-like Higgs chiral supermultiplets of the SUSY models,
with VEVs v_u, v_d, ratio tanβ = v_u/v_d, and v = √(v_u^2 + v_d^2)≈ 175 GeV.
The non-renormalizable superpotential terms may appear as <cit.>:
W = 1/M_P^2 X X (
λ_μ H_u H_d + λ_D D D + λ_L L L +
λ_S_i S_i
S_i ),
where i=1,…, n_S with n_S SM group singlets in the same representations of G_X,
and the reduced Planck mass M_P = 2.4 × 10^18 GeV.
The fields X, X will get VEVs roughly of order 10^11 GeV.
The vector-like mass terms in the low-energy effective superpotential can be written as<cit.>
W = μ H_u H_d
+ μ_D D D + μ_L L L + μ_S_i S_i S_i
.
where μ, μ_D, μ_L, μ_S can be of order 100 GeV to 1 TeV, provided that
the corresponding couplings λ_μ, λ_D, λ_L,
λ_S are not too small.
For n_S > 0, the new chiral supermultiplets can have Yukawa couplings
in addition to their mass terms in eq. (<ref>):
W =
k_i H_u L S_i + k'_i H_d L S_i.
On the other hand, if there exists a superpotential term such as
W = λ_ℓS̃Lℓ
with ℓ an MSSM SU(2)_L doublet lepton, we may expect it to have some influence on the muon g-2 discrepancy
between experiment and the theoretical calculation.
§ THE S, T, U PARAMETERS AND W-MASS INCREMENT
The corrections to various electroweak precision observables can be obtained from the corresponding oblique parameters.
The new physics contributions to the W-boson mass increment can be expressed in terms of Peskin's S, T, U oblique parameters <cit.>
as follows <cit.>,
Δ m_W= \frac{α m_W}{2(c_W^2-s_W^2)}\left(-\frac{1}{2}S+c_W^2 T+\frac{c_W^2-s_W^2}{4s_W^2} U\right) ,
with
α S = 4s_w^2 c_w^2
\left[ Π'_{ZZ}(0)
-\frac{c_w^2-s_w^2}{s_w c_w}Π'_{Zγ}(0)
-Π'_{γγ}(0)\right] ,
α T = \frac{Π_{WW}(0)}{m_W^2} - \frac{Π_{ZZ}(0)}{m_Z^2} ,
α U = 4s_w^2
\left[ Π'_{WW}(0) - c_w^2Π'_{ZZ}(0)
- 2s_wc_wΠ'_{Zγ}(0) - s_w^2Π'_{γγ}(0)\right] ,
and α^-1(0)=137.035999084, s_W^2=0.23126.
The oblique parameters (S, T, U)<cit.>, which represent radiative corrections to the two-point functions of gauge bosons,
can describe most effects on precision measurements.
As we know, the total size of the new physics sector can be measured by the oblique parameter S,
while weak-isospin breaking can be measured by the T parameter.
The new results of S, T, U can be given as <cit.>,
S=0.06± 0.10, T=0.11± 0.12, U=0.14 ± 0.09.
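As a quick numerical cross-check (our own back-of-the-envelope sketch, not from the paper), plugging these central values into the oblique-parameter formula above with the inputs α^-1(0)=137.036, s_W^2=0.23126 and m_W ≈ 80.357 GeV quoted earlier gives a shift of roughly 74 MeV, i.e. of the same size as the quoted 70 ± 11 MeV anomaly:

```python
# Inputs quoted earlier in the text (GeV where dimensionful).
alpha = 1.0 / 137.035999084
sw2 = 0.23126
cw2 = 1.0 - sw2
mW_SM = 80.357

def delta_mW(S, T, U):
    """W-mass shift from the oblique parameters (formula above); result in GeV."""
    prefactor = alpha * mW_SM / (2.0 * (cw2 - sw2))
    return prefactor * (-0.5 * S + cw2 * T + (cw2 - sw2) / (4.0 * sw2) * U)

print(delta_mW(0.06, 0.11, 0.14))   # ~0.074 GeV for the central S, T, U values
```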
The most important electroweak precision constraints on quirk models come from the
electroweak oblique parameters S and T <cit.>,
and we will proceed to study the connection between the electroweak precision data and the W mass.
The model produces its main corrections to the masses of the gauge bosons
via self-energy diagrams with the vector-like extra fermions running in the loop.
With the Yukawa couplings k, k', the new contributions to the Peskin-Takeuchi S,T observables from the new fermions
can be given as<cit.>,
Δ T = \frac{N v^4}{480 π s_W^2 M_W^2 M_F^2}
[13 (k̂^4 + k̂'^4)
+ 2 (k̂^3 k̂' + k̂k̂'^3)
+ 18 k̂^2 k̂'^2 ],
Δ S = \frac{N v^2}{30 π M_F^2}
[4 k̂^2 + 4 k̂'^2 -7 k̂k̂' ].
where k̂ = k sinβ, k̂' = k' cosβ, and v ≈ 175 GeV.
In our analysis, we will perform a global fit to the predictions of S, T parameters in profiled 1σ favoured regions.
We scan m_F, tanβ, k and k' parameters in the following ranges:
100 GeV≤ m_F ≤ 1100 GeV, 1≤ tanβ≤ 50, 0.01 < k,k' < 1.
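A minimal sketch of such a scan (our own illustration, not the authors' code): draw random points in the quoted ranges, evaluate ΔS and ΔT from the expressions above, propagate them to Δm_W with the oblique-parameter formula (the new fermions' ΔU is not quoted in the text and is assumed negligible here), and retain the points within 1σ of the 70 ± 11 MeV shift. The acceptance count of this toy scan need not match the paper's quoted numbers.

```python
import math
import random

alpha = 1.0 / 137.035999084
sw2 = 0.23126
cw2 = 1.0 - sw2
v, mW = 175.0, 80.357                       # GeV, as quoted in the text

def oblique_shifts(N, mF, tanb, k, kp):
    """Delta S and Delta T from the new vector-like fermions (expressions above)."""
    sb = tanb / math.hypot(1.0, tanb)       # sin(beta)
    cb = 1.0 / math.hypot(1.0, tanb)        # cos(beta)
    kh, kph = k * sb, kp * cb
    dT = N * v**4 / (480.0 * math.pi * sw2 * mW**2 * mF**2) * (
        13.0 * (kh**4 + kph**4) + 2.0 * (kh**3 * kph + kh * kph**3)
        + 18.0 * kh**2 * kph**2)
    dS = N * v**2 / (30.0 * math.pi * mF**2) * (
        4.0 * kh**2 + 4.0 * kph**2 - 7.0 * kh * kph)
    return dS, dT

def delta_mW(S, T, U=0.0):                  # oblique-parameter formula above (GeV)
    return alpha * mW / (2.0 * (cw2 - sw2)) * (
        -0.5 * S + cw2 * T + (cw2 - sw2) / (4.0 * sw2) * U)

accepted = 0
for _ in range(10000):
    mF, tanb = random.uniform(100.0, 1100.0), random.uniform(1.0, 50.0)
    k, kp = random.uniform(0.01, 1.0), random.uniform(0.01, 1.0)
    dS, dT = oblique_shifts(2, mF, tanb, k, kp)          # N = 2, as in the figures
    if abs(delta_mW(dS, dT) - 0.070) < 0.011:            # within 1 sigma of 70 +- 11 MeV
        accepted += 1
print(accepted, "of 10000 toy points pass the 1 sigma window")
```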
In Fig. <ref> and Fig. <ref>, we show how the W mass increment varies with the ratio tanβ, the fermion mass m_F, and the couplings k, k', which lie in the ranges (1-50), (100-1100 GeV), and (0.01-1), respectively, with the other parameters fixed as indicated in the figures.
From the two figures, we can see that the W mass increment decreases monotonically with increasing m_F and tanβ, while it increases monotonically with increasing N, k, and k'.
The dependence of the W mass increment on N and m_F is evident.
However, as tanβ grows, its influence becomes smaller and smaller.
This is because, from Eqs. (<ref>) and (<ref>), the ratio tanβ
enters Δ S and Δ T only through sinβ and cosβ; as tanβ increases, the former approaches its maximum value 1 and the latter its minimum value 0.
We can also see in Fig. <ref> that the effects of the couplings k and k' are not
equivalent: the contribution from k can span from negative to positive values of order 10^-2 GeV, while that from k' stays at the 10^-3 GeV level. The reason for the insensitivity to k' is that in the oblique parameters of Eqs. (<ref>) and (<ref>), k' is always multiplied by cosβ, which is small for the value tanβ=30 that we choose, as shown in Fig. <ref> and Fig. <ref>.
It is therefore necessary to consider the contributions over the whole parameter space.
Note that in most of the parameter space the W mass increment lies within the experimental limit, so we may try to constrain the parameters with the bound on the W mass discrepancy between experiment and the SM prediction in Eq. (<ref>).
The above constraints on the W mass increment from the parameters N, m_F, tanβ, k, k' are obtained independently.
In Fig. <ref> we consider their joint effect by scanning for points whose mass increment lies in the 1σ range of the experimental bound. We scan 10000 random points, of which 6149 meet the constraints.
Note that we have fixed N=2 in Fig. <ref> since it does not exert much influence on the results.
From the first panel of Fig. <ref>, we can see that
there are almost no constraints on m_F: it can take any value selected in the scan.
The reason is that, over the whole range of the fermion mass m_F in our parameter space,
Eqs. (<ref>) and (<ref>) show that the prefactor of Δ T is much larger than that of Δ S while the bracketed terms of the two are of similar size, so the contribution of Δ T dominates.
At the same time, the prefactors are much smaller than the bracketed terms.
From Eq. (<ref>), the larger weight of Δ T relative to Δ S ensures that the two together contribute positively to Δ m_W, and m_F enters only through the prefactors. Therefore,
as long as the resulting shift is positive, m_F hardly affects Δ m_W; that is to say, m_F is not constrained by the CDF data.
From the first panel of Fig. <ref>, we can also see that the contributions from k and k' are not the same, just as discussed above: k is bounded as k>0.4, while k' ranges over the whole space. The reason for the insensitivity to k' is again that k' is always multiplied by the factor cosβ, which is small for large tanβ (which starts from 1 in the scan).
The third panel of Fig. <ref> shows that the constraint on tanβ is also quite weak, which can also be seen in the right panel of Fig. <ref>;
this is because k and sinβ, and k' and cosβ, always appear together, and the relation sin^2β+cos^2β=1 ultimately limits the change of the contribution with
increasing tanβ.
Thus we can conclude that in most of the parameter space, the parameters of the supersymmetric quirk models can account for the CDF data on the W mass increment,
and only the constraint on k is significant, k≳ 0.4.
§ THE G-2 ANOMALY OF THE NEW COUPLINGS
In quirk models, the muon g-2 contributions are mainly obtained via the
one-loop diagrams induced by the coupling shown in Eq. (<ref>), as in Fig. <ref>.
Note that the two-loop Barr-Zee diagrams are absent, since there is no mixing between the SM gauge bosons and the new
quirk particles <cit.>, and the diagram containing two such couplings is suppressed by m_μ^2/m_S^2<cit.>.
The one-loop contribution can be written as<cit.>:
Δ a_μ^(1-loop) =
\frac{λ_ℓ^2 m_μ^2}{16π^2} ∫_0^1 dx \frac{x^3-x^2}{m_S^2 x+ m_L^2(1-x)},
where λ_ℓ is the coupling in Eq. (<ref>), through which the one-loop magnetic moment contribution shown in Fig. <ref> arises.
Since S̃ (L) is the scalar (fermion) component of the supermultiplet, their masses should be
at the same level as that of the fermion F, so we also vary them in the range 100-1100 GeV.
We scan coupling λ_ℓ^2 from 0 to 1.
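As a quick numerical illustration of the one-loop expression above (our own evaluation, not the authors' code): for λ_ℓ = 1 and masses in the scanned 100-1100 GeV range, the Feynman-parameter integral yields |Δa_μ| in the 10^-12 to 10^-10 range, orders of magnitude below the O(10^-9) deficit, in line with the conclusion below.

```python
import math

m_mu = 0.10566   # muon mass in GeV

def delta_a_mu(lam, mS, mL, steps=200000):
    """Evaluate the one-loop expression above by simple midpoint integration."""
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) / steps
        total += (x**3 - x**2) / (mS**2 * x + mL**2 * (1.0 - x))
    integral = total / steps
    return lam**2 * m_mu**2 / (16.0 * math.pi**2) * integral

for m in (100.0, 500.0, 1100.0):
    # the integrand is negative on (0, 1); only the magnitude matters for this comparison
    print(m, abs(delta_a_mu(1.0, m, m)))     # roughly 6e-10, 2e-11, 5e-12
```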
Fig. <ref> shows the one-loop contributions of the scalar and the fermion from the supermultiplet,
and we find that the contribution is quite small,
of order 10^-10, which cannot explain the discrepancy
between the experiments and the theoretical prediction.
This is not surprising, since it has been pointed out that the contribution can be large only if the
scalar masses are very small, of order several GeV <cit.>,
whereas our choice of the new particle masses is above 100 GeV.
Hence, at the one-loop level, it is difficult to fill the gap between the experiments and the theoretical prediction in the supersymmetric quirk models.
Therefore, given the absence of the two-loop Barr-Zee diagrams, we conclude that
the supersymmetric quirk models cannot account for the muon g-2 anomaly.
§ CONCLUSIONS
In this paper, we first show how the W mass increment varies with the parameters tanβ, m_F, and k, k' for different new color group representations N,
and we find that the dependence of the W mass increment on these parameters is evident.
We then scan for the points whose mass increment lies
in the 1σ range of the experimental bound
and find that there are almost no constraints on m_F,
that the contributions from k and k' are not the same, with k>0.4 while k' ranges over the whole space,
and that the constraint on tanβ is also quite weak.
In short, in most of the parameter space,
supersymmetric quirk models can account for the CDF data on the W mass increment,
and only the constraint on k is significant, k≳ 0.4.
We also calculate the contribution to the muon g-2 anomaly at the one-loop level,
and find that it is difficult to account for the gap between the experiments and
the theoretical prediction in the supersymmetric quirk models.
§ ACKNOWLEDGMENT
This work was supported by the National Natural Science Foundation of China(NSFC)
under grant 12075213,
by the Fundamental Research Cultivation Fund for Young Teachers of Zhengzhou University(JC202041040)
and the Academic Improvement Project of Zhengzhou University.
99CDF:W CDF Collaboration et al., Science 376, 170-176 (2022).
SM:W P. A. Zyla et al., Prog. Theor. Exp. Phys. 2020, 083C01 (2020).
2205.12237S. Afonin, W-boson mass anomaly as a manifestation of spontaneously broken additional SU(2) global symmetry on a new fundamental scale, Universe 8 (2022) 627, arXiv:2205.12237.
2204.04204J. de Blas, M. Pierini, L. Reina, L. Silvestrini,
Impact of the Recent Measurements of the Top-Quark and W-Boson Masses on Electroweak Precision Fits, Phys. Rev. Lett. 129 (2022) 27, 271801, arXiv:2204.04204.
Sci376-22-6589
T. Aaltonen, et al. [CDF Collaboration], Science 376 (2022) 6589.
Athron:muong-2 P. Athron, C. Balázs, D. H. J. Jacob, W. Kotlarski, D. Stöckinger and H. Stöckinger-Kim, JHEP 09 (2021), 080
doi:10.1007/JHEP09(2021)080
[arXiv:2104.03691 [hep-ph]].
anomaly:W E. Bagnaschi, M. Chakraborti, S. Heinemeyer, I. Saha and G. Weiglein,
[arXiv:2203.15710 [hep-ph]].
STU M.E. Peskin, T. Takeuchi, Phys. Rev. Lett. 65 (1990) 964;
M.E. Peskin, T. Takeuchi, Phys. Rev. D 46 (1992) 381.
WAR-g-2 C. Patrignani et al. (Particle Data Group), Chin. Phys.
C40, 100001 (2016).
BNL-g-2 G. W. Bennett et al. (Muon g-2), Phys. Rev. D73,
072003 (2006), arXiv:hep-ex/0602035.
Okun-1980[1] L. B. Okun, JETP Lett. 31, 144 (1980) [Pisma Zh. Eksp. Teor. Fiz. 31, 156 (1979)]; Nucl.
Phys. B 173, 1 (1980).
Bjorken-1979[2] J. D. Bjorken, SLAC-PUB-2372 (1979), in Quantum Chromodynamics, proceedings of the
SLAC Summer Institute on Particle Physics, Stanford, California, 1979, edited by Anne
Mosher (SLAC, Stanford, 1980).
Gupta-Quinn-1982[3] S. Gupta and H. R. Quinn, Phys. Rev. D 25, 838 (1982).
0805.4642[4] J. Kang and M. A. Luty, arXiv:0805.4642 [hep-ph].
0604261 M. J. Strassler and K. M. Zurek, Phys. Lett. B 651, 374 (2007), [arXiv:hep-ph/0604261].
0810.1524
Kingman Cheung, Wai-Yee Keung, Tzu-Chiang Yuan, Nucl.Phys.B 811, (2009) 274, arXiv: 0810.1524.
1509.04284[1] D. Curtin and P. Saraswat,
Phys. Rev. D93, (2016), no. 5 055044, [arXiv:1509.04284].
0609152[2] G. Burdman, Z. Chacko, H.-S. Goh, and R. Harnik,
JHEP 02 (2007) 009, [hep-ph/0609152].
0805.4667[3] G. Burdman, Z. Chacko, H.-S. Goh, R. Harnik, and C. A. Krenke,
Phys. Rev. D78 (2008) 075028, [arXiv:0805.4667].
0812.0843[4] H. Cai, H.-C. Cheng, and J. Terning,
JHEP 05 (2009) 045, [arXiv:0812.0843].
0506256[5] Z. Chacko, H.-S. Goh, and R. Harnik,
Phys. Rev. Lett. 96 (2006) 231802, [hep-ph/0506256].
1501.05310[6] N. Craig, A. Katz, M. Strassler, and R. Sundrum,
JHEP 07 (2015) 105, [arXiv:1501.05310].
1905.02203[7] J. Serra, S. Stelzl, R. Torre, and A. Weiler,
JHEP 10 (2019) 060, [arXiv:1905.02203].
1810.01882[8] L.-X. Xu, J.-H. Yu, and S.-H. Zhu,
Phys. Rev. D 101, 095014 (2020), arXiv:1810.01882.
2002.07503Jinmian Li, Tianjun Li, Junle Pei, Wenxing Zhang,
Euro. Phys. J. C 80, 651 (2020), arXiv:2002.07503.
1012.2072Stephen P. Martin,
Phys. Rev. D 83, (2011) 035019, arXiv: 1012.2072.
Kim:1983dt J.E. Kim and H.P. Nilles,
Phys. Lett. B 138, 150 (1984).
Murayama:1992dj H. Murayama, H. Suzuki and T. Yanagida, Phys. Lett. B 291, 418 (1992).
STU1 W.J. Marciano, J.L. Rosner, Phys. Rev. Lett. 65 (1990) 2963;
W.J. Marciano, J.L. Rosner, Phys. Rev. Lett. 68 (1992) 898, Erratum.
STU2 G. Altarelli, R. Barbieri, Phys. Lett. B 253 (1991) 161.
Spheno W. Porod, Comput. Phys. Commun. 153 (2003) 275 [arXiv:hep-ph/0301101];
W. Porod and F. Staub, Comput. Phys. Commun. 183 (2012) 2458 [arXiv:1104.1573].
W:STU R. Boughezal, J.B. Tausk, J.J. van der Bij, Nuclear Physics B 725 (2005) 3-14.
2204.03796C.-T. Lu, L. Wu, Y. Wu, B. Zhu, arXiv:2204.03796.
top-bottom-seesaw H.-C. Cheng, B. A. Dobrescu, J. Gu, JHEP08(2014)095, arXiv: 1311.5928;
C. Balazs, T. Li, F. Wang and J. M. Yang, JHEP 1301, 186 (2013), arXiv:1208.3767.
1502.04199 Victor Ilisie, New Barr-Zee contributions to (g-2)mu in two-Higgs-doublet models,
JHEP 04, (2015) 077, arXiv:1502.04199.
1loop-deltaamu
J. P. Leveille, Nucl. Phys. B 137, 63 (1978);
S. R. Moore, K. Whisnant, and Bing-Lin Young, Phys. Rev. D 31, (1985) 105;
Farinaldo S. Queiroz, William Shepherd, Phys. Rev. D 89 (2014) 095024, arXiv:1403.2309.
g-2-th Guo-Li Liu,Ping Zhou,
The Contribution of Charged Bosons with Right-Handed Neutrinos to the Muon g-2 Anomaly in the Twin Higgs Models,
Universe 8 (2022) 12, 654, arXiv:2101.00607.
Ellwange-fwangSee e.g,
Domingo, F.; Ellwanger, U. Constraints from the Muon g-2 on the Parameter Space of the
NMSSM. J. High Energy Phys. 2008, 2008, 79.
https://doi.org/10.1088/1126-6708/2008/07/079
Explaining the Muon g-2 Anomaly in Deflected AMSB for NMSSM, Li-Jun Jia, Zhuang Li, Fei Wang,
Universe 2023, 9, 214, arXiv:2305.04623.
|
http://arxiv.org/abs/2306.10765v1
|
20230619081514
|
Path to Medical AGI: Unify Domain-specific Medical LLMs with the Lowest Cost
|
[
"Juexiao Zhou",
"Xiuying Chen",
"Xin Gao"
] |
cs.AI
|
[
"cs.AI",
"cs.CL",
"cs.CV"
] |
Medical artificial general intelligence (AGI) is an emerging field that aims to develop systems specifically designed for medical applications that possess the ability to understand, learn, and apply knowledge across a wide range of tasks and domains.
Large language models (LLMs) represent a significant step towards AGI.
However, training cross-domain LLMs in the medical field poses significant challenges primarily attributed to the requirement of collecting data from diverse domains.
This task becomes particularly difficult due to privacy restrictions and the scarcity of publicly available medical datasets.
Here, we propose Medical AGI (MedAGI), a paradigm to unify domain-specific medical LLMs with the lowest cost, and suggest a possible path to achieve medical AGI. With an increasing number of domain-specific professional multimodal LLMs in the medical field being developed, MedAGI is designed to automatically select appropriate medical models by analyzing users' questions with our novel adaptive expert selection algorithm.
It offers a unified approach to existing LLMs in the medical field, eliminating the need for retraining regardless of the introduction of new models.
This characteristic renders it a future-proof solution in the dynamically advancing medical domain.
To showcase the resilience of MedAGI, we conducted an evaluation across three distinct medical domains: dermatology diagnosis, X-ray diagnosis, and analysis of pathology pictures.
The results demonstrated that MedAGI exhibited remarkable versatility and scalability, delivering exceptional performance across diverse domains.
Our code is publicly available to facilitate further research at <https://github.com/JoshuaChou2018/MedAGI>.
Healthcare, Deep learning, Large language model, Artificial general intelligence
Path to Medical AGI: Unify Domain-specific Medical LLMs with the Lowest Cost
Juexiao Zhou^1,2,#, Xiuying Chen^1,2,#, Xin Gao^1,2,*
^1Computer Science Program, Computer, Electrical and Mathematical Sciences and Engineering Division, King Abdullah University of Science and Technology (KAUST), Thuwal 23955-6900, Kingdom of Saudi Arabia
^2Computational Bioscience Research Center, King Abdullah University of Science and Technology (KAUST), Thuwal 23955-6900, Kingdom of Saudi Arabia
^#These authors contributed equally.
^*Corresponding author. e-mail: [email protected]
July 31, 2023
========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
§ INTRODUCTION
Artificial General Intelligence (AGI)<cit.> refers to highly autonomous systems that possess the ability to understand, learn, and apply knowledge across a wide range of tasks and domains.
These systems are designed to match or even exceed human competencies in intellectual tasks.
Essentially, AGI represents the pinnacle objective within the field of artificial intelligence (AI)<cit.>.
Within this realm, Medical AGI is an emerging field that aims to develop Artificial General Intelligence systems specifically designed for medical applications, encompassing tasks such as disease diagnosis, treatment planning, and patient care optimization.
Large language models (LLMs)<cit.> represent a significant step towards AGI by showcasing the power of language processing and understanding.
During the past few months, significant progress has been made in the field of LLMs, revolutionizing language comprehension and enabling complex linguistic tasks<cit.>.
Among the highly anticipated models, ChatGPT, developed by OpenAI, has garnered attention for its exceptional capabilities.
This model is especially proficient in generating human-like text based on the input it receives, demonstrating an impressive understanding of nuanced contexts and varied linguistic styles.
Specifically, ChatGPT shows great potential in Medical AGI by assisting with medical disease diagnosis through patient conversations, such as ophthalmic diagnosis<cit.>, pathology diagnosis<cit.>, and health care discussion<cit.>.
One limitation of ChatGPT is its exclusive reliance on text input, with no support for direct image input.
This absence of multimodal capabilities narrows its applicability in medical diagnosis, a field that often depends significantly on image-based data.
<cit.> tries to solve this problem in ChatCAD by integrating multiple image-text networks to transform medical imaging data, including X-rays, CT scans, and MRIs, into textual descriptions.
These transformed descriptions can subsequently be used as input for ChatGPT.
However, the separation of the image-to-text transformation process from the LLMs process not only underutilizes the full potential of LLMs but can also lead to compromised performance if the quality of the image-to-text model is lacking.
Furthermore, it is crucial to address the potential data privacy concerns associated with ChatGPT's API for uploading text descriptions, as both medical images and textual patient information are highly sensitive<cit.>.
To ensure the protection of patient confidentiality, careful consideration should be given to implementing robust privacy protection measures <cit.>.
To solve the above two challenges, a number of open-source multimodal LLMs were proposed<cit.>.
In the medical field, there are two main approaches being adopted.
The first involves training an end-to-end large multimodal model that combines a vision encoder and an LLM for visual and language understanding, such as LLaVA-Med<cit.> and PathAsst<cit.>.
This strategy often faces challenges due to the need to gather data from various domains, which is especially challenging in medicine due to privacy issues and the lack of open-source datasets.
The second approach seeks to bridge the gap between LLMs and pre-trained image encoders using an additional alignment layer, which is then fine-tuned using domain-specific data.
This method, as employed in models such as SkinGPT-4<cit.>, ProteinChat<cit.>, XrayGPT<cit.> and XrayChat<cit.>, is more feasible because fewer training instances are required to fine-tune a smaller number of parameters.
It's optimistic to envision that, in the future, an increasing number of domain-specific professional multimodal LLMs in the medical field will be developed.
However, having them dispersed across various platforms, each with their own instructions, and leaving it up to users to find the model that fits their specific needs could be quite costly.
It is also costly in terms of storage and loading resources to repeatedly store and load the same image encoder and language models for different multimodal LLMs.
Merging these models to form a universal medical model by using all the collected data is also unrealistic, given that medical data is typically non-public and not shared.
As an alternative, integrating these models into a unified platform could prove to be a powerful solution.
Hence, in this work, we propose Medical AGI (MedAGI), a paradigm to unify domain-specific medical LLMs with the lowest cost, and suggest a possible path to achieve medical AGI.
Specifically, our MedAGI system is designed to automatically select appropriate medical models by analyzing users' questions. This selection process leverages the detailed descriptions of different medical models provided in their respective introductions, ensuring the best fit for the user's requirements.
In addition to saving space and being user-friendly, our model also boasts extendability.
It doesn't require retraining, regardless of the number of new models proposed, making it a future-proof solution in the rapidly evolving field of medical AI.
To demonstrate the robustness of MedAGI, we evaluated it in three medical domains, including dermatology diagnosis, X-ray diagnosis, and analysis of pathology pictures.
Our experiments revealed that MedAGI is proficient at selecting the appropriate models to match various user requirements.
In conclusion, MedAGI stands out as a versatile and easily scalable solution.
As the community continues to develop and increase the number of alignment layers trained on various domains, MedAGI only needs to manage these alignment layers to deliver a domain-generic performance, making it a promising tool for the future.
§ RESULTS
§.§ Design of MedAGI
MedAGI is a paradigm to unify domain-specific medical LLMs with the lowest cost and a possible path to achieving medical AGI (Figure <ref>).
By taking the user-uploaded image and user question as inputs, the system is capable of answering questions pertaining to different domains, including dermatology, X-ray analysis, and pathology, regarding the provided image.
Concretely, the uploaded image is first processed by the Vision Transformer (VIT)<cit.> and Q-Transformer models<cit.> for comprehensive understanding.
The VIT model partitions the image into smaller patches and extracts crucial features.
The Q-Transformer model then generates an embedding of the image by leveraging a transformer-based architecture, enabling the model to consider the image's contextual information.
Then MedAGI leverages the detailed descriptions of different medical models provided in their respective introductions stored in the database and selects the adaptive expert alignment layer in the domain-specific model that matches the user's intention the most.
The layer is then used to align the visual representation from Q-Transformer with the user question, enabling a coherent analysis of the image.
Finally, the LLM utilizes the aligned information to generate a text-based diagnosis, providing a clear and concise description of the image corresponding to the user's question.
Thus, MedAGI achieves AGI-like capability for medical diagnosis purposes, where users no longer need to care about which domain their input image belongs to.
§.§ MedAGI Automatically Selects the Most Suitable Expert Layer
In the absence of MedAGI, users are required to manually select the appropriate multimodal LLMs based on the specific image type and the manner in which they pose their questions. For example, they might have to choose between SkinGPT-4 for dermatology diagnosis, XrayChat for X-ray analysis, or PathologyChat for pathology image analysis, which adds complexity and necessitates domain-specific considerations.
Herein, MedAGI offers a unified interface that eliminates the need for users to worry about the specific domain to which a particular task belongs. It provides a seamless experience by integrating various domain-specific models into a single framework. To illustrate this, we conducted a comparative study involving MedAGI, SkinGPT-4, XrayChat, PathologyChat, and MiniGPT-4 across three domains, with three cases per domain, as depicted in Figure <ref>-<ref>.
As expected, SkinGPT-4, XrayChat, and PathologyChat performed well within their respective domains. SkinGPT-4 excelled in dermatology diagnosis, XrayChat showed proficiency in X-ray analysis, and PathologyChat demonstrated effectiveness in pathology image analysis. However, when faced with cross-domain scenarios, these domain-specific models exhibited limitations due to their lack of cross-domain knowledge and expertise.
In contrast, MedAGI proved capable of providing accurate and appropriate answers to user queries, even in cross-domain situations. This highlights MedAGI's domain-agnostic nature and its ability to handle a wide range of medical tasks, transcending specific domains.
§.§ Scalability of MedAGI
The scalability of MedAGI extends beyond the domains of dermatology diagnosis, X-ray analysis, and pathology image analysis. MedAGI's design allows for easy extension to a wide range of medical domains, making it a scalable solution for various healthcare applications. For instance, MedAGI could be seamlessly applied to domains such as cardiology, neurology, radiology, oncology, and many others. By leveraging domain-specific medical LLMs and incorporating them into the MedAGI framework, the system could analyze and interpret data from diverse medical specialities. This scalability enables healthcare professionals to access MedAGI's domain-generic capabilities across a broader spectrum of medical disciplines.
§ METHODS
§.§ Data processing and model training
To demonstrate the robustness of MedAGI, we evaluated it in three medical domains, including dermatology diagnosis, X-ray diagnosis, and analysis of pathology pictures by implementing SkinGPT-4, XrayChat, and PathologyChat with MiniGPT-4 as the backbone, and gathering the expert alignment layer from them.
To implement SkinGPT-4, we followed the procedures demonstrated in <cit.> and used two public datasets (SKINCON<cit.> and Dermnet) and a private in-house dataset, where the public datasets were used for the step 1 training, and the second public dataset and our in-house dataset were used for the step 2 training.
To implement XrayChat, we followed the procedures demonstrated in <cit.> and used 400K chest X-ray images and instructions, from Open-i and MIMIC CXR<cit.>.
To implement PathologyChat, we collected 262,777 patches extracted from 991 H&E-stained gastric slides with Adenocarcinoma subtypes paired with captions extracted from medical reports<cit.>.
During the training of both steps, the max number of epochs was fixed to 5, the iteration of each epoch was set to 5000, the warmup step was set to 5000, batch size was set to 2, the learning rate was set to 1e-4, and max text length was set to 160. The entire fine-tuning process required approximately 9 hours to complete and utilized two NVIDIA V100 (32GB) GPUs. The training was conducted on a workstation equipped with 252 GB RAM, 112 CPU cores, and two NVIDIA V100 GPUs.
§.§ Algorithm for Adaptive Selection of Expert Alignment Layers
Our expert alignment layer selection considers both the user question and different model instructions.
The model description is derived from the abstract of the corresponding paper.
Formally, we represent the user input as q={w^q_1, ⋯, w^q_L_q}, where w^q_i is the i-th word, and L_q is the input length.
Similarly, the j-th model description is denoted as d={w^d,j_1,⋯,w^d,j_L_d}.
We employ a BERT model pre-trained on 215M question-answering pairs from diverse sources<cit.> to encode each word sequence:
{𝐡^q_i, ⋯, 𝐡^q_L_q} =Enc(w^q_1, ⋯, w^q_L_q),
{𝐡^d,j_i, ⋯, 𝐡^d,j_L_d} =Enc(w^d,j_1, ⋯, w^d,j_L_d).
where Enc is the encoder module of BERT, which outputs the vector representation 𝐡^q_i of each input token w^q_i in the user input and 𝐡^d,j_i of each input token w^d,j_i in the j-th model description.
To obtain a vector representation of the user input, we apply the mean-pooling operation to the hidden states of tokens:
u= Mean-pooling ({𝐡^q_1, ⋯, 𝐡^q_L_q}).
The vector representation v^j of the j-th model description is obtained in the same way.
At inference, when predicting similarities between the two inputs, only the sentence embeddings u and v^j are used in combination with cosine-similarity:
s^j=similarity(u,v^j).
The model that obtains the highest s score will be selected as the answering model.
The descriptions of SkinGPT-4, XrayChat, and PathologyChat in MedAGI were set as below:
SkinGPT-4: SkinGPT is a revolutionary dermatology diagnostic system that utilizes an advanced vision-based large language model to assess skin conditions. By uploading personal skin photos to the system, users receive an autonomous analysis that can identify and categorize various skin conditions, and provide treatment recommendations.
XrayChat: XrayChat is a cutting-edge system that enables interactive, multi-turn conversations about chest X-ray images. Users simply upload a chest X-ray image, ask any question about it, and XrayChat generates informed responses. The system utilizes an X-ray encoder, a large language model, and an adaptor to comprehend the X-ray image and produce accurate and helpful answers.
PathologyChat: PathologyChat is a cutting-edge system that enables interactive, multi-round conversations about stained pathology images. Users simply upload a pathology image, ask any question about it, and PathologyChat generates informed responses.
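A minimal sketch of the adaptive expert selection described above. This is not the authors' released implementation: the concrete checkpoint is our assumption (sentence-transformers' multi-qa-mpnet-base-dot-v1 is one publicly available encoder trained on 215M question-answer pairs), and the descriptions are abbreviated from the ones listed above.

```python
from sentence_transformers import SentenceTransformer, util

# Assumed checkpoint: a publicly available encoder trained on 215M QA pairs.
encoder = SentenceTransformer("multi-qa-mpnet-base-dot-v1")

model_descriptions = {
    "SkinGPT-4": "Dermatology diagnostic system that assesses skin conditions from "
                 "uploaded skin photos and provides treatment recommendations.",
    "XrayChat": "Interactive multi-turn conversations about chest X-ray images, "
                "answering any question about an uploaded chest X-ray.",
    "PathologyChat": "Interactive multi-round conversations about stained pathology "
                     "images uploaded by the user.",
}

names = list(model_descriptions)
desc_embeddings = encoder.encode(list(model_descriptions.values()), convert_to_tensor=True)

def select_expert(user_question: str) -> str:
    """Pick the domain expert whose description is most similar to the question
    (cosine similarity between mean-pooled sentence embeddings, the score s^j above)."""
    q_embedding = encoder.encode(user_question, convert_to_tensor=True)
    scores = util.cos_sim(q_embedding, desc_embeddings)[0]
    return names[int(scores.argmax())]

print(select_expert("Does this chest radiograph show any sign of pneumonia?"))
# expected to route the question to XrayChat
```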
§ CONCLUSION AND DISCUSSION
With the increasing number of domain-specific professional multimodal LLMs in the medical field, combining these models into a unified platform could prove to be a meaningful task. MedAGI is one of the possible solutions to unify domain-specific medical LLMs with the lowest cost.
In conclusion, MedAGI presents a promising paradigm for unifying domain-specific medical large language models (LLMs). By automatically selecting appropriate medical models based on users' questions, MedAGI eliminates the need for users to navigate multiple platforms and instructions, reducing costs and improving user experience. MedAGI represents a significant step towards the realization of medical artificial general intelligence. Its unified approach, scalability, and adaptability make it a compelling solution for the future of medical AI.
§ ACKNOWLEDGEMENTS
Funding: Juexiao Zhou, Xiuying Chen, and Xin Gao were supported in part by grants from the Office of Research Administration (ORA) at King Abdullah University of Science and Technology (KAUST) under award number FCC/1/1976-44-01, FCC/1/1976-45-01, REI/1/5202-01-01, REI/1/5234-01-01, REI/1/4940-01-01, RGC/3/4816-01-01, and REI/1/0018-01-01.
Competing Interests: The authors have declared no competing interests.
Author Contribution Statements: J.Z., X.C. and X.G. conceived of the presented idea. J.Z. and X.C. designed the computational framework and analysed the data. X.G. supervised the findings of this work. J.Z., X.C. and X.G. took the lead in writing the manuscript. All authors discussed the results and contributed to the final manuscript.
Data availability: The data for pathology can be accessed at <https://github.com/masatsuneki/histopathology-image-caption>. The data for XrayChat can be accessed at <https://github.com/UCSD-AI4H/xraychat>. The SKINCON dataset can be accessed at <https://skincon-dataset.github.io/>. The Dermnet dataset can be accessed at <https://www.kaggle.com/datasets/shubhamgoel27/dermnet>. The restricted in-house skin disease images of SkinGPT-4 are not publicly available due to restrictions in the data-sharing agreement.
Code availability: The code proposed by MedAGI is publicly available at <https://github.com/JoshuaChou2018/MedAGI>.
|
http://arxiv.org/abs/2306.08419v1
|
20230614103137
|
Mediated Multi-Agent Reinforcement Learning
|
[
"Dmitry Ivanov",
"Ilya Zisman",
"Kirill Chernyshev"
] |
cs.MA
|
[
"cs.MA",
"cs.GT",
"cs.LG"
] |
§ INTRODUCTION
The cooperation of self-interested agents is an elusive concept to define and measure, especially in temporally and spatially extended environments typical for Multi-Agent Reinforcement Learning (MARL). In these environments, agents may have multiple low-level actions available that may not be inherently cooperative or competitive. The aggregated effect of these actions affects the rewards of all agents and the state of the environment in ways that may not be easy to disentangle. This is further complicated by games lasting for multiple turns, over which the effect of agents' actions accumulates. An approach alluring in its simplicity is to measure cooperation as social welfare, i.e., some aggregate (usually, the sum total) of cumulative rewards of all agents <cit.>. This allows complete freedom in the choice of training procedures, including arbitrary reward manipulations, information sharing, and parameter sharing (examples are provided in Section <ref>).
In this paper, we challenge this view. While maximizing social welfare is a relevant problem of its own, not every solution should count as a solution to cooperation. Unlike fully cooperative MARL settings where all agents share a common objective, mixed (i.e., general-sum) settings imply self-interested agents with clear boundaries. Blending their interests and blurring their boundaries does not make the agents more cooperative – it makes the setting itself more cooperative. Instead, cooperation is only meaningful if it is a consequence of the rational decision-making of strategic agents that act in their own best interests. This is only possible if the agents act in an equilibrium, i.e., a convention that none of the agents can deviate from and increase their individual reward. Defining cooperation as converging to socially beneficial equilibrium (like tit-for-tat) is sometimes referred to as conditional cooperation, as opposed to unconditional cooperation that focuses on maximizing social welfare <cit.>. As we discuss later, very few existing approaches address conditional cooperation.
Designing generic MARL algorithms that promote or create socially beneficial equilibria is a complex and largely unsolved problem. Prospective solutions can come from the field of mechanism design (and related fields, such as information design and contract design) <cit.>. This field studies how to implement trusted entities known as mechanisms that interact with self-interested agents in ways that both consider their incentives and achieve desirable social outcomes. Taking auction design as an example, the agents' incentives can be valuations over an item and a desirable social outcome can be allocating the item to the agent with the highest valuation. From the economic perspective, unconditional cooperation is unrealistic since one cannot arbitrarily modify the rewards or incentives of an economic agent the way one may be able with an artificial agent. In our example, the designer cannot simply ask agents to disclose their valuations and give away the item for free to the highest number because the agents would have no incentives to report truthfully. An example more relevant to MARL is self-driving vehicles <cit.>: while we can encode the rewards of the vehicles, we cannot encode the rewards of the people that use them. Instead, desirable social outcomes should be aligned with agents' incentives – a property known as Incentive-Compatibility (IC). A vast literature on designing IC mechanisms is waiting to be adapted to mixed MARL. This direction has recently gotten attention in several papers that we discuss in Section <ref>.
In this paper, we show how conditional cooperation in MARL can be solved via mediators <cit.>. A mediator is a benevolent entity that can act on behalf of the agents that agree to it. In a mediated augmentation of a game, each agent first decides whether to act on its own or to let the mediator act for it, an action that we call `commit'. We call the set of committed agents a `coalition'. Then, the original game is played between the mediator acting for the coalition and the rest of the agents. Note that the mediator can have a unique policy for each coalition. Crucially, an agent always has the opportunity to refuse to commit (serving as an opportunity to misreport), e.g., if it finds the mediator incapable of achieving an acceptable reward or if it wants to exploit other agents cooperating through the mediator. The potential impact of mediators is illustrated in Table <ref> on the example of the prisoner's dilemma.
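To make the example of Table <ref> concrete, the following sketch builds a mediated prisoner's dilemma with textbook payoffs (the specific numbers are our assumption, not necessarily those used in the table) and a mediator that cooperates on behalf of a full coalition but defects for a lone committed agent; a brute-force check then confirms that mutual commitment is an equilibrium of the augmented game.

```python
import itertools

# Textbook prisoner's dilemma payoffs (row player first); the numbers are assumed.
PD = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
      ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

ACTIONS = ("commit", "C", "D")

def mediator_move(coalition_size):
    """Cooperate on behalf of a full coalition, defect for a lone committed agent
    (this deters free-riding by the non-committed opponent)."""
    return "C" if coalition_size == 2 else "D"

def play(a1, a2):
    """Payoffs of the mediated augmentation; a1, a2 are in ACTIONS."""
    coalition = sum(a == "commit" for a in (a1, a2))
    m1 = mediator_move(coalition) if a1 == "commit" else a1
    m2 = mediator_move(coalition) if a2 == "commit" else a2
    return PD[(m1, m2)]

def is_equilibrium(a1, a2):
    u1, u2 = play(a1, a2)
    return (all(play(d, a2)[0] <= u1 for d in ACTIONS)
            and all(play(a1, d)[1] <= u2 for d in ACTIONS))

print(play("commit", "commit"))   # (3, 3): the cooperative outcome
print([p for p in itertools.product(ACTIONS, repeat=2) if is_equilibrium(*p)])
# -> both ('commit', 'commit') and ('D', 'D') are equilibria; the mediator adds
#    a socially optimal equilibrium without forcing anyone to use it.
```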
We are interested in applying mediators to complex sequential games where optimal policies are unknown in advance and have to be learned. Specifically, we propose to train both agents and the mediator jointly with MARL. This creates challenges atypical for unconditional cooperation: not only the mediator has to learn a policy that maximizes social welfare (for each coalition), but it also has to encourage agents to commit, i.e., ensure compatibility with their incentives. We show how to formulate this as a constrained optimization problem, as well as how to solve this problem using the method of Lagrange multipliers and dual gradient descent.
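The resulting training problem can be attacked with a standard Lagrangian scheme. The toy sketch below is only our illustration of dual gradient descent on a stand-in problem, not the paper's algorithm: the 'welfare' objective plays the role of social welfare, the constraint plays the role of an agent's incentive to commit, the primal step ascends the Lagrangian in the mediator parameters, and the dual step grows the multiplier while the constraint is violated.

```python
def welfare(theta):                 # stand-in for the mediator's social-welfare objective
    return -(theta - 3.0) ** 2

def incentive(theta):               # stand-in for "committing beats deviating"; require >= 0
    return 2.0 - theta

theta, lam = 0.0, 0.0
lr_theta, lr_lam = 0.05, 0.05
for _ in range(2000):
    # primal ascent on the Lagrangian L(theta, lam) = welfare(theta) + lam * incentive(theta)
    grad_theta = -2.0 * (theta - 3.0) - lam
    theta += lr_theta * grad_theta
    # dual step: increase lam while the constraint is violated, keep lam >= 0
    lam = max(0.0, lam - lr_lam * incentive(theta))

print(round(theta, 3), round(lam, 3))   # ~2.0, ~2.0: welfare maximized subject to the constraint
```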
Additionally, we introduce the frequency of agents' commitment as a hyperparameter that we denote as commitment window k. When k=1, agents' commitment lasts for a single time step. When k>1, agents are only allowed to commit each k-th time step and the commitment lasts for k time steps (i.e., the commitment is more committing). As a result, the mediator has a higher potential to maximize social welfare for larger k.
There are multiple advantages to our approach. First, any emergent behaviour is by design an equilibrium because agents act in their own best interests (i.e., maximize their own rewards) and can deviate if they find it beneficial. Second, <cit.> show that in symmetric games there always exists a mediator with a socially optimal strategy such that committing is an equilibrium for the agents. This means that there is no loss in achievable social welfare when using mediators compared to unconditionally maximizing social welfare. For asymmetric games, we empirically demonstrate the power of mediators. Third, the mediator's strategy is fair in the sense that each committed agent receives the expected payoffs that are at least as high as if it acted on its own. This is unlike the existing techniques that artificially encode fairness into a centralized objective <cit.>. Fourth, the mediator only promotes cooperation between the committed agents and deters the free-riding of the rest. These are the properties of reciprocity that naturally emerge from the Incentive-Compatibility of our solution, which is unlike the existing attempts at training reciprocal agents based on arbitrary reward sharing and heuristics <cit.>. Fifth, the rate of commitment to the mediator presents a generic measure of cooperation.
We experimentally validate our procedure to train agents with the mediator in several variants of prisoner's dilemma <cit.> and public good game <cit.>, including one-shot and sequential games. For each game, we find that naively training the mediator to maximize social welfare may result in agents refusing to commit, either due to misalignment of their and the mediator's interests or due to them trying to exploit other committed agents. We show that this can be addressed by considering agents' incentives through additional constraints when training the mediator. Additionally, we investigate the effect of varying the commitment window k and find that higher k gives the mediator more power.
§.§ Related Work
<cit.> distinguish two kinds of cooperation in MARL settings. Unconditional cooperation refers to cooperation independent of what the opponents are doing. Reciprocity-based cooperation refers to cooperation iff others cooperate, e.g., tit-for-tat in the iterated prisoner's dilemma.
We use a similar but more general classification of unconditional and conditional cooperation that better reflects the literature. We define the problem of unconditional cooperation as the maximization of social welfare. The problem of conditional cooperation additionally has a condition that no agent has incentives to deviate, i.e., that the agents are in equilibrium.
Unconditional cooperation.
The majority of unconditional cooperation techniques are based on modifying or replacing agents' rewards with `intrinsic' preferences <cit.>. The intrinsic preferences can be either rewards of other agents or rewards learned by agents or a third party to guide agents to social welfare maximizing outcomes. The most direct approach is to train each agent to directly optimize social welfare, but this is susceptible to issues like free-riding and credit assignment that can be addressed by exploiting reward decomposition available in mixed environments. These techniques hardwire other-regarding preferences into agents and are typically not concerned with equilibria from the perspective of maximizing the original rewards.
The few alternative techniques typically require parameter or information sharing. An example of the former is the parameterization of all agents with identical neural networks <cit.>. This technique sidesteps the conflict of interests by hardwiring reciprocity into agents: an attempt to exploit the opponent is automatically reflected back onto the agent itself. An example of the latter is the social influence that uses a communication channel to maximize impact on message recipients <cit.>. This technique ignores the potential incentives of agents to manipulate each other through communication channels.
While these techniques are typically framed as solutions to cooperation, the problem they solve is more akin to the fully-decentralized MARL <cit.>. In this setting, agents can have varying reward functions, but the collective goal is to maximize the globally averaged return over all agents, i.e., the social welfare. The agent-specific reward functions serve as an instrument to address credit assignment, so it indeed makes sense to not be concerned with equilibria. In contrast, mixed MARL implies selfish interests, and while social welfare is undoubtedly a useful performance measure, treating its unconditional maximization as a unanimously shared goal is a misleading shortcut.
Conditional cooperation.
Some approaches to conditional cooperation look for existing equilibria with high social welfare. Learning with Opponent-Learning Awareness (LOLA) and its modifications <cit.> leverage alternative gradient updates that shape the opponent's learning to guide it to cooperative equilibria. As a result, it can learn reciprocal strategies, e.g., tit-for-tat in the repeated prisoner's dilemma. LOLA has multiple limitations: it is only applicable to two-player games, requires access to the transition dynamics, and assumes that the opponent learns using first-order gradient-based methods. Furthermore, LOLA updates require read access to the opponent's parameters. While this can be circumvented by learning the opponent's model based on their behaviour, it comes at the expense of performance.
Other approaches change the rules of the game such that self-interested agents prefer to cooperate, i.e., such that outcomes with high social welfare become equilibria. At this point, the line between unconditional and conditional cooperation may become blurry. After all, modifying an agent's rewards with intrinsic preferences could be reinterpreted as paying the agent extrinsically by a third party. However, once additional rewards are considered extrinsic payments, a natural question is what is the minimal payment scheme such that agents still cooperate. None of the papers described in the unconditional cooperation section ask this question.
The question of optimal payments is central to the economic field of contract design <cit.>. Adapting contract design to MARL is attempted in the concurrent work of <cit.>. This work proposes for one of the agents to take the role of the principal that may pay other agents. However, their empirical algorithm assumes that the payment condition is pre-determined and only learns the payment amount. Designing a generic algorithm that can learn optimal payment schemes (both conditions and amounts) by either one of the agents or a third party is an open problem.
Other solutions to conditional cooperation can be considered reward redistribution <cit.>, which can also be framed as taxation <cit.>; and similarity-based cooperation <cit.>, which is an extension of program equilibrium <cit.>.
In recent years, there has been a rising interest in communication under competition in MARL <cit.>. In contrast to communication in fully cooperative environments, mixed environments may incentivize self-interested agents to manipulate others through their messages, preventing reliable and mutually beneficial communication from being established. A principled way to resolve this issue could potentially come from the field of Bayesian Persuasion <cit.>. There have been extensions of Bayesian Persuasion to online multi-receiver settings <cit.>, as well as to MARL <cit.>.
§ PROBLEM SETUP
§.§ Markov Games
A Markov game is a standard formalization of spatially and temporally extended environments typical for MARL <cit.>. It is defined as a tuple 𝒢 = (S, N, (𝒜_i)_i ∈ N, (𝒪_i)_i ∈ N, T, (r_i)_i ∈ N). Let S be the set of all possible states s, N be the set of agents, and 𝒜_i be the set of actions a_i available to agent i in all states. Let 𝒪_i: S → O_i be the observation function, where O_i is the set of observations o_i of agent i. Let T: S × (𝒜_i)_i ∈ N→Δ(S) be the transition function, where Δ denotes a set of discrete probability distributions. This function specifies the effect of the agents' actions on the state of the environment. We index the sequences of sampled transitions with time-steps t. Let r_i: S × (𝒜_j)_j ∈ N→𝒫(ℛ) be the reward function for each agent i, where 𝒫 is a set of continuous probability distributions, ℛ⊆ℝ. Let r̃_i, t∼ r_i(s_t, a_t) denote sampled rewards.
Let R̃_i, t = ∑_l=t^∞γ^l-tr̃_i, l be the discounted cumulative reward, a.k.a. the return, where γ∈ [0, 1) denotes the discount factor. Let π_i: O_i →Δ(𝒜_i) be the policy of agent i. Let V_i(o_i) = 𝔼_π, r_i, T, p(s |π, 𝒪_i(s) = o_i)[R̃_i] be the value function. The agent seeks the policy π_i that maximizes the value V_i in each observation.
§.§ Mediators
A mediator can be viewed as an additional entity that may act in the game on behalf of a subset of agents. The agents interact with the mediator by optionally sending it messages, and the mediator acts for those agents that sent it a message. Crucially, an agent may refrain from sending a message and act independently from the mediator. Formally, <cit.> define mediator as a tuple ℳ=((M_i)_i ∈ N, c = (c_C)_∅≠ C ⊆ N), where M_i is a finite set of messages that agent i may send to the mediator, C is a subset of agents that sent messages to the mediator referred to as coalition, and c_C: M_C →Δ((𝒜_i)_i ∈ C) is the correlated strategy (the joint policy) for the coalition C. Each agent has a utility function over the outcomes, the expectation of which it rationally maximizes.
A special case that we focus on is the minimal mediator. A mediator is called minimal if each message space M_i is a singleton, meaning that agents' interaction with the mediator is limited to agreeing to enter the coalition. We refer to this action as committing. To uniquely define a minimal mediator, specifying M_i becomes unnecessary. A strategy of the mediator that makes a unanimous commitment an equilibrium is called mediated equilibrium. Crucially, <cit.> show that any mediated equilibrium can be implemented by a minimal mediator, as well as that mediated equilibrium that maximizes social welfare always exists in symmetric games.
Note that an agent cannot misreport its commitment to the mediator, i.e., deviate while pretending to commit. Instead, the action of refusing to commit itself serves as misreporting. A weaker variant of a mediator that only recommends actions is also explored in the economic literature <cit.>. Applying this idea to MARL could be an interesting new direction.
It is also important to note that <cit.> primarily explore mediators through the lens of strong mediated equilibria, i.e., equilibria robust to deviations of groups of agents. Since the number of different groups grows exponentially with the number of agents, finding strong mediated equilibria with RL is a challenging problem. We leave it to future work.
§.§ Markov Mediators
The approach of <cit.> implies fixed strategies of the mediator. Instead, we treat the mediator as a separate agent with its own goals and train it with RL alongside other agents. To this end, we introduce Markov mediators.
Let ℳ=((𝒪_C^M)_∅≠ C ⊆ N, (r_C^M)_∅≠ C ⊆ N, k) be minimal Markov mediator. Let 𝒪_C^M: S → O_C^M be the mediator's observation function for coalition C, where O_C^M is the set of the mediator's observations. As the observations, we will use a tuple o_C^M = ((o_i)_i ∈ C, C). Note that alternative choices of the mediator's observations like o_C^M = ((o_i)_i ∈ N, C) or o_C^M = (s, C) create an asymmetry of information between the mediator and the agents, which may serve as an additional incentive for the agents to commit. However, this would require access to additional information during execution.
Let r_C^M: S × (𝒜_i)_i ∈ N→𝒫(ℛ) be the mediator's reward function for coalition C. We are only concerned with mediators with the goal of increasing the utilitarian social welfare of the agents. For this reason, as the mediator's reward we will use the sum of rewards of the agents in the coalition: r̃_C, t^M = ∑_i ∈ Cr̃_i, t. Let k ∈ℤ^+ be the commitment window. Each k steps of the game, an agent may either commit to the mediator or choose to play independently for the next k steps. Agents can only commit when t is divisible by k.
We define the policy π_C^M: O_C^M →Δ((𝒜_i)_i ∈ C), the return R̃_C, t^M = ∑_l=t^∞γ^l-tr̃_C, l^M = ∑_i ∈ CR̃_i, t, and the value V_C^M(o_C^M) = 𝔼[R̃_C^M] = ∑_i ∈ C𝔼 [R̃_i] of the mediator for each coalition C similarly to those of the agents. Notice that the return and the value of the mediator decompose into the respective sums over the coalition.
§ ALGORITHM
We now discuss how to train the Markov mediator with RL. We first describe our practical implementations of agents and the mediator, including neural architectures and loss functions, and then dive deeper into potential objectives for the mediator. Note that we write all expressions as expectations, but in practice, these are approximated as empirical averages over sampled transitions.
§.§ Practical Implementations
Both the agents and the mediator are trained via Actor-Critic frameworks <cit.>. The actor represents the policy π(o), whereas the critic represents an approximation of the value function Ṽ(o). Both actor and critic can be parameterized with neural networks <cit.>. The architectures are illustrated in Figures <ref> and <ref> and are described below.
Agents
The agents are trained independently: an agent i has its own actor π_θ_i(o_i) and critic Ṽ_ϕ_i(o_i), respectively parameterized by θ_i and ϕ_i, that are trained only on its own experience. The respective loss functions L are the negated policy gradient for the actor and the squared temporal difference for the critic:
L(θ_i) = - 𝔼 (r̃_i, t + γṼ_ϕ_i(o_i, t+1) - Ṽ_ϕ_i(o_i, t)) logπ_θ_i(o_i, t)
L(ϕ_i) = 𝔼 (r̃_i, t + γṼ_ϕ_i(o_i, t+1) - Ṽ_ϕ_i(o_i, t))^2
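For concreteness, the following is a minimal PyTorch sketch of these two losses for a single agent and a batch of one-step transitions. The `actor`/`critic` interfaces and tensor names are illustrative assumptions, not our released implementation.

```python
import torch
import torch.nn.functional as F

def actor_critic_losses(actor, critic, obs, action, reward, next_obs, gamma=0.99):
    """One-step actor-critic losses for a single selfish agent.

    actor(obs)  -> action logits, shape [B, num_actions]
    critic(obs) -> state value,   shape [B]
    """
    value = critic(obs)                                   # V(o_t)
    with torch.no_grad():
        next_value = critic(next_obs)                     # V(o_{t+1}), no gradient through the target
        td_error = reward + gamma * next_value - value.detach()   # advantage estimate

    log_prob = F.log_softmax(actor(obs), dim=-1)          # log pi(.|o_t)
    log_prob_a = log_prob.gather(1, action.unsqueeze(1)).squeeze(1)

    actor_loss = -(td_error * log_prob_a).mean()          # negated policy gradient
    critic_loss = (reward + gamma * next_value - value).pow(2).mean()   # squared TD
    return actor_loss, critic_loss
```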
MARL literature routinely leverages parameter and experience sharing for both actor <cit.> and critic <cit.>, but we opt out of these techniques to ensure that agents are selfish and individual, and therefore that any cooperation observed is rational rather than a consequence of hardwired reciprocity (as discussed earlier).
We apply minimal changes to adapt agents to the presence of the mediator. When k=1, the only change is that each agent's actor is augmented with an additional action, i.e., to commit to the mediator. The effect of this action is that the mediator takes control over the agent for the current time-step.
When k > 1, the mediator acts for the agent for k steps at a time, and the commitment action is only available to the agents every k time-steps. This necessitates several additional changes. First, the current time-step t is concatenated to an agent's observation to let it know whether it can commit. Second, in the succeeding k-1 states after an agent commits, it effectively acts off-policy and therefore is not trained. Third, committing for k steps effectively transitions the agent from s_t directly to s_t+k, and in the process yields the total discounted reward of ∑_l=t^t+k-1γ^l-tr̃_i, l. For this reason, (only) when an agent commits, the 1-step temporal difference in the loss functions (<ref>) and (<ref>) is replaced with the k-step temporal difference: (r̃_i, t + γṼ_i, t+1 - Ṽ_i, t) → (∑_l=t^t+k-1γ^l-tr̃_i, l + γ^kṼ_i, t+k - Ṽ_i, t).
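A sketch of the k-step target used in this case (again illustrative; `rewards` holds the k rewards collected while the mediator acted for the agent):

```python
import torch

def k_step_td_target(rewards, value_t_plus_k, gamma=0.99):
    """Discounted k-step target used (only) when an agent commits for k steps.

    rewards:        tensor [k] with r_t, ..., r_{t+k-1}
    value_t_plus_k: scalar tensor V(o_{t+k}) used for bootstrapping
    """
    k = rewards.shape[0]
    discounts = torch.tensor([gamma ** l for l in range(k)], dtype=rewards.dtype)
    return (discounts * rewards).sum() + gamma ** k * value_t_plus_k

# The k-step TD error is then: k_step_td_target(...) - V(o_t)
```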
Mediator
It is well-known that learning the joint policy in a centralized way is unfeasible due to the exponential scaling of the action space with the number of agents. For this reason, we fully factorize the joint policy of the mediator for a coalition: π^M_C(o_C^M) = ∏_i ∈ Cπ_i^M (o_i^M). The mediator's policies for all agents are parameterized with a single neural network θ^M that receives as input the observation o_i^M = (o_i, C) and the agent's index i. For convenience, we denote the i-th agent policy π_θ^M ((o_i, C, i)) as π_θ_i^M (o_i, C). The objectives for π_θ_i^M are discussed in the next subsection.
The mediator's critic receives coalition C and observations of all agents (o_i)_i ∈ N as an input and simultaneously outputs values for all agents, both in and out of the coalition. While this formulation requires centralized access to all observations and rewards, note that the critic is only required during training. Access to individual value functions for each agent will be crucial in the next subsection when estimating constrained objectives for the mediator. For convenience, we denote a critic's output Ṽ_ϕ^M(((o_i)_i ∈ N, C, i)) as Ṽ_ϕ_i^M(o, C). As usual, it is trained to minimize squared temporal difference:
L(ϕ_i^M) = 𝔼 (r̃_i, t + γṼ_ϕ_i^M(o_t+1, C_t+1) - Ṽ_ϕ_i^M(o_t, C_t))^2
Note that our implementation is intentionally minimalistic. The literature suggests a multitude of sophisticated solutions for various problems typical for MARL. Instead of limiting the mediator to a fully factorized policy, sampling from a joint policy is possible with techniques based on coordination graphs <cit.>. Credit assignment could be addressed by using specialized critics that isolate contributions of each agent to social welfare <cit.>. While these techniques could potentially improve the mediator, our focus is on introducing mediators to MARL (and in turn promoting the idea of applying mechanism design in MARL), not on searching for the best possible implementation of the mediator. We intend to isolate the implementation choices that are necessary for the mediator to function, and complicating the mediator is in conflict with this intent.
§.§ Objectives for the Mediator
Here, we first propose a mediator we denote as Naive that simply maximizes the social welfare of the agents in the coalition. We then derive the constraints necessary to incentivize the agents to commit, as well as the training procedure that approximately satisfies these constraints for k=1. Finally, we discuss how the training procedure should be modified for k > 1, as well as the potential effect of k.
Naive Mediators
The Naive mediator is oblivious to the incentives of the agents. Its goal is to greedily maximize the utilitarian social welfare for any given coalition. For the mediator's policy π_i^M for some agent i, this goal can be formulated as the following objective:
∀o_t, (C_t | i ∈ C_t): max_π_i^M∑_j ∈ C_t V_j(o_j,t, C_t)
Each value function V_j(o_j,t, C_t) is approximated with the j-th output of the mediator's critic Ṽ_ϕ_j^M(o_t, C_t). The policy π_i^M of the mediator for the agent i is trained via policy gradient:
L(θ_i^M) = -𝔼 [ ∑_j ∈ C_t (r̃_j, t + γṼ_ϕ_j^M (o_t+1, C_t+1)
- Ṽ_ϕ_j^M (o_t, C_t) ) ] logπ_θ_i^M (o_i, t, C_t)
Constraints
Since the Naive mediator greedily optimizes social welfare, there is no guarantee that its optimal policy is a mediated equilibrium, i.e., that the agents always prefer committing. To fix this, the mediator's policy should satisfy certain constraints. Intuitively, self-interested agents only commit if committing serves their best selfish interests. In RL, the quantification of an agent's interests is a value function, so committing should yield a higher value than not committing for each agent. Below we mathematically express this condition. For convenience, we divide the agents into two groups: those in and those outside the coalition.
On the one hand, an agent that enters the coalition should benefit from it and receive payoffs at least as high as it (counterfactually) would outside the coalition:
∀ i, o_i, t: 𝔼_(C_t | i ∈ C_t) [V_i(o_i, t| C_t)] ≥𝔼_(C_t | i ∈ C_t) [V_i(o_i, t| C_t ∖{i})]
On the other hand, an agent that chooses to act on its own should not be able to exploit the mediator and should receive payoffs not higher than if it had (counterfactually) committed:
∀ i, o_i, t: 𝔼_(C_t | i ∉ C_t) [V_i(o_i, t| C_t ∪{i})] ≥𝔼_(C_t | i ∉ C_t) [V_i(o_i, t| C_t)]
We respectively refer to the constraints (<ref>) and (<ref>) as Incentive-Compatibility (IC) and Encouragement (E) constraints. Notice that they are in expectation over the distribution of coalitions (generated by the agents' policies), rather than for each coalition. This is because the agents choose actions before the coalition is formed.
Also, notice that the constraints are feasible since they can be exactly satisfied by the mediator copying the policies of the agents (thus having no effect). In other words, mediated equilibrium always exists. We now discuss how to train a mediator that satisfies these constraints while maximizing social welfare.
Incentive-Compatibility Constraint
To incorporate the constraints (<ref>) into the objective (<ref>), we can apply the method of Lagrange multipliers, which results in the following dual objective:
∀o_t, (C_t | i ∈ C_t): max_π_i^M[ ∑_j ∈ C_t V_j(o_j,t, C_t) + λ^ic_i(o_i, t) V_i(o_i,t, C_t) ]
where λ^ic_i(o_i, t) ≥ 0 are Lagrange multipliers. Note that the counterfactual value V_i(o_i, t| C_t ∖{i}) from (<ref>) can be omitted from the dual due to not depending on π_i^M. This objective can be maximized using policy gradient:
L(θ_i^M) = - 𝔼 [ ∑_j ∈ C_t (r̃_j, t + γṼ_ϕ_j^M (o_t+1, C_t+1) - Ṽ_ϕ_j^M (o_t, C_t))
+ λ^ic_i (o_i, t) (r̃_i, t + γṼ_ϕ_i^M (o_t+1, C_t+1) - Ṽ_ϕ_i^M (o_t, C_t)) ]
logπ_θ_i^M (o_i, t, C_t)
To mitigate credit assignment, we deliberately incorporate only the IC constraint of the i-th agent into the objective of π_i^M but not the IC constraints of all agents. This way, the mediator only has to ensure that its actions for an agent are compatible with the incentives of this agent.
To find λ^ic_i(o_i, t), we employ dual gradient descent <cit.>:
logλ^ic_i(o_i, t) ←logλ^ic_i(o_i, t) - α [Ṽ_ϕ_i^M(o_t, C_t) - Ṽ_ϕ_i^M(o_t, C_t ∖{i})]
where α is the learning rate. The intuition behind this update is that the second term on the right-hand side on average equals zero when the constraint is satisfied exactly. In practice, we update λ^ic_i once according to this update each time we update actors and critics. To ensure that λ^ic_i is non-negative, we update its logarithm instead. Furthermore, in our experiments, we find approximation as a scalar λ^ic_i(o_i, t) = λ^ic_i to be sufficient. Other examples of applying dual gradient descent to enforce a constraint can be found in <cit.>.
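A sketch of one such update with the scalar approximation λ^ic_i(o_i,t)=λ^ic_i; the two batches of critic outputs (factual and counterfactual coalition) are assumed to be precomputed:

```python
import torch

def update_log_lambda_ic(log_lambda_ic, v_with_i, v_without_i, alpha=1e-3):
    """One dual-gradient step on log lambda^ic (scalar approximation).

    v_with_i:    critic estimates V_i(o, C) with agent i in the coalition, shape [B]
    v_without_i: counterfactual estimates V_i(o, C \ {i}), shape [B]
    """
    # When committing is (on average) at least as good as deviating, the bracketed term is
    # non-negative and lambda^ic decays; otherwise the constraint term is strengthened.
    with torch.no_grad():
        violation = (v_with_i - v_without_i).mean()
    return log_lambda_ic - alpha * violation
```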
Note that the update rule (<ref>) involves a counterfactual value Ṽ_ϕ_i^M(o_t, C_t ∖{i}), i.e., what the value of the i-th agent would be if it did not enter this specific coalition in this specific state. Estimating it is only possible thanks to our implementation of the mediator's critic, i.e., our choice to train it to simultaneously estimate values of agents both in and out of the coalition (see Section <ref>).
Encouragement Constraint Same derivations as for the IC constraint apply to the E constraint. The constraints (<ref>) for each agent outside the coalition can be incorporated into the policy gradient (<ref>) of each agent in the coalition i ∈ C_t:
L(θ_i^M) = - 𝔼 [ ∑_j ∈ C_t (r̃_j, t + γṼ_ϕ_j^M (o_t+1, C_t+1) - Ṽ_ϕ_j^M (o_t, C_t) )
- ∑_j ∉ C_tλ^e_j (o_j, t) (r̃_j, t + γṼ_ϕ_j^M (o_t+1, C_t+1) - Ṽ_ϕ_j^M (o_t, C_t)) ]
logπ_θ_i^M (o_i, t, C_t)
The Lagrange multipliers λ^e_j(o_j, t) are also learned via dual gradient descent:
logλ^e_j(o_j, t) ←logλ^e_j(o_j, t) - α [Ṽ_ϕ_j^M(o_t, C_t ∪{j}) - Ṽ_ϕ_j^M(o_t, C_t)]
The Lagrange multipliers are likewise approximated as scalars: λ^e_j(o_j, t)=λ^e_j.
Constrained mediators
Both IC and E constraints should be applied simultaneously to train the Constrained mediator.
L(θ_i^M) = - 𝔼 [ ∑_j ∈ C_t (r̃_j, t + γṼ_ϕ_j^M (o_t+1, C_t+1) - Ṽ_ϕ_j^M (o_t, C_t))
+ λ^ic_i (o_i, t) (r̃_i, t + γṼ_ϕ_i^M (o_t+1, C_t+1) - Ṽ_ϕ_i^M (o_t, C_t))
- ∑_j ∉ C_tλ^e_j (o_j, t) (r̃_j, t + γṼ_ϕ_j^M (o_t+1, C_t+1) - Ṽ_ϕ_j^M (o_t, C_t)) ]
logπ_θ_i^M (o_i, t, C_t)
It is interesting to note that the loss (<ref>), obtained as a dual of a constrained objective, is also a (negated) policy gradient for a mixture of rewards [∑_j ∈ C_tr̃_j, t + λ^ic_i, tr̃_i, t - ∑_j ∉ C_tλ^e_j, tr̃_j, t].[To see this, apply the definition of the value function, change the order of expectation and summation in the value functions, and rearrange terms.] One implication is that socially beneficial equilibria can be found by simply optimizing a weighted sum of rewards, albeit with non-stationary weights. Another implication is that the E constraint effectively lowers the rewards of agents outside the coalition. We stress that this does not mean that the agents outside the coalition are punished, but rather that the agents inside the coalition cooperate less frequently if the coalition is not full. This ensures that an agent cannot deviate to enjoy the cooperation of others, i.e., deters free-riding.
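To make this reinterpretation concrete, here is a sketch of the effective per-step reward whose (negated) policy gradient matches the loss above; names and shapes are illustrative:

```python
import torch

def mediator_mixed_reward(rewards, in_coalition, agent_i, lambda_ic, lambda_e):
    """Effective reward for the mediator's policy acting for coalition agent i.

    rewards:      tensor [N] of per-agent rewards at time t
    in_coalition: boolean tensor [N], True for committed agents
    agent_i:      index of the coalition agent the mediator acts for
    lambda_ic:    scalar IC multiplier of agent i
    lambda_e:     tensor [N] of E multipliers (only entries outside the coalition are used)
    """
    welfare_in = rewards[in_coalition].sum()                              # coalition social welfare
    ic_bonus = lambda_ic * rewards[agent_i]                               # agent i's own interest
    e_penalty = (lambda_e[~in_coalition] * rewards[~in_coalition]).sum()  # deters free-riding
    return welfare_in + ic_bonus - e_penalty
```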
Commitment Window k
When k=1, unanimous commitment requires both constraints to be satisfied at each time-step, which may limit the margin of social welfare improvement over selfish agents. In contrast, when k > 1, the constraints only need to be satisfied on average over the periods of k time-steps, since agents commit for these periods. On the example of IC constraint (<ref>), k > 1 requires constraint reformulation:
∀ i, (o_i, t | t mod k = 0):
𝔼_(C_t | i ∈ C_t) [V_i(o_i, t| C_t)] ≥𝔼_(C_t | i ∈ C_t) [V_i(o_i, t| C_t ∖ i)]
where the coalition C_t is fixed for the next k time-steps: C_t = C_t+1 = … = C_t+k-1. This constraint implies that the same λ^ic_i(o_i, t) is used in the dual objective (<ref>) at the time-steps t ≤ l < t + k. The update rules of the Lagrange multipliers are modified accordingly:
log λ^ic_i(o_i, t) ←logλ^ic_i(o_i, t)
- α∑_l=t^t+k-1γ^l-t [Ṽ_ϕ_i^M(o_l, C_l) - Ṽ_ϕ_i^M(o_l, C_l ∖{i})]
log λ^e_j(o_j, t) ←logλ^e_j(o_j, t)
- α∑_l=t^t+k-1γ^l-t [Ṽ_ϕ_j^M(o_l, C_l ∪{j}) - Ṽ_ϕ_j^M(o_l, C_l)]
In the extreme case when the commitment window covers the entire episode (k = ∞) and the starting state s_0 is deterministic, the dependency of Lagrange multipliers on the observation can be dropped altogether. We respectively denote the mediators that require constraint satisfaction at each time-step (k=1), every k time-steps (k>1), and once per episode (k = ∞) as ex-post, interim, and ex-ante.
Notes on the training process.
First, in our implementations of the mediator, no specific learning dynamics are assumed to be adopted by the agents. We only train the agents within the same RL framework as the mediator out of convenience. The only assumption is that the agents are myopically rational, i.e., they increase the probability of beneficial actions as they learn (and thus commit when they are properly incentivized to do so). This is unlike approaches to conditional cooperation like LOLA that assume specific learning dynamics.
Second, because the mediator's policy is trained for each coalition rather than only the full coalition (see equations <ref>, <ref>), our objective formulation for the mediator is stronger than finding a socially beneficial mediated equilibrium. Instead, the mediator learns to maximize social welfare while satisfying constraints even for coalitions that are not full. On the one hand, this makes cooperation induced by the mediator robust to rare defectors. On the other hand, discovering that simultaneous commitment is beneficial requires some (possibly spontaneous) coordination of the agents, which is easier if non-unanimous commitment is also incentivized. In short, this property is useful during both training and execution.
§ EXPERIMENTS
In this section, we experimentally investigate the capabilities of different proposed versions of the mediator in Prisoner's Dilemma (PD), Public Good Game (PGG), and iterative PGG. Technical details and hyperparameters, as well as additional experiments, are reported in the Appendix.
In the Introduction, we have extensively discussed the issues that arise in mixed MARL, i.e., finding policies that maximize social welfare and incentivizing agents to follow these policies, which we respectively denote as Efficiency (Eff) and Incentive-Compatibility (IC) for convenience. While unconditional cooperation only addresses Eff, conditional cooperation addresses both and thus is substantially more difficult to resolve. Given that unconditional cooperation is already widely explored in the literature, we primarily focus on IC. To this end, we empirically investigate matrix games and their sequential variants that are trivial from the perspective of Eff but clearly highlight the specific challenges when dealing with IC, as well as the power of mediators in dealing with these challenges.
A consequence of this decision is that comparing with baselines becomes redundant. We design each investigated game such that we know what the socially optimal policy looks like and how much social welfare it achieves. By design, this policy is trivial to find for any solution to unconditional cooperation. We note that the next logical step is validating our mediators in more complex environments where both Eff and IC issues are non-trivial to solve, in which case comparing with baselines also makes sense. We leave this direction as future work.
§.§ Prisoner's Dilemma
The payoff matrix is presented in Table <ref> (a). Despite the cooperation being mutually beneficial, defecting is the dominant strategy and mutual defection is the only equilibrium. Table <ref> (b) presents socially optimal mediated equilibrium in PD implemented by a mediator that cooperates only when both agents commit. Notice that despite this mediator greedily maximizing social welfare for each coalition, both IC and E constraints are satisfied for both agents because committing is a dominant strategy. Therefore, one can expect even a Naive mediator to establish cooperation.
The results are presented in Table <ref>. In accordance with our expectations, in the absence of the mediator, both agents converge to defection. When the game is augmented with a Naive mediator, the agents almost exclusively commit, and the mediator learns to only cooperate when both agents commit.
§.§ Public Good Game
In two-player games, the mediator either acts for one of the agents, in which case the best it can do is maximize the agent's welfare, or acts for both agents, in which case no agent is outside the coalition. The consequence is that mediator's actions are never beneficial for agents outside the coalition and therefore the Encouragement constraint is always satisfied in two-agent games. This is not the case in games with more than two agents where as soon as some two agents start cooperating, the rest of the agents may try to exploit their cooperation for higher personal gains. This issue is known as free-riding. Using PGG as an example, we illustrate how free-riding emerges when the game is augmented with a Naive mediator, and how this issue can be mitigated by enforcing the Encouragement constraint on the mediator's policy. We refer to such a mediator as Constrained. We first investigate one-step PGG and then move on to our variant of iterative PGG.
One-step Public Good Game
N agents are endowed with a unit of utility and have a choice whether to contribute it to the public good or to defect. The public good is formed as the total contribution of agents, multiplied by some 1 < n < N, and is uniformly redistributed among all agents. The reward of each agent is r_i = n/N∑_j=1^N c_j - c_i, where c_i=1 iff i contributes.
Let N=3, n=2. Like in PD, the dominant strategy in the absence of a mediator is to defect. Consider the Naive mediator. Its strategy is to contribute with all agents in the coalition if it consists of at least two agents: π^M(c | |C| = 1)=0, π^M(c | |C| ≥ 2)=1. Given this mediator, the equilibrium is for only two of three agents to commit and form a coalition. To see this, consider the perspectives of all agents. From the perspective of an agent in this coalition, committing causes it to get its share of the public good equal to r(m)=1/3, whereas defecting would lower the reward to r(d)=0 by causing the other agent to defect. From the perspective of the agent outside the coalition, defecting lets it enjoy both its endowment and its share of the public good r(d)=4/3, which is better than committing with other agents and receiving r(m)=1. Now consider the Constrained mediator. To deter free-riding, the Constrained mediator contributes with a specific probability when the coalition consists of two agents: π^M(c | |C| = 1)=0, π^M(c | |C| = 2)=0.75, π^M(c | |C| = 3)=1. On the one hand, this results in a lower expected reward for the two agents in the coalition: r(m)=0.25. On the other hand, this also lowers the expected reward of the agent outside the coalition to the point where it is indifferent whether it commits or not: r(d) = r(m) = 1. Notice that further decreasing π^M(c | |C| = 2) results in lower social welfare for the agents in the coalition, whereas increasing it encourages the third agent to defect. Thus, this mediator implements the socially optimal equilibrium. We now verify that our constrained objective allows training such a mediator.
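The hand-derived payoffs above can be double-checked with a few lines of plain Python; the helper names and the dictionary encoding of the mediator's contribution probability per coalition size are ours and purely illustrative:

```python
N, n = 3, 2  # three agents, public-good multiplier 2

def pgg_rewards(contrib):
    """Per-agent rewards in one-step PGG; contrib[i] = 1 iff agent i contributes."""
    pot = n / N * sum(contrib)
    return [pot - c for c in contrib]

def expected_rewards(commits, p_contribute):
    """Expected rewards when the mediator contributes with all committed agents with a
    probability depending on coalition size, while non-committed agents defect."""
    p = p_contribute.get(sum(commits), 0.0)
    coop = list(commits)          # the coalition contributes
    no_coop = [0] * N             # nobody contributes
    return [p * a + (1 - p) * b for a, b in zip(pgg_rewards(coop), pgg_rewards(no_coop))]

naive = {1: 0.0, 2: 1.0, 3: 1.0}
constrained = {1: 0.0, 2: 0.75, 3: 1.0}

print(expected_rewards([1, 1, 0], naive))        # [0.333.., 0.333.., 1.333..]  free-riding pays
print(expected_rewards([1, 1, 0], constrained))  # [0.25, 0.25, 1.0]            free-riding deterred
print(expected_rewards([1, 1, 1], constrained))  # [1.0, 1.0, 1.0]              full commitment
```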
The results are presented in Table <ref>. For N=3, n=2, the learned policies match the equilibrium derived above: without mediator agents always defect, the Naive mediator encourages two agents to commit but is exploited by the third agent, and the Constrained mediator converges to the socially optimal equilibrium. It is especially surprising that the Constrained mediator learns the optimal mixed policy so precisely, which is only possible in a non-stationary environment where the moment the mediator deviates, it is corrected by the agents trying to exploit it. For settings N=10, n=2 and N=25, n=5, the picture is generally the same: only the Constrained mediator encourages commitment from all agents by learning a reciprocal policy that punishes free-riding.
Iterative Public Good Game
The game lasts for 10 turns. In the beginning, each agent is endowed with 1 unit of utility. An agent's observation is a tuple of its current endowment and the turn number. Each turn, each agent chooses whether to contribute 50% of its current endowment to the public good, and the resulting payoffs are preserved throughout the turns. This creates a compounding effect from contributing to the public good that can be exploited for a massive increase in welfare over the duration of the game if all agents consistently contribute. On the other hand, the state space is no longer trivial, and more complex strategies may emerge.
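A short simulation illustrates the compounding effect; we assume here that each turn's payoff is added back to the agent's endowment, which is our reading of the description above rather than a specification taken from the environment code:

```python
def iterative_pgg(policy, N=3, n=2, turns=10):
    """Simulate iterative PGG; policy(endowments, t) returns a 0/1 contribution vector."""
    e = [1.0] * N
    for t in range(turns):
        contrib = policy(e, t)
        pot = n / N * sum(0.5 * e[i] * contrib[i] for i in range(N))   # redistributed share
        e = [e[i] - 0.5 * e[i] * contrib[i] + pot for i in range(N)]   # payoffs are preserved
    return e

always = lambda e, t: [1] * len(e)
never = lambda e, t: [0] * len(e)
print(sum(iterative_pgg(always)))  # ~173: endowments grow by 50% per turn under full cooperation
print(sum(iterative_pgg(never)))   # 3.0: no growth without contributions
```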
The results are presented in Table <ref>. For N=3, n=2, the Naive ex-post (k=1) mediator behaves similarly to the Naive mediator in one-step PGG: it consistently encourages two agents to commit but is exploited by the third agent. The Constrained ex-post mediator mitigates this issue, but only partially, which might be due to our approximation of Lagrange multipliers as constants, or simply due to the limited capabilities of ex-post mediators. Conversely, both the Naive and the Constrained ex-ante (k=10) mediators reliably encourage all three agents to commit and establish robust cooperation. For N=10, n=5, the results are similar, but since the game is more complex, even the Constrained ex-ante mediator is not able to ensure full commitment.
§.§ Prisoner's Dilemma with Sacrifice
Prisoner's Dilemma with Sacrifice (PDS) is an asymmetric modification of PD. The payoff matrix is presented in Table <ref> and differs from PD in that the second player has an additional action available, the effect of which is to sacrifice its payoffs for the higher utilitarian social welfare.
Like in PD, when agents play PDS without a mediator, the dominant action for both is to defect. Unlike PD, this does not change when the game is augmented with the Naive mediator. Since the Naive mediator greedily maximizes social welfare, it sacrifices the payoffs of the second agent, which encourages the second agent to defect (Table <ref>). This is an example of the incompatibility of an agent's and the mediator's incentives. To fix this, the IC constraint should be enforced, which will cause the mediator to choose mutual cooperation over sacrificing an agent's payoffs and, in turn, encourage the agents to commit (Table <ref>). This Constrained mediator implements the mediated equilibrium, but due to the asymmetry of the game, its strategy is not socially optimal. Note that this is not the only mediator that satisfies the IC constraint, as mixing mutual cooperation with sacrificing the second agent's payoffs may also be viable while further improving social welfare. The mediated equilibrium that maximizes social welfare is to equally mix (c, c) and (·, s) outcomes since at this point the second agent is indifferent to whether it commits or defects.
We now investigate how our implementations of mediators behave in PDS. The experimental results are presented in Table <ref>.
In accordance with our expectations, agents converge to mutual defection both without a mediator and with a Naive mediator. The Naive mediator learns to sacrifice the second agent's payoffs while defecting with the first agent, which causes the second agent to always defect. The first agent is then indifferent to whether it defects itself or commits to the mediator that defects for it.
The constrained mediator performs much better. As discussed earlier, its optimal strategy is to equally mix (c, c) and (·, s) outcomes. Its learned strategy is close to the optimal but gives a slight edge to the (c, c) outcome to additionally encourage the commitment of the second agent. As a result, both agents almost always commit. The converged dynamics result in social welfare of approximately 4.35, which is close to the maximal achievable social welfare of 4.5.
§ CONCLUSION
In this paper, we challenge the dominant perspective in the MARL literature on the problem of cooperation in mixed environments and argue for convergence to equilibria as its essential property. As a novel solution for conditional cooperation, we apply mediators. Specifically, we adapt mediators to Markov games through the formalism of Markov mediators, describe how to practically implement them, formulate a constrained objective that both improves social welfare and encourages agents to commit, solve this objective using the method of Lagrange multipliers and dual gradient descent, and experimentally verify the effectiveness of our implementation in the matrix and iterative games.
Despite our contributions, we only scratch the surface of the mediators' potential for MARL. First, to get a clear picture of mediators' behaviour and advantages, we experiment with relatively simple games, but it would also be exciting to apply mediators to larger-scale environments. Second, our formulation of mediator implies its centralized execution as a way to ensure that agents cannot misreport their commitment, but as <cit.> point out, it is interesting whether cryptographic technologies could be applied as an alternative. Third, in our implementation, the mediator acts based on the same information as an agent, but providing the mediator with more information could serve as an additional incentive to commit. Fourth, the literature also explores mediators that recommend actions instead of acting on behalf of agents <cit.> and adapting such mediators to MARL presents a separate challenge. On a final note, our Markov mediator is only one example of ideas from economics to MARL, and we are excited for future research that intersects these two fields.
This research was supported in part through computational resources of HPC facilities at HSE University, Russian Federation. Support from the Basic Research Program of the National Research University Higher School of Economics is gratefully acknowledged.
§ ADDITIONAL EXPERIMENTS
§.§ Two-step Asymmetric Prisoner's Dilemma
This modification of PD lasts for two time-steps, the payoff matrix for both of which is provided in Table <ref>. The second state coincides with PD in the main text, but the first state is different in that mutual cooperation is only beneficial for the second agent while still providing maximal social welfare. Like in one-step PD, the only equilibrium is to defect for both agents in the absence of mediator. The ex-post Naive mediator chooses mutual cooperation in both states, which both agents agree to in the second state but only the second agent agrees to in the first state. The ex-ante Naive mediator also chooses mutual cooperation in both states, but the agents can only commit at the first state for the duration of the game. In this case, commitment is beneficial for both agents since the cumulative reward over two time-steps is higher from mutual cooperation than from mutual defection even for the first agent.
The experimental results are presented in Table <ref> and are fully in accordance with our expectations. Agents always defect without mediator; the second agent commits in both states while the first agent only commits in the second state to the ex-post Naive mediator; both agents commit in the first state to the ex-ante Naive mediator. This experiment clearly demonstrates how ex-ante mediator has more potential to maximize social welfare because it only requires to satisfy the constraints (to be compatible with the agents' incentives) on average.
§ TECHNICAL DETAILS AND HYPERPARAMETERS
Prisoner's Dilemma
In PD, the agents' actors and critics receive a constant dummy state, since it is a one-step game with a single state. The mediator's actor additionally receives the coalition and the ID of the agent that the mediator acts for. The mediator's critic only receives the coalition and predicts values for both agents simultaneously. The inference procedure is the same in all environments: first, the agents choose an action; then, if any of them chose to commit, their IDs along with the coalition are passed to the mediator, which takes actions for these agents. After that, the actions are sent to the environment to obtain rewards. The training is performed in the usual manner for Actor-Critic algorithms. The final result is averaged over 50 seeds.
Prisoner's Dilemma with Sacrifice
We bound logλ to [-4, 4] to avoid cases when the constraint is completely ignored or completely dominates the main objective. The rest of the details are similar to those for PD. The final result is averaged over 50 seeds.
Two-step Asymmetric Prisoner's Dilemma
Agents' actors and critics receive the time-step t. Also, the actors receive an additional value s ∈{-1, 0, 1} that indicates the coalition status of an agent: ["cannot join the coalition", "can choose to join the coalition", "in coalition, acts according to the mediator"]. Depending on the value, we modify the logits predicted by the actor according to the following strategies. For s=-1, we mask the value corresponding to the action "commit" by changing it to -∞ (this happens when k>1, t mod k ≠ 0, and the agent did not commit the last time it could). The same applies for s=1, but in this case we mask all actions but "commit" (this happens when k>1, t mod k ≠ 0, and the agent committed the last time it could). In the case of s=0, neither mask is applied, as all actions are available. During training, we mask logits in the same manner according to the collected trajectories to ensure unbiased on-policy learning. Agents are only trained on the experience where s=-1 or s=0 because when s=1, agents effectively act off-policy (as the mediator acts for them). Note that s = 0 always holds if k = 1.
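A sketch of this masking for a single agent's logit vector; the index of the "commit" action is an assumption:

```python
import torch

def mask_logits(logits, status, commit_action=0):
    """Mask a 1-D logit vector according to the coalition status s in {-1, 0, 1}."""
    masked = logits.clone()
    if status == -1:                       # cannot join: forbid the commit action
        masked[commit_action] = float('-inf')
    elif status == 1:                      # already committed: only the commit action remains
        keep = masked[commit_action].clone()
        masked[:] = float('-inf')
        masked[commit_action] = keep
    return masked                          # s = 0: all actions stay available
```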
Since we use the ex-ante mediator, we employ the k-step learning procedure for agents explained in the main text under "Practical Implementations". The rest of the details are similar to those for PD. The final result is averaged over 50 seeds.
Public Good Game
Considering the high number of agents in PGG, N∈{3, 10, 25}, we changed the multi-headed mediator's critic. Instead, the critic takes only the number of agents in the coalition (normalized by N) and outputs the value V corresponding to each agent in the coalition. Likewise, the mediator's actor does not return a unique policy for each agent; instead, it outputs the same policy for all agents in the coalition. This way, we utilize the symmetry of the game to reduce the space of solutions. It is important to note that each agent still has its own actor and critic networks that do not share parameters with other agents. The rest of the details are similar to those for PD. The final result is averaged over 10 seeds.
Iterative Public Good Game
In the Iterative PGG, we provide the private observation o_i, t = (e_i, t, t) to the agents' actors and critics, where e_i, t is the i-th agent's current endowment. The mediator's actor receives a tuple (o_i, t, C_t, i) consisting of the agent's private observation, the coalition, and the agent's ID, and outputs the policy for this agent. The mediator's critic receives a tuple (s_t=(o_i, t)_i ∈ N, C) consisting of the global state and the coalition and returns a vector of values V of all agents. The rest of the details (including masking logits for k > 1) are the same as in the Two-step Asymmetric PD. The final result is averaged over 10 seeds.
Hyperparameters
All hyperparameters are reported in Table <ref>.
|
http://arxiv.org/abs/2306.02589v1
|
20230605043332
|
DAGrid: Directed Accumulator Grid
|
[
"Hang Zhang",
"Renjiu Hu",
"Xiang Chen",
"Rongguang Wang",
"Jinwei Zhang",
"Jiahao Li"
] |
cs.CV
|
[
"cs.CV",
"cs.AI",
"eess.IV",
"eess.SP"
] |
DAGrid: Directed Accumulator Grid
July 31, 2023
===========================================================
Recent research highlights that the Directed Accumulator (DA), through its parametrization of geometric priors into neural networks, has notably improved the performance of medical image recognition, particularly in situations confronted with the challenges of small and imbalanced datasets.
Despite the impressive results of DA in certain applications, its potential in tasks requiring pixel-wise dense predictions remains largely unexplored.
To bridge this gap, we present the Directed Accumulator Grid (DAGrid), an innovative approach allowing geometric-preserving filtering in neural networks, thus broadening the scope of DA's applications to include pixel-level dense prediction tasks.
DAGrid utilizes homogeneous data types in conjunction with designed sampling grids to construct geometrically transformed representations, retaining intricate geometric information and promoting long-range information propagation within the neural networks.
Contrary to its symmetric counterpart, grid sampling, which might lose information in the sampling process, DAGrid aggregates all pixels, ensuring a comprehensive representation in the transformed space.
The parallelization of DAGrid on modern GPUs is facilitated using CUDA programming, and also back propagation is enabled for deep neural network training.
Empirical results clearly demonstrate that neural networks incorporating DAGrid outperform leading methods in both supervised skin lesion segmentation and unsupervised cardiac image registration tasks.
Specifically, the network incorporating DAGrid has realized a 70.8% reduction in network parameter size and a 96.8% decrease in FLOPs, while concurrently improving the Dice score for skin lesion segmentation by 1.0% compared to state-of-the-art transformers.
Furthermore, it has achieved improvements of 4.4% and 8.2% in the average Dice score and Dice score of the left ventricular mass, respectively, indicating an increase in registration accuracy for cardiac images.
These advancements in performance indicate the potential of DAGrid for further exploration and application in the field of medical image analysis.
The source code is available at <https://github.com/tinymilky/DeDA>.
§ INTRODUCTION
Despite the successful application of neural networks in diverse medical image tasks such as physics-based inverse problems <cit.>, deformable medical image registration <cit.>, and lesion segmentation <cit.>, adapting a well-established backbone architecture <cit.> to different tasks is often challenging.
This challenge stems not only from the domain shift <cit.> due to task variations and diverse acquisition protocols, but more significantly from the domain-specific nature and data limitations.
These conditions often lead to a failure of standard networks to extract unique task-specific information, such as the geometric structure of the target object when data is scarce.
Therefore, considering the domain-specific nature of medical images associated with different diseases, the challenge of how to incorporate useful inductive biases (priors) into neural networks for enhanced medical image processing remains unresolved.
Many medical imaging tasks often involve processing a primary target object, such as the left ventricle in cardiac image registration <cit.> or white matter hyperintensities in brain lesion segmentation <cit.>.
These objects typically present strong geometric patterns, necessitating specialized image transformation techniques to capture these patterns.
Directly incorporating geometric priors into the network can help mitigate the limitations of plain neural networks which often struggle to capture such patterns without ample training data.
The spatial transformer <cit.>, also known as Differentiable Grid Sampling (GS), is a learnable network module that facilitates image transformation within the neural network.
Given a source feature map 𝐔∈ℝ^C× H × W, a sampling grid 𝐆∈ℝ^2× H' × W'=(𝐆^x, 𝐆^y) that specifies pixel locations to extract from 𝐔, and a kernel function 𝒦() that defines the image interpolation, the output value at a specific position (i,j) in the target feature map 𝐕∈ℝ^C× H' × W' can be expressed as: 𝐕_ij^c = ∑_n^H∑_m^W𝐔_nm^c𝒦(𝐆_ij^x,n)𝒦(𝐆_ij^y,m).
The process described by the equation can be denoted as a function of tensor mapping, 𝒮(𝐔;𝐆,𝒦): ℝ^C× H × W→ℝ^C × H' × W', where 𝐔,𝐆, and 𝒦 represent the source feature map, sampling grid, and sampling kernel, respectively.
Here, H× W denotes the spatial size of 𝐔, while H'× W' represents the spatial size of 𝐕.
While GS has been successfully employed across a variety of vision applications, such as deformable medical image registration <cit.>, object detection <cit.>, optical flow estimation <cit.>, image classification <cit.>, and image translation <cit.>, it falls short in handling a specific class of image transformations that involve Directed Accumulation (DA) <cit.>.
Contrary to GS, where the transformed representation may risk information loss during the sampling process when the mapping is not one-to-one, DA forms a new representation by aggregating information from all pixels within the feature map.
With a slight adaptation of notations, we can formulate DA as: 𝐕_ij^c = ∑_n^H∑_m^W𝐔_nm^c𝒦(𝐆_nm^x,i)𝒦(𝐆_nm^y,j).
It's worth highlighting the difference between GS and DA: In GS, the sampling grid 𝐆 and the target feature map 𝐕 share the same spatial dimensions and coordinates are iterated over in the loop.
Conversely, in DA, 𝐆 shares the same spatial dimensions as the source feature map 𝐔 and it's the elements of the sampling grid that are iterated over in the loop, as illustrated in Fig. <ref>.
Given these modifications, DA allows for the parameterization of geometric shapes like rims <cit.> into neural networks as learnable modules.
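The contrast between the two mappings can be made concrete with a single-channel sketch using the integer (nearest-neighbour) kernel; the released implementation uses CUDA, bilinear kernels, and multi-channel tensors, so the following is illustrative only:

```python
import torch

def grid_sample_nearest(U, Gx, Gy):
    """GS: V[i, j] = U[Gx[i, j], Gy[i, j]] -- a pure gather.
    U: [H, W]; Gx, Gy: [H', W'] pixel coordinates into U; some cells of U may never be read."""
    n = Gx.round().long().clamp(0, U.shape[0] - 1)
    m = Gy.round().long().clamp(0, U.shape[1] - 1)
    return U[n, m]

def directed_accumulate_nearest(U, Gx, Gy, out_shape):
    """DA: every pixel of U is scattered (added) into V[Gx, Gy] -- a pure scatter-add.
    Gx, Gy share U's spatial size and address cells of the output grid V."""
    V = torch.zeros(out_shape, dtype=U.dtype)
    i = Gx.round().long().clamp(0, out_shape[0] - 1)
    j = Gy.round().long().clamp(0, out_shape[1] - 1)
    V.index_put_((i.flatten(), j.flatten()), U.flatten(), accumulate=True)
    return V
```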
In this paper, we present the Directed Accumulator Grid (DAGrid), which has been specifically designed to extend the applications of DA to pixel-level dense prediction tasks.
DAGrid consists of three main components: grid creation, grid processing, and grid slicing (terminology that aligns with <cit.>).
Through the coordinated functioning of these three components, DAGrid endows neural networks with three valuable characteristics for modeling geometric shapes: it enables explicit long-range information propagation, retains more information from the original image, and facilitates geometric-preserving filtering.
These properties significantly enhance the capability of neural networks in handling medical tasks involving geometric patterns.
We have implemented DAGrid on modern GPUs using CUDA programming and have enabled back propagation for deep neural network training.
Here we highlight the effectiveness of DAGrid across two dense image mapping medical applications: skin lesion segmentation and cardiac image registration.
Our contributions are threefold.
Firstly, we present DAGrid, an extension of DA that incorporates geometric filtering within neural networks, thereby expanding its utility to dense pixel-wise mapping tasks.
Secondly, we instantiate DAGrid as a circular accumulator (DA-CA) for cardiac image registration, demonstrating its ability to enhance the registration of inner-objects with specific geometric patterns.
This resulted in a marked improvement of 4.4% in the average Dice score and 8.2% in the Dice score of the left ventricular mass.
We also employ DAGrid as a polar accumulator (DA-PA) in skin lesion segmentation tasks.
Compared to conventional methods, DA-PA excels in preserving detailed information and outperforms transformer-based networks, achieving a 1.0% improvement in Dice score, along with a substantial reduction of 70.8% in network parameter size and 96.8% in FLOPs.
§ RELATED WORK
Learning to Accumulate
The Hough Transform (HT) <cit.>, along with its subsequent variants or improvements, are widely used methods that leverage the value accumulation process and have been further enhanced in the neural network framework.
Deep Voting <cit.> employs neural networks to generate Hough votes for nucleus localization in microscopy images.
Hough-CNN <cit.> applies Hough voting to improve MRI and ultrasound image segmentation performance.
Network-predicted Hough votes <cit.> have achieved state-of-the-art performance in object detection within 3D point clouds.
Memory U-Net <cit.> utilizes CNNs to generate Hough votes for lesion instance segmentation.
The central idea behind these methods is to use learning models to produce Hough votes, which maps local evidence to an application-specific transformed space.
Learning in Accumulation
Local Convolution filters in the transformed space, also known as accumulator space, aggregates structural features such as lines <cit.> and rims <cit.> in the image space, facilitating the incorporation of priors into networks.
This accumulator space convolution, as opposed to attention-based methods <cit.>, explicitly captures long-range information through direct geometric parameterization.
Examples of this approach include Lin et al. <cit.>, who use line parameterization as a global prior for straight line segmentation, and Zhao et al. <cit.>, who integrate the accumulator space into the loss function for enhanced semantic line detection.
Interestingly, semantic correspondence detection has seen improvements in both 3D point clouds <cit.> and 2D images <cit.> through the use of convolutions in the accumulator space.
Zhao et al. <cit.> utilize HT to combine the Manhattan world assumption and latent features for 3D room layout estimation.
Originally developed by Chen et al. <cit.> to accelerate the bilateral filter <cit.>, the bilateral grid has been further employed in neural networks for applications such as scene-dependent image manipulation <cit.> and stereo matching <cit.>.
Learning with Geometric Priors
The requirement of substantial datasets for training deep networks <cit.> poses challenges, especially for certain data-limited clinical applications.
For instance, despite being trained on some of the largest datasets available, large vision model SAM <cit.> still underperform specialized models in many medical imaging tasks, even with fine-tuning <cit.>.
In contrast, incorporating geometric or domain-specific priors can be advantageous.
Techniques such as distance transformation mapping <cit.> and spatial information encoding <cit.> have been successfully used to develop edge-aware loss functions <cit.>, network layers with anatomical coordinates <cit.> as priors, and spatially covariant network weight generation <cit.>.
Polar or log polar features have found wide application in tasks such as modulation classification <cit.>, rotation- and scale-invariant polar transformer networks <cit.>, object detection <cit.>, correspondence matching <cit.>, and cell detection <cit.> and segmentation <cit.>.
Moreover, explicit geometric shapes like straight lines, concentric circles, and rims have facilitated semantic line detection <cit.>, rim lesion identification <cit.>, and lithography hotspot detection <cit.>.
§ METHODS
In this section, we delineate the formulation of DAGrid.
This differentiable module initiates by transforming an input feature map based on the specific sampling grids into a transformed accumulator space, referred to as grid space.
Subsequently, operations such as convolution are performed in this grid space, followed by slicing back to the original feature map.
In the case of multi-channel input, the same transformation process is applied to each channel.
The DAGrid is composed of three components: grid creation, grid processing, and grid slicing.
The synergy of these components facilitates geometric-preserving filtering, thus enhancing medical applications that rely on geometric priors.
§.§ DAGrid Creation
Given a source feature map 𝐔∈ℝ^C× H × W, a target feature map 𝐕∈ℝ^C× H' × W', a set of sampling grids 𝒢 = {𝐆[k] ∈ℝ^2× H × W=(𝐆^x[k], 𝐆^y[k]) | k ∈ℤ^+, 1 ≤ k ≤ N } (N≥ 1 is the number of grids), and a kernel function 𝒦(), the output value of a particular cell (i,j) at the target feature map 𝐕 can be written as follows:
𝐕_ij^c = ∑_k^N∑_n^H∑_m^W𝐔_nm^c𝒦(𝐆_nm^x[k],i)𝒦(𝐆_nm^y[k],j),
where the kernel function 𝒦() can be replaced with any other specified kernels, e.g. integer sampling kernel δ(⌊𝐆_nm^x+0.5⌋-i)·δ(⌊𝐆_nm^y+0.5⌋-j) and bilinear sampling kernel max(0,1-|𝐆_nm^x-i|) ·max(0,1-|𝐆_nm^y-j|).
Here ⌊ x+0.5⌋ rounds x to the nearest integer and δ() is the Kronecker delta function.
The Eq. <ref> can be denoted as a function mapping, 𝒟(𝐔;𝒢,𝒦): ℝ^C× H × W→ℝ^C × H' × W'.
To enable geometric-preserving filtering in the DAGrid, it's important to monitor the number of pixels (or a weight) corresponding to each grid cell.
Thus, during grid creation, we store homogeneous quantities (𝐕_ij^c·𝐖_ij^c, 𝐖_ij^c).
Here, 𝐖 can be derived from 𝐖=𝒟(𝐉;𝒢,𝒦), where 𝐉 is a tensor of ones.
This representation simplifies the computation of weighted averages: (w_1v_1,w_1)+(w_2v_2,w_2)=(w_1v_1+w_2v_2,w_1+w_2).
Normalizing by the homogeneous coordinates (w_1+w_2) yields the anticipated averaging of v_1 and v_2, weighted by w_1 and w_2.
Conceptually, the homogeneous coordinate 𝐖 represents the importance of its associated data 𝐕.
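A tiny sketch of this homogeneous bookkeeping, with pixels assigned to (flattened) grid cells; the names are illustrative and a nearest-neighbour assignment is assumed for brevity:

```python
import torch

def accumulate_homogeneous(values, weights, cell_idx, num_cells, eps=1e-8):
    """Accumulate homogeneous pairs (w*v, w) per grid cell and normalize.

    values, weights: [P] per-pixel data and importance
    cell_idx:        [P] long tensor assigning each pixel to a grid cell
    """
    wv = torch.zeros(num_cells).index_add_(0, cell_idx, weights * values)  # sum of w*v
    w = torch.zeros(num_cells).index_add_(0, cell_idx, weights)            # sum of w
    return wv / (w + eps)   # weighted average per cell; empty cells stay ~0

# (w1*v1, w1) + (w2*v2, w2), normalized, gives the weighted mean of v1 and v2:
v, w, idx = torch.tensor([2.0, 4.0]), torch.tensor([1.0, 3.0]), torch.tensor([0, 0])
print(accumulate_homogeneous(v, w, idx, num_cells=1))   # tensor([3.5000])
```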
§.§ DAGrid Processing and Slicing
Any function f that inputs a tensor and outputs another can process the accumulator grid 𝐕̃=f(𝐕).
There's no requirement for 𝐕̃ and 𝐕 to be of the same size, as long as it suits the grid slicing.
For image processing, f could be a bilateral filter, a Gaussian filter, or a non-maximal suppression operator.
Within a neural network, f could be a learnable convolutional layer or even a complete backbone network such as U-Net <cit.>.
After grid processing, we need to extract the feature map back by slicing.
Slicing is the critical DAGrid operation that yields piece-wise smooth output in terms of the geometric shape.
Given a processed accumulator grid 𝐕̃∈ℝ^C× H' × W' and a sampling grid set 𝒢 = {𝐆[k] ∈ℝ^2× H × W=(𝐆^x[k], 𝐆^y[k]) | k ∈ℤ^+, 1 ≤ k ≤ N } (it is not necessary, but usually the set is the same as the set in DAGrid creation), the slicing can be formulated as follows:
𝐔̃_ij^c = ∑_k^N∑_n^H'∑_m^W'𝐕̃_nm^c𝒦(𝐆_ij^x[k],n)𝒦(𝐆_ij^y[k],m),
where 𝐔̃ is the feature map that has been sliced back, and we use notation 𝒮((𝐕̃;𝒢,𝒦): ℝ^C× H' × W'→ℝ^C × H × W) to signify Eq. <ref>.
Regardless of the grid processing that occurs in the intermediate stages, it is evident that the processes represented by Eq. <ref> and Eq. <ref> are symmetrical to each other.
It's the processing within the grid, between creation and slicing, that makes geometric-preserving operations feasible.
§.§.§ Circular and Polar Accumulator Grid
The distinction between the circular, polar or any other accumulators originates from their respective sampling grids, which are essential in delineating the geometric transformation pertinent to specific applications.
In essence, we can establish a novel accumulator grid by defining a new geometric transformation via a set of custom sampling grids.
Utilizing a bilinear sampling kernel, this set of sampling grids can also be learned or fine-tuned through back propagation.
For simplicity, in the remainder of this paper, we will represent the feature map using spatial dimensions only.
Circular Accumulator Grid
Let 𝐔∈ℝ^H × W denote the input feature map. The magnitude of image gradients can be calculated as 𝐒 = √(𝐔_x⊙𝐔_x + 𝐔_y⊙𝐔_y), where ⊙ represents the Hadamard product, 𝐔_x = ∂𝐔/∂ x, and 𝐔_y = ∂𝐔/∂ y. Image gradient tensors 𝐔_x and 𝐔_y can be efficiently calculated using convolution kernels like the Sobel operator.
Normalized gradients can be obtained as 𝐔̂_x = 𝐔_x/(𝐒+ϵ) and 𝐔̂_y = 𝐔_y/(𝐒+ϵ), where ϵ is a small real number added to avoid division by zero.
Mesh grids of 𝐔 are denoted as 𝐌_x (value range: (0,H-1)) and 𝐌_y (value range: (0,W-1)).
A set of sampling grids can be generated as
𝒢 = {𝐆[k]=(𝐆^x[k], 𝐆^y[k]) | k ∈ℤ^+, 1 ≤ k ≤ N },
where 𝐆[k] ∈ℝ^2× H × W, 𝐆^x[k]=k𝐔̂_x+𝐌_x, 𝐆^y[k]=k𝐔̂_y+𝐌_y, and N=max(H,W).
Let 𝒢^- denote the set of sampling grids with gradients in the opposite direction, where 𝐆^x[k]^-=-k𝐔̂_x+𝐌_x, 𝐆^y[k]^-=-k𝐔̂_y+𝐌_y.
The circular accumulator grid can be formulated as 𝐕_s = 𝒟(𝐒;𝒢,𝒦)-𝒟(𝐒;𝒢^-,𝒦) and 𝐕_u = 𝒟(𝐔;𝒢,𝒦)-𝒟(𝐔;𝒢^-,𝒦), where 𝐕_s and 𝐕_u represent the accumulated feature and magnitude value, respectively.
In this scenario, the bilinear sampling kernel is used to track gradients for the sampling grids.
Polar Accumulator Grid
Let 𝐔∈ℝ^H × W be the input feature map, 𝐌^x ∈ℝ^H× W (value range: (0,H-1)) and 𝐌^y ∈ℝ^H× W (value range: (0,W-1)) be the corresponding mesh grids, and (x_c,y_c) be the coordinate of the image center. The value of the sampling grid in the radial direction 𝐆^x at position (i,j) can be obtained as: 𝐆^x_ij = √((𝐌^x_ij - x_c)^2 + (𝐌^y_ij - y_c)^2) / s_r, where s_r is the sampling rate in the radial direction.
Similarly, the value of the sampling grid in the angular direction 𝐆^y at position (i,j) can be obtained as: 𝐆^y_ij = (arctan2(𝐌^y_ij - y_c, 𝐌^x_ij - x_c) + π) / s_θ, where s_θ is the sampling rate in the angular direction, and the addition of π shifts all values into the range (0,2π).
The process of polar accumulation for each channel is the same, requiring just one sampling grid.
Given 𝒢 = {𝐆=(𝐆^x,𝐆^y)}, we can generate the polar accumulator grid through Eq. <ref> as 𝐏=𝒟(𝐔;𝒢,𝒦) ∈ℝ^H_r× W_ψ, where H_r and W_ψ are determined by the sampling rates s_r and s_θ.
After processing 𝐏 with a neural network f, we can slice the processed grid 𝐏̃=f(𝐏) back into the image space as 𝐔̃=𝒮(𝐏̃;𝒢,𝒦).
It's important to note that before processing 𝐏 with f, we use homogeneous coordinates for normalization.
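A sketch of the polar sampling grid is given below; the choice of image center and the naming are our assumptions rather than the released code.

import math
import torch

def polar_sampling_grid(H, W, s_r, s_theta, dtype=torch.float32):
    """Radial (Gx) and angular (Gy) coordinates of every pixel of a (H, W) image."""
    Mx, My = torch.meshgrid(torch.arange(H, dtype=dtype),
                            torch.arange(W, dtype=dtype), indexing="ij")
    xc, yc = (H - 1) / 2.0, (W - 1) / 2.0                      # assumed image centre
    Gx = torch.sqrt((Mx - xc) ** 2 + (My - yc) ** 2) / s_r      # radial bin
    Gy = (torch.atan2(My - yc, Mx - xc) + math.pi) / s_theta    # angular bin
    return Gx, Gy

Accumulating a feature map at these coordinates, and dividing by the accumulated homogeneous coordinate (the per-cell count), yields the polar grid 𝐏 that is processed by f and then sliced back.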
§.§ DAGrid Functionality
Before delving into the experiments, we intuitively demonstrate three valuable characteristics of DAGrid for modeling geometric shapes in neural networks: explicit long-range information propagation, retaining more information from the original image, and geometric-preserving filtering.
Explicit Long-Range Information Propagation
As illustrated in the top right panel of Fig. <ref>, during the forward pass, DAGrid is capable of transferring values from the input feature map to specific cells in the accumulator grid using the sampling grids.
Subsequently, during the backward pass, gradient values initially found in the specified accumulator cell flow along the same route to the input feature map.
This process allows DAGrid to achieve explicit long-range information propagation.
As illustrated in Fig. <ref>, with the increase in radius from 3 to 15, the magnitude information progressively propagates towards the center of the left ventricle.
Retaining more Information from the Original Image
In traditional grid sampling, each cell in the output feature map derives its value from the corresponding cell in the input feature map.
This might lead to potential information loss, particularly when the mapping between input and output is not one-to-one.
Conversely, in the directed accumulation process, all pixel values from the input are used to construct the accumulator grid.
While the values undergo smoothing during the normalization of homogeneous coordinates, this method ensures that all values from the input contribute to the accumulation process, providing the potential for information recovery.
By harnessing the power of neural networks, we can augment the slicing process with implicit parametrization, substituting the conventional bilinear sampling with a learnable linear combination module.
This adjustment facilitates optimal utilization of the preserved information, leading to an improvement in the overall performance of the model.
Let 𝐏̃ be the processed polar grid, p=⌊𝐆_ij^x⌋, and q=⌊𝐆_ij^y⌋; we can then parametrize the slicing process of the polar grid with the incorporation of a learnable linear combination module as follows:
𝐔̃_ij = ∑_n=p^p+1∑_m=q^q+1𝐏̃_nm𝐋_ij[n-p][m-q],
where 𝐋∈ℝ^H× W × 2 ×2 is a parameter tensor that can be learned during the network training.
With Eq. <ref>, the slicing process transcends the constraints of bilinear sampling, allowing the network to learn how to sample the most valuable information from each of the four cells originally involved in the bilinear sampling process.
Indeed, during the accumulation phase, each cell in the accumulator grid becomes a weighted sum of certain cells in the input.
Then, during the slicing phase, the learnable linear combination module is employed to recover as much valuable information as possible.
This method assists in preserving the finer details of the original image.
Please see the right panel in Fig. <ref> for a visual illustration.
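A possible PyTorch implementation of the parametrized slicing is sketched below; the initialization of the weight tensor 𝐋 to uniform 1/4 weights is our assumption.

import torch
import torch.nn as nn

class ParametricSlice(nn.Module):
    """Slice a processed polar grid back to image space with a learnable
    2x2 linear combination per output pixel (replacing fixed bilinear weights)."""

    def __init__(self, H, W):
        super().__init__()
        self.L = nn.Parameter(torch.full((H, W, 2, 2), 0.25))  # assumed initialization

    def forward(self, P, Gx, Gy):
        # P: (C, H_r, W_psi) processed polar grid; Gx, Gy: (H, W) polar coordinates
        Hr, Wp = P.shape[-2:]
        p = Gx.floor().long().clamp(0, Hr - 2)
        q = Gy.floor().long().clamp(0, Wp - 2)
        out = 0.0
        for dn in (0, 1):
            for dm in (0, 1):
                out = out + P[:, p + dn, q + dm] * self.L[..., dn, dm]
        return out   # (C, H, W)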
Geometric-Preserving Filtering
In image processing, many useful image components or manipulations are often piece-wise smooth rather than purely band-limited <cit.>.
For instance, the image segmentation produced by a neural network should be smooth within each segment, and the deformation field of image registration should be piece-wise smooth within each region <cit.>.
Therefore, these components or manipulations can be accurately approximated by specially designed low-frequency counterparts.
Additionally, if these low-frequency counterparts possess geometric structures, slicing from them yields piece-wise smooth output that preserves such structure.
Consider the third column in Fig. <ref> as an example: the polar accumulator employs nearest sampling using a grid size of (H_r,H_ψ)=(16,16).
Slicing from the accumulator grid results in piece-wise smooth regions along the angular and radial directions.
Furthermore, applying convolutional filters in the geometric transformed grid space equates to conducting convolutions with respect to the geometric structure in the image space.
This process facilitates the performance of geometric-preserving operations during the processing within the grid, between creation and slicing.
§ EXPERIMENTS
§.§ Skin Lesion Segmentation
In the first experiment, we compare the performance of our DAGrid-based polar accumulator (DA-PA) with other neural network-based methods.
In the case of convolutional neural networks (CNNs), we utilize the Tiramisu network <cit.>, which comprises densely connected blocks, the All-Net <cit.> with a tailored U-Net as the backbone network for lesion segmentation, and the residual U-Net (resU-Net)<cit.>.
For transformer networks, we employ the FAT-Net <cit.> with its feature adaptive block, and a fully transformer network <cit.> designed for simultaneous skin lesion segmentation and classification.
§.§.§ Datasets and Implementation details
To evaluate skin lesion segmentation, we use the public ISIC 2018 dataset <cit.> to compare the performance of DA-PA with other CNN- and transformer-based networks.
The ISIC 2018 dataset contains 2594 images for training, 100 images for validation, and 1000 images for testing.
Images and segmentation masks are resized to (224,224) and then translated to align their center of mass to the geometric center for both training and testing.
Random flipping, random affine transformations, and random motion are applied to augment the data.
In our implementation, we utilize the backbone network from All-Net <cit.> for skin lesion segmentation learning.
To integrate the polar accumulator, several convolution blocks are introduced prior to the image transformation by the polar accumulator.
This transformation is followed by the backbone network.
The output from the backbone network is then sliced back to the image space, followed by a few convolution blocks that output the segmentation logits.
For the polar grid accumulation, we employ bilinear sampling, and for polar grid slicing, we use parametrized sampling.
We set H_r=64 and H_ψ=64.
For additional information regarding the network architecture and training details, please refer to the supplementary materials.
We evaluate the performance of each method using the average Dice score, Precision, Sensitivity, and Jaccard Index.
Higher values of these scores indicate better performance.
In addition, we use DA-PA as the base model to compute ratios of FLOPs and parameter size.
All ratios are computed by comparing the target model to the base model.
Table <ref> shows the ratios computed using an input tensor size of 1× 3 × 224 × 224.
§.§.§ Results
Table <ref> illustrates that DA-PA outperforms the other methods in terms of both the average Dice score and the Jaccard Index.
Notably, DA-PA improves upon All-Net, the best CNN, by 3.8% in Dice score and 5.8% in the Jaccard Index. This improved performance is achieved despite DA-PA using the same backbone network as All-Net. Additionally, DA-PA's configuration, which includes a few added convolution blocks before and after the backbone and grid sizes H_r=64 and H_ψ=64, results in nearly the same parameter size as All-Net, but with a 49% reduction in FLOPs.
This is due to DA-PA's smaller input size to the backbone network, further enhancing its efficiency.
Furthermore, DA-PA outperforms the top-performing transformer network, FAT-Net, by improving the Dice score by 1.0% and the Jaccard Index by 2.3%, while concurrently achieving substantial reductions in FLOPs (96.8%) and network parameter size (70.8%).
The superior performance of DA-PA can be attributed to its effectiveness in capturing the inherent characteristics of skin lesions: they typically exhibit different textures from their surroundings and generally display a shape that radiates from the center towards the periphery.
This allows the low frequency components in the polar grid to adequately approximate the lesion geometry with less consideration of the texture.
Moreover, a smaller polar grid size provides a larger receptive field and broader segments along the angular and radial directions in the image space, resulting in enhanced accuracy and efficiency.
For qualitative results and ablation study, please refer to the supplementary materials.
§.§ Cardiac Image Registration
In the second experiment, we compare the performance of the DAGrid-based circular accumulator (DA-CA) with both traditional and deep learning-based registration methods.
For traditional methods, we use B-splines registration (with a maximum of 1000 iteration steps and 1000 random points sampled per iteration), available in SimpleElastix <cit.>.
We also apply Fast Symmetric Forces Demons <cit.> (with 100 iterations at standard deviations of 1.0) available in SimpleITK, and Symmetric Normalisation (SyN, using 3 resolution levels, with 100, 80, 60 iterations respectively) in ANTS <cit.> as the baseline methods.
In terms of deep learning-based approaches, we compare DA-CA with the VoxelMorph (VM) method <cit.>.
§.§.§ Datasets and Implementation details
In this study, we utilize cine-MR images from the Automatic Cardiac Diagnosis Challenge (ACDC) dataset <cit.>.
The ACDC dataset consists of 100 subjects, each with a complete cardiac cycle of cine MR images, along with corresponding segmentation masks available for end-diastole (ED) and end-systole (ES) frames.
Our work focuses on intra-subject registration, specifically involving the left ventricle (LV), right ventricle (RV), and left ventricular mass (LVM).
We register from the end-diastole (ED) frame to the end-systole (ES) frame or vice versa within the same subject, which gives us a total of 200 registration pairs or samples.
These samples are subsequently split into training, validation, and testing sets, with 100, 50, and 50 samples respectively.
All the cardiac MR images are re-sampled to a spacing of 1.8 × 1.8 × 10.0, and then cropped to dimensions of 128 × 128 × 16 pixels.
In our implementation, we adopt the backbone networks from VM <cit.> to learn the deformation field.
However, our model has two branches: one branch is identical to VM, while the other introduces several convolution blocks before the image is transformed by the circular accumulator, which is then followed by the backbone.
The output features from these two branches are then merged to compute the final deformation field.
In the circular accumulator, we employ bilinear sampling and three radii (N=15, N=10, and N=5).
More details regarding the network architecture can be found in the supplementary materials.
We evaluate the performance of each method using the average Dice score, Dice score for each separate region (i.e., LV, LVM, and RV), and the Hausdorff Distance (HD).
A higher Dice score or a lower HD indicates better registration performance.
Additionally, following <cit.>, we calculate two clinical cardiac indices, the LV end-diastolic volume (LVEDV) and LV myocardial mass (LVMM), to assess the consistency of cardiac anatomical structures after registration.
These indices are computed based on the moving segmentation and warped moving segmentation.
For clinical indices, values closer to the reference (computed based on the mixed and fixed segmentation) are considered better.
§.§.§ Results
Qualitative Results
The qualitative comparison between our method and other approaches is depicted in Fig. <ref>.
It is evident that deep learning-based methods (VM and our DA-CA) outperform traditional methods (B-spline and Demons), in terms of the similarity between warped images and the fixed image.
It is worth noting that there is substantial distortion between the moving and fixed images due to cardiac contraction from ED to ES, which complicates the registration process.
While other methods struggle to accurately capture such distortion, DA-CA, with its ability for long-range information propagation, effectively addresses the changes around the LV.
This results in a warped moving image that is most similar to the fixed image among all the methods tested.
In addition, from the deformation field, a more realistic motion pattern across all three regions of LV, RV, and LVM can be observed from DA-CA.
Quantitative Results
The quantitative results, as presented in Table <ref>, further highlight the superiority of our approach.
Among the traditional methods, B-spline achieves the highest Dice score, outperforming both SyN and Demons.
Nevertheless, all these methods fall short when compared to neural network-based techniques.
Among the latter, our method significantly surpasses VM (p ≪ 0.05 in a paired t-test) in terms of Dice score (both average Dice and Dice of separate regions) and HD.
This results in a 4.4% improvement in average Dice and an 8.2% improvement in Dice of LVM.
Neural networks often struggle to learn explicit distance information <cit.>, whereas our DA-CA explicitly propagates the information from the edge of LVM to the central area of LV, leading to significant improvement in LVM registration.
Regarding clinical indices, the LVMM calculated based on the results predicted by our method shows no significant difference (p=0.44) from the reference (presented in the row of "Before Reg"), which underscores the effectiveness of our method in preserving anatomical structure post-registration.
For qualitative results from ES to ED and ablation study, please refer to the supplementary materials.
§ CONCLUSIONS
In conclusion, this paper presents the Directed Accumulator Grid (DAGrid), an enhancement of the Directed Accumulator (DA), that proves effective for dense pixel-wise mapping tasks.
Applied to two medical applications, skin lesion segmentation and cardiac image registration, DAGrid outperforms leading methods.
Networks using DAGrid show a significant reduction in network parameter size and FLOPs, while improving the Dice score and Jaccard Index for skin lesion segmentation.
In cardiac image registration, we also observed improved registration accuracy.
We believe the potential of DAGrid extends beyond these applications, suggesting its utility in a broader range of dense prediction tasks in medical imaging and beyond.
§ APPENDIX
The appendix section provides a discussion on the limitations of our study and an extensive exploration of our methodologies.
It offers further insights into the training process, network architecture, and an ablation study, particularly focusing on the application of DA-PA and DA-CA in skin lesion segmentation and cardiac image registration.
Moreover, the appendix showcases an extensive range of qualitative results from these tasks, serving to further validate our findings and contribute to the overall understanding of our work's implications.
§.§ Limitations of the Study
There are three primary limitations identified in our study.
Firstly, the application scope of DAGrid in this study is confined to skin lesion segmentation and cardiac image registration.
This covers only a small segment of potential medical imaging applications that require dense predictions.
Furthermore, the circular accumulator and polar accumulator currently only represent circular geometries.
There exists a multitude of geometric structures within the field of medical imaging that are yet to be explored and leveraged.
Secondly, our approaches for both skin lesion segmentation and cardiac image registration rely solely on 2D feature maps.
While this is suitable given that skin lesions are a 2D problem and cardiac image registration, with its thick slice images, is suited to 2D, it leaves room for further exploration in 3D space.
We aim to undertake more investigations to model geometric structures in 3D space in the future.
Lastly, the DAGrid, similar to convolution or grid sampling, is a fundamental element within a neural network.
The seamless integration of this element into any network framework is vital, an aspect that our study did not delve into.
Additionally, we also need to study further on how to connect DAGrid with other transformations such as Radon Transform and Fourier Transform, and devise superior slicing modules.
§.§ Skin Lesion Segmentation
In this section, we delve deeper into our process, beginning with the specifics of the training and network architecture. We then present an ablation study, followed by a discussion of qualitative results.
§.§.§ Implementation Details
For data augmentation, flipping was applied randomly, either vertically or horizontally.
Affine transformations were carried out with a variable scale ranging from 0.95 to 1.05, accompanied by a random rotation degree between -5^∘ and 5^∘.
Furthermore, we employed a motion transformation using a random rotation within the range of -10^∘ to 10^∘ and random translation of up to 10 pixels.
The resulting transformed images following the application of either affine or motion transformations were obtained through linear interpolation.
We performed all analyses using Python 3.7.
Our network models, built with the PyTorch library <cit.>, were trained on a machine equipped with two Nvidia Titan XP GPUs.
The Adam optimizer <cit.> was utilized with an initial learning rate of 0.001, and a multi-step learning rate scheduler set at milestones of 50%, 70%, and 90% of the total epochs, respectively, was employed for network training.
We used a mini-batch size of 24 for training, and the training was stopped after 70 epochs.
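The optimizer and schedule described above can be wired up as follows; the decay factor at each milestone is not stated in the text, so the value below (PyTorch's default of 0.1) is an assumption, and the stand-in model is ours.

import torch
import torch.nn as nn

model = nn.Conv2d(3, 1, 3, padding=1)       # stand-in for the DA-PA network
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
epochs = 70
milestones = [int(epochs * f) for f in (0.5, 0.7, 0.9)]   # 35, 49, 63
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones, gamma=0.1)

for epoch in range(epochs):
    # ... iterate over mini-batches of size 24, compute the segmentation loss,
    #     call loss.backward() and optimizer.step() here ...
    scheduler.step()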
§.§.§ Network Architecture
We adopted All-Net <cit.> as the backbone network for our skin lesion segmentation tasks.
To effectively incorporate the polar accumulator (PA) and polar sampling (PS), we deployed three convolution blocks prior to the backbone network.
Each of these blocks consists of a 3× 3 convolution, a subsequent batch normalization <cit.>, and a ReLU activation function.
Following this, either PA, PS, or a combination of both, is applied to the output feature map derived from these blocks.
The transformed feature map is then fed into the backbone network for further processing.
The feature map obtained from the backbone network undergoes slicing, which could be either bilinear or parametric.
This is succeeded by an additional three 3× 3 convolution blocks, and eventually, a 1×1 convolution operation is performed to generate the logits.
Please refer to Fig. <ref> for a visual illustration.
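The overall wiring can be summarized with the following PyTorch sketch. The channel width and the exact interfaces of the backbone and of the accumulation/slicing modules are our assumptions; they stand in for the All-Net backbone and the PA / parametric slicing operations described above.

import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1),
                         nn.BatchNorm2d(cout), nn.ReLU(inplace=True))

class DAPASegmenter(nn.Module):
    """pre-convs -> polar accumulation -> backbone -> slicing -> post-convs -> logits."""

    def __init__(self, backbone, polar_accumulate, polar_slice, width=32):
        super().__init__()
        self.pre = nn.Sequential(conv_block(3, width), conv_block(width, width),
                                 conv_block(width, width))
        self.accumulate = polar_accumulate   # feature map -> (C, H_r, H_psi) polar grid
        self.backbone = backbone             # e.g. the All-Net backbone
        self.slice = polar_slice             # processed polar grid -> image space
        self.post = nn.Sequential(conv_block(width, width), conv_block(width, width),
                                  conv_block(width, width), nn.Conv2d(width, 1, 1))

    def forward(self, x):
        f = self.pre(x)
        p = self.backbone(self.accumulate(f))   # processing in the polar grid
        u = self.slice(p)                       # slice back to image space
        return self.post(u)                     # segmentation logits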
§.§.§ Qualitative Results
We present qualitative results of DA-PA in comparison with other methods in Fig. <ref>.
Visualized results from All-Net and FAT-Net are included, each representing the best CNN and transformer-based methods, respectively.
Across all three examples, we can observe from the images that the segmentation contour of DA-PA adheres more closely to the ground-truth segmentation contour.
In the top row, both FAT-Net and All-Net under-segment the large lesion, a consequence of an intensity change within the lesion close to its boundary.
In the middle row, both FAT-Net and All-Net over-segment the lesion.
This over-segmentation occurs because the appearance of the lesion and background is strikingly similar in this case.
In the third row, both FAT-Net and All-Net are influenced by the messy hair scattered across and around the lesion, resulting in a non-smooth segmentation boundary.
§.§.§ Ablation Study
We demonstrate the efficacy of each component within DA-PA via an ablation study, which is detailed in Table <ref>.
Through a comparison of the results obtained from models # 0, # 1, and # 2, it is evident that integrating the polar transformation into the neural network improves the skin lesion segmentation, and PA with bilinear sampling kernel outperforms PS.
Intriguingly, further performance enhancement is achieved by parametrizing the slicing process of DA-PA with a learnable linear combination module, as evinced by a comparison between models # 2 and # 3.
Given that both PS and PA transform feature maps into the polar representation in distinct ways, it seems logical to fuse them.
Nevertheless, when PS and PA are fused at a high sampling rate with (H_r,H_Ψ) = (224,224), as in model # 4, there is a degradation in performance compared to the counterpart without fusion (model # 3).
Interestingly, when fusion is performed at a relatively low sampling rate with (H_r,H_Ψ) = (64,64), as in model # 6, the performance is on par with model # 7.
This contradiction suggests that when applied at a high sampling rate, PS introduces redundant information at the center of the image and overlooks details distant from the center.
Consequently, concatenating PS and PA through the channel dimension disrupts the overall feature map, leading to a decrease in performance.
However, when applied at a low sampling rate, the redundancy is minimized and fewer details are lost because there is not as much information to begin with, resulting in a balanced performance.
Given that computing an additional PS requires virtually no additional computational effort, we opted to use the fusion setting for the rest of the models with varying sampling rates, namely (H_r,H_Ψ) = (128,128), and (H_r,H_Ψ) = (32,32).
As seen from models # 4, # 8, # 6, and # 5, the performance initially increases, peaking when (H_r,H_Ψ) = (64,64), and subsequently starts to decline in model # 5 with (H_r,H_Ψ) = (32,32).
The network processing polar representations exhibits a desirable property of being equivariant to both rotation and scale.
Consequently, as the resolution of the polar representation decreases, the receptive field centered on the lesion enlarges, capturing a more comprehensive contextual understanding, which proves advantageous for skin lesion segmentation.
Moreover, a smaller polar representation leads to larger smooth segments of the image sliced back from the polar representation, which aids in reducing noise surrounding the lesion.
However, when the resolution becomes too small, the information loss cannot be adequately compensated for, even with parametric sampling, resulting in a degradation in performance.
§.§ Cardiac Image Registration
In this section, we elaborate further on the specifics of network training, the architecture of the network, and qualitative results from End-Systole (ES) to End-Diastole (ED).
Additionally, we present an in-depth ablation study to investigate the impact of different components and parameters.
§.§.§ Implementation Details
We augmented our ACDC datasets for cardiac image registration with random flipping in the coronal and sagittal directions.
All analyses were performed using Python 3.7, and our network models, along with the comparators, were constructed using the PyTorch library <cit.>.
We trained these models on a machine equipped with two Nvidia Titan XP GPUs.
To evaluate the similarity between the registered moving image and the fixed image, we used the Normalized Cross-Correlation (NCC) loss with a window size of (15,15,3), chosen to reflect the voxel spacing of 1.8× 1.8 × 10.0.
To promote smoothness, we employed the L1 norm of the gradient of the registration field.
These two metrics were applied in a ratio of 1:0.01.
For network training, we utilized the Adam optimizer <cit.> with an initial learning rate of 1e-3.
Furthermore, we employed a polynomial learning rate scheduler with a decay rate of 0.9.
The training was conducted with a mini-batch size of 4 and was terminated after 400 epochs.
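The loss described above can be sketched as follows; the local means are computed with average pooling (which slightly biases windows at the volume border) and the epsilon term is our addition for numerical stability.

import torch
import torch.nn.functional as F

def local_ncc(I, J, win=(15, 15, 3), eps=1e-5):
    """Negative local normalized cross-correlation; I, J are (B, 1, H, W, D) volumes."""
    pad = tuple(w // 2 for w in win)
    def mean(x):
        return F.avg_pool3d(x, win, stride=1, padding=pad)
    mu_i, mu_j = mean(I), mean(J)
    cross = mean(I * J) - mu_i * mu_j
    var_i = mean(I * I) - mu_i * mu_i
    var_j = mean(J * J) - mu_j * mu_j
    return -(cross * cross / (var_i * var_j + eps)).mean()

def grad_l1(flow):
    """L1 norm of the spatial gradient of a (B, 3, H, W, D) deformation field."""
    return sum(torch.diff(flow, dim=d).abs().mean() for d in (2, 3, 4))

def registration_loss(warped, fixed, flow):
    # similarity and smoothness combined in the stated 1 : 0.01 ratio
    return local_ncc(warped, fixed) + 0.01 * grad_l1(flow)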
§.§.§ Network Architecture
We employed VoxelMorph (VM) <cit.> as the backbone network for our cardiac image registration task.
To integrate the circular accumulator (CA) effectively, we utilized a dual-branch network.
One branch mirrors the VM structure, while the other branch is dedicated to CA.
Importantly, these branches operate independently with their own encoder-decoder networks.
For the CA branch, we deployed five convolution blocks prior to CA application.
Each of these blocks consists of a 3× 3 × 3 convolution, followed by batch normalization <cit.>, and a ReLU activation function.
Applying CA to the output feature map of these convolutions, we obtain 𝐕_s and 𝐕_u with different radius ranges denoted by N.
To embed features more effectively, these vectors undergo a separate 1× 1 × 1 convolution block.
Next, they are concatenated along the channel dimension and fed into the respective backbone network.
The feature maps generated by the two backbone networks are further concatenated and passed through a 3× 3 × 3 convolution block to yield the deformation field.
For a visual illustration of this process, please refer to Fig. <ref>.
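A compact sketch of this dual-branch design is given below. The channel bookkeeping (for instance, that the accumulated features 𝐕_s and 𝐕_u stack the three radii along the channel dimension and that both backbones emit "width" channels) is our assumption, and the two backbone arguments stand in for the VoxelMorph-style encoder/decoder networks.

import torch
import torch.nn as nn

def conv3d_block(cin, cout, k=3):
    return nn.Sequential(nn.Conv3d(cin, cout, k, padding=k // 2),
                         nn.BatchNorm3d(cout), nn.ReLU(inplace=True))

class DACARegistration(nn.Module):
    """Dual-branch head: a raw-image branch and a circular-accumulator branch,
    fused into a single 3-channel deformation field."""

    def __init__(self, raw_backbone, ca_backbone, circular_accumulate, width=16):
        super().__init__()
        self.raw_backbone = raw_backbone          # identical to the VoxelMorph branch
        self.pre = nn.Sequential(*[conv3d_block(2 if i == 0 else width, width)
                                   for i in range(5)])
        self.accumulate = circular_accumulate     # returns (V_s, V_u) for radii 15/10/5
        self.embed_s = conv3d_block(3 * width, width, k=1)
        self.embed_u = conv3d_block(3 * width, width, k=1)
        self.ca_backbone = ca_backbone
        self.head = nn.Conv3d(2 * width, 3, 3, padding=1)

    def forward(self, moving, fixed):
        pair = torch.cat([moving, fixed], dim=1)           # (B, 2, H, W, D)
        f_raw = self.raw_backbone(pair)
        v_s, v_u = self.accumulate(self.pre(pair))         # circular accumulation
        f_ca = self.ca_backbone(torch.cat([self.embed_s(v_s),
                                           self.embed_u(v_u)], dim=1))
        return self.head(torch.cat([f_raw, f_ca], dim=1))  # deformation field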
§.§.§ Qualitative Results
We present additional qualitative results of DA-CA in comparison with other methods in Fig. <ref>, where the moving image corresponds to the end-systolic phase (ES), while the fixed image corresponds to the end-diastolic phase (ED).
When registering from ED to ES, the left ventricular mass undergoes significant distortion, while when registering from ES to ED, the substantial distortion occurs in the left ventricle.
This makes registration from ED to ES more challenging than from ES to ED.
In ED to ES registration, the network must generate precise deformation fields for the left ventricular mass that points from the interior to the edges.
However, for ES to ED, the network simply needs to generate a deformation field that points from the exterior towards the center of the left ventricle and its surroundings.
Even with some small deviations in this context, the overall impact is negligible.
However, for ED to ES, due to the small proportion of left ventricular mass in the ED phase, even the slight deviation can lead to a failure in registration.
As seen in Fig.<ref>, the results from other methods seem more satisfactory than those in ED to ES due to the aforementioned analysis.
Nonetheless, it is still apparent from the figure that our method, which explicitly leverages long-range radial information, yields the most distinct deformation field, especially for the right ventricle.
§.§.§ Ablation Study
We demonstrate the effectiveness of each component within DA-CA via an ablation study, detailed in Table <ref>.
By comparing the results obtained from models # 0, # 1, # 2, and # 3, it becomes clear that integrating the circular accumulator with different radius ranges into the neural network improves cardiac image registration performance.
In addition, it shows that a small radius preserves more useful information than a large one.
In comparing models # 3 and # 4, we find that bilinear sampling with backpropagated gradients for sampling grids performs better than nearest sampling without gradients for the grids.
When contrasting models # 3 and # 5, we find that a network with a single branch using CA performs very well, but given the small network size of VM, adding another branch for processing raw images is our preferred choice.
Comparing models # 3 and # 6, we discern that symmetric information is critically important for cardiac image registration.
This is because at times, if the gradient of the feature map points outward from the ventricle, the long-range information is entirely lost.
Therefore, by including symmetric information, we can capture this long-range information regardless of its direction.
Lastly, we compared our method with the co-attention based registration network <cit.>, which uses co-attention to realize implicit long-range information propagation.
By comparing models # 3 and # 7, we find that both methods outperform the original VM network, which lacks long-range information.
However, our DA-CA, with explicit long-range information propagation along the geometric structure, outperforms the co-attention network.
§.§.§ Visualization of the CA Transformed Representation
Fig. <ref> demonstrates the CA-transformed representation with various parameters, specifically focusing on the radius range N and the inclusion of symmetric information.
This exploration is rooted in our concern that without considering symmetry, the gradient direction may inadvertently point outward from the ventricle, leading to a potential loss of information.
To investigate this, we manually created two cases, illustrated in the first two rows of Fig. <ref>.
Here, the sampling grids, represented by image gradients, have opposite directions.
Let 𝒢_1 = {𝐆[k]=(𝐆^x[k], 𝐆^y[k]) | k ∈ℤ^+, 1 ≤ k ≤ N} be the sampling grids of the first row.
The sampling grids for the second row then mirror those of the first row, following 𝒢_2 = {𝐆[k]=(-𝐆^x[k], -𝐆^y[k]) | k ∈ℤ^+, 1 ≤ k ≤ N}.
As observed in Fig. <ref>, when the information points to the correct direction, it converges towards the ventricle.
In contrast, when pointing in the incorrect direction, the information tends to disperse outwards.
By employing a symmetric formulation, as illustrated in the third row of Fig. <ref>, we can avoid such errors.
This is particularly crucial because during convolution, the neural network is unaware of the ultimate direction of the gradients.
|
http://arxiv.org/abs/2306.05261v1
|
20230608150204
|
Representing and Learning Functions Invariant Under Crystallographic Groups
|
[
"Ryan P. Adams",
"Peter Orbanz"
] |
stat.ML
|
[
"stat.ML",
"cond-mat.mtrl-sci",
"cs.LG"
] |
Crystallographic groups describe the symmetries of
crystals and other repetitive structures encountered in nature and the sciences.
These groups include the wallpaper and space groups.
We derive linear and nonlinear representations of functions that are (1) smooth and
(2) invariant under such a group.
The linear representation generalizes the Fourier basis to crystallographically invariant
basis functions. We show that such a basis exists for each crystallographic group,
that it is orthonormal in the relevant L_2 space, and recover the standard Fourier
basis as a special case for pure shift groups. The nonlinear representation
embeds the orbit space of the group into a finite-dimensional Euclidean space.
We show that such an embedding exists for every crystallographic group, and that it factors functions
through a generalization of a manifold called an orbifold.
We describe algorithms that, given a standardized description of the group,
compute the Fourier basis and an embedding map. As examples,
we construct crystallographically invariant neural networks, kernel machines,
and Gaussian processes.
Representing and Learning
Functions Invariant Under Crystallographic Groups
Ryan P Adams and Peter Orbanz
July 31, 2023
=============================================================================
§ INTRODUCTION
Among the many forms of symmetry observed in nature, those that arise from repetitive spatial patterns
are particularly important. These are described by sets of transformations
of Euclidean space called crystallographic groups <cit.>.
For example, consider a problem in materials science, where atoms are arranged in a crystal lattice.
The symmetries of the lattice are then characterized by a crystallographic group .
Symmetry means that, if we apply one of the transformations in
to move the lattice—say to rotate or shift it—the transformed lattice is indistinguishable from the
untransformed one.
In such a lattice, the Coulomb potential acting on any single electron due to a collection of fixed nuclei
does not change under any of the transformations in <cit.>.
If we think of the potential field as a function
on ℝ^3, this is an example of a -invariant function, i.e., a function whose values do
not change if its arguments are transformed by elements of the group.
When solving the resulting Schrödinger equation for single particle states, members of the group commute with the Hamiltonian, and quantum observables are again -invariant <cit.>.
A different example are ornamental tilings on the walls of the Alhambra, which, when regarded
as functions on ℝ^2, are invariant under two-dimensional crystallographic groups
<cit.>.
The purpose of this work is to construct smooth invariant functions for any given crystallographic
group in any dimension.
For finite groups, invariant functions can be constructed easily by summing over all group elements;
for compact infinite groups, the sum can be replaced by an integral. This and related ideas have
received considerable attention in machine learning
<cit.>.
Such summations are not possible for crystallographic groups, which are neither finite nor compact,
but their specific algebraic and geometric properties
allow us to approach the problem in a different manner.
We postpone a detailed literature review to <ref>,
and use the remainder of this section to give a non-technical sketch of our results.
§.§ A non-technical overview
This section sketches our results in a purely heuristic way; proper
definitions follow in <ref>.
Crystallographic symmetry.
Crystallographic groups are groups that tile a Euclidean space ℝ^n with a convex shape.
Suppose we place a convex polytope Π in the space ℝ^n, say a square or a rectangle in the plane.
Now make a copy of Π, and use a transformation ϕ:ℝ^2→ℝ^2 to
move this copy to another location.
We require that ϕ is an isometry, which means it may shift, rotate or flip Π, but does not
change its shape or size.
Here are some examples, where the original shape Π is marked in red:
[Figure: three example tilings of the plane by copies of Π, with the original shape marked in red, for the three groups discussed below.]
The descriptors , , and follow the naming standard for groups
developed by crystallographers <cit.>, and the symbol “F” is inscribed
to clarify which transformations are used.
The transformations in these examples are horizontal and vertical shifts (in ), rotations around the
corners of the rectangle in (), and reflections about its edges ().
Suppose we repeat one of these processes indefinitely so that the copies of Π cover the entire plane
and overlap only on their boundaries. That requires a countably infinite number of
transformations, one per copy. Collect these into a set .
If this set forms a group, this group is called crystallographic.
Such groups describe all possible symmetries of crystals, and have been thoroughly studied in
crystallography. For each dimension n, there is—up to a natural notion of isomorphy that we explain in <ref>—only
a finite number of crystallographic groups: Two on ℝ, 17 on ℝ^2,
230 on ℝ^3, and so forth.
Those on ℝ^2 are also known as wallpaper groups, and those on ℝ^3 as space groups.
The objects of interest.
A function f is invariant under 𝔾 if it satisfies
f(ϕ x) = f(x) for all ϕ∈𝔾 and all x∈ℝ^n .
A simple way to construct such a function is to start with a tiling as above, define a function
on Π, and then replicate it on every copy of Π. Here are two examples on ℝ^2,
corresponding to (ii) and (iii) above, and an example on ℝ^3:
[Figure: two such replicated functions on ℝ^2, corresponding to (ii) and (iii) above, and one on ℝ^3.]
However, as the examples illustrate,
functions obtained this way are typically not continuous.
Our goal is to construct smooth invariant functions, such as these:
[Figure: smooth invariant functions for the same three groups.]
We identify two representations of such functions, one linear and one nonlinear.
Working with either representation algorithmically requires a data structure representing the invariance
constraint. We construct such a structure, which we call an orbit graph,
in <ref>. This graph is constructed from a description of the group
(which can be encoded as a finite set of matrices) and of Π (a finite set of vectors).
Linear representations: Invariant Fourier transforms.
We are primarily interested in two and three dimensions, but a one-dimensional example
is a good place to start: In one dimension, a convex polytope is always an interval, say Π=[0,1]. If we choose
as the group ℤ of all shifts of integer length, it tiles the line ℝ with Π.
In this case, an invariant function is simply a periodic function with period 1. Smooth periodic
functions can be represented as a Fourier series,
f(x) = ∑_i=0^∞( a_icos(2π i x)+b_isin(2π i x) )
for sequences of scalar coefficients a_i and b_i.
Note each sine and cosine on the right is -invariant and infinitely often differentiable.
Now suppose we abstract from the specific form of these sines and cosines, and only
regard them as -invariant functions that are very smooth.
The series representation above then has the general form
f(x) = ∑_i=0^∞c_i e_i(x) ,
where the _i are smooth, -invariant functions that depend only on and Π,
and the c_i are scalar coefficients that depend on f. (In the Fourier series, _i is
a cosine for odd and a sine for even indices.) In <ref>, we obtain generalizations of
this representation to crystallographically invariant functions. To do so,
we observe that the Fourier basis can be derived as the set of eigenfunctions of the Laplace operator:
The sine and cosine functions above are precisely those functions e:ℝ→ℝ that solve
-Δ e = λ e
subject to: e is periodic with period 1
for some λ≥ 0. (The negative sign is chosen to make the eigenvalues λ non-negative.)
The periodicity constraint is equivalent to saying that e is invariant
under the shift group 𝔾=ℤ. The corresponding problem for a general crystallographic group 𝔾 on ℝ^n is hence
-Δ e = λ e
subject to: e = e∘ϕ for all ϕ∈𝔾 .
<ref> shows that this problem has solutions for any dimension n, convex polytope Π⊂ℝ^n, and crystallographic
group 𝔾 that tiles ℝ^n with Π. As in the Fourier case, the solution functions e_1,e_2,… are very smooth.
If we choose Π⊂ℝ^2 as the square [0,1]^2 and 𝔾 as the group ℤ^2 of discrete horizontal and vertical shifts—that is, the two-dimensional analogue of the example above—we recover the
two-dimensional Fourier transform. The function e_0 is constant; the functions e_1,…,e_5 are shown in
<ref>.
If the group also contains other transformations, the basis looks less familiar.
These are the basis functions e_1,…,e_5 for a group () containing shifts and rotations of order three:
[Figure: the basis functions e_1,…,e_5 for this group.]
The same idea applies in any finite dimension n. For n=3, the e_i can be visualized
as contour plots. For instance, the first five non-constant basis elements for a specific three-dimensional
group, designated by crystallographers, look like this:
[Figure: contour plots of the first five non-constant basis elements for this group.]
Our results show that any continuous invariant function can be represented
by a series expansion in the functions e_i. As for the Fourier transform, the functions form an orthonormal
basis of the relevant L_2 space. The functions e_i can hence be seen as a generalization of the
Fourier transform from pure shift groups to crystallographic groups. All of this is made precise in <ref>.
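For intuition, the one-dimensional shift-invariant case can be checked numerically: discretizing -Δ on [0,1) with periodic boundary conditions and diagonalizing it recovers the classical Fourier basis, with eigenvalues close to (2πk)^2. The discretization below is a sketch of ours, not the computational method used in the paper.

import numpy as np

m = 200                                    # grid points on [0, 1)
h = 1.0 / m
I = np.eye(m)
# second-difference Laplacian with periodic (i.e., shift-invariant) wrap-around
L = (2 * I - np.roll(I, 1, axis=1) - np.roll(I, -1, axis=1)) / h**2
eigvals, eigvecs = np.linalg.eigh(L)       # eigenvalues in ascending order
print(np.round(eigvals[:5] / (2 * np.pi) ** 2))   # approximately [0, 1, 1, 4, 4]
# eigvecs[:, 0] is the constant function; the following columns are discrete
# sines and cosines, i.e., the e_i of the pure shift group on the line.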
Nonlinear representations: Factoring through an orbifold.
The second representation, in <ref>, generalizes an idea of David MacKay <cit.>, who constructs periodic
functions on the line as follows: Start with a continuous function h:ℝ^2→ℝ.
Choose a circle of circumference 1 in ℝ^2, and restrict h to the circle. The restriction is still
continuous. Now “cut and unfold the circle with h on it” to obtain a function on the unit interval.
Since this function takes the same value at both interval boundaries, replicating it by shifts of integer length
defines a function on ℝ that is periodic and continuous:
[Figure: function h on ℝ^2; restriction of h to the circle; unfolding the circle and replicating.]
More formally, MacKay's approach constructs a function
ρ:ℝ→circle⊂ℝ^2 such that
f is continuous and periodic on ℝ ⇔
f = h∘ρ for some continuous h:ℝ^2→ℝ .
We show how to generalize this construction to any finite dimension n, any crystallographic
group on ℝ^n, and any convex polytope with which tiles the space:
For each and Π, there is a continuous, surjective map
ρ:ℝ^n→Ω for some finite N≥ n and a compact set Ω⊂ℝ^N
such that
f is continuous and invariant ⇔
f = h∘ρ for some continuous h:ℝ^N→ℝ .
This is <ref>.
<ref> shows how to compute a representation of ρ using multidimensional scaling.
The set Ω can be thought of as an n-dimensional surface in a higher-dimensional space ℝ^N.
If contains only shifts, this surface is completely smooth, and hence a manifold. That is the
case in MacKay's construction, where Ω is the circle, and
the group on ℝ^2, for which Ω is the torus shown on the left:
[Figure: left, the torus obtained for a pure shift group; right, an orbifold with non-smooth points for a group containing rotations.]
For most crystallographic groups, Ω is not a manifold, but
rather a more general object called an orbifold. The precise definition (see <ref>) is somewhat technical, but
loosely speaking, an orbifold is a surface that resembles a manifold almost everywhere, except at a small number of
points at which it is not smooth.
That is illustrated by the orbifold on the right, which represents a group containing rotations, and has several “sharp corners”.
Applications I: Neural networks.
We can now define 𝔾-invariant models by factoring through ρ.
To define an invariant
neural network, for example, start with a continuous neural network h_θ:ℝ^N→ Y with weight vector θ and some
output space Y. Then h_θ∘ρ is a continuous and invariant neural network ℝ^n→ Y.
Here are examples for three groups (, , and ) on ℝ^2, with three hidden layers and randomly generated weights:
[Figure: three such invariant neural networks, one per group.]
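As a minimal illustration for the pure shift group ℤ^2, where the orbit space is a torus, the following sketch builds ρ as the torus embedding of ℝ^2 into ℝ^4 and composes it with a small fully connected network; the architecture, sizes, and tolerances are arbitrary choices of ours.

import math
import torch
import torch.nn as nn

def rho(x):
    """R^2 -> torus in R^4; rho(x + k) = rho(x) for every integer shift k."""
    tpx = 2 * math.pi * x
    return torch.cat([torch.cos(tpx), torch.sin(tpx)], dim=-1)

h = nn.Sequential(nn.Linear(4, 64), nn.Tanh(),
                  nn.Linear(64, 64), nn.Tanh(),
                  nn.Linear(64, 64), nn.Tanh(),
                  nn.Linear(64, 1))

f = lambda x: h(rho(x))                       # continuous and shift-invariant

x = torch.rand(5, 2)
shift = torch.randint(-3, 4, (5, 2)).float()  # arbitrary integer translations
print(torch.allclose(f(x), f(x + shift), atol=1e-4))   # True: invariance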
Applications II: Invariant kernels.
We can similarly define -invariant reproducing kernels on ℝ^n, by starting with a
kernel κ̂ on ℝ^N and defining a function on ℝ^n as
κ(x,y) = κ̂∘(ρ⊗ρ)(x,y) = κ̂(ρ(x),ρ(y)) .
This function is again a kernel. In <ref>, we
show that its reproducing kernel Hilbert space consists of continuous -invariant
functions on ℝ^n.
We also show that, even though ℝ^n is not compact, κ behaves essentially like a kernel on a
compact domain (<ref>). In particular,
it satisfies a Mercer representation and a compact embedding property, both of which usually require
compactness. This behavior is specific to kernels invariant under crystallographic groups, and
does not extend to more general groups of isometries on ℝ^n.
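The same factorization gives an invariant kernel in code. The sketch below pulls a squared-exponential kernel on ℝ^4 back through the torus embedding of the shift group ℤ^2; the lengthscale and test points are arbitrary choices of ours.

import math
import torch

def rho(x):
    tpx = 2 * math.pi * x
    return torch.cat([torch.cos(tpx), torch.sin(tpx)], dim=-1)

def invariant_kernel(x, y, lengthscale=0.5):
    """kappa(x, y) = kappa_hat(rho(x), rho(y)) with an RBF kappa_hat on R^4."""
    d2 = torch.cdist(rho(x), rho(y)) ** 2
    return torch.exp(-0.5 * d2 / lengthscale ** 2)

x = torch.rand(4, 2)
K1 = invariant_kernel(x, x)
K2 = invariant_kernel(x + torch.tensor([2.0, -1.0]), x)   # shift one argument
print(torch.allclose(K1, K2, atol=1e-4))                   # True: invariance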
Applications III: Invariant Gaussian processes.
There are two ways in which a Gaussian process (GP) can be invariant under a group: A GP is a distribution on
functions, and we can either ask for each function it generates to be invariant, or only require that its distribution is
invariant (see <ref> for definitions). The former implies the latter. Both types of processes can be constructed
by factoring through an orbifold:
Suppose we start with a kernel κ̂ (a covariance function) and a real-valued function μ̂ (the mean function),
both defined on ℝ^N. If we then generate a random function F on ℝ^n as
F := H∘ρ where H ∼ GP(μ̂,κ̂) ,
the function F is -invariant with probability 1.
The following are examples of such random functions, rendered as contour plots with non-smooth colormaps.
[Figure: four such random functions, rendered as contour plots.]
If we instead generate F as
F ∼ GP(μ,κ) where μ := μ̂∘ρ and κ := κ̂∘(ρ⊗ρ) ,
the distribution of F is -invariant. See <ref>.
Properties of the Laplace operator.
<ref> studies differentials and Laplacians of crystallographically invariant functions
f:ℝ^n→ℝ.
The results are then used in the proof of the Fourier representation.
Consider a vector field F, i.e., a function F:ℝ^n→ℝ^n. An example
of such a vector field is the gradient F=∇ f. <ref> shows that
the gradient transforms under elements ϕ of 𝔾 as
∇ f(ϕ x) = (linear part of ϕ)·∇ f(x)
or abstractly
F∘ϕ = (linear part of ϕ)∘ F .
<ref> shows that, for any vector field F that transforms in this way,
the total flux through the boundary of the polytope Π vanishes,
∫_∂ΠF(x)^⊤(normal vector of ∂Π at x) dx = 0 .
We can combine this fact with a result from the theory of partial differential equations, the
so-called Green identity, which decomposes the Laplacian on functions on Π as
-Δ f = self-adjoint component on interior of Π - correction term on ∂Π .
<ref> makes the statement precise. Using the fact that the flux vanishes,
we can show that the correction term on ∂Π vanishes, and from that deduce that the
Laplace operator on invariant functions is self-adjoint (<ref>).
That allows us to draw on
results from the spectral theory of self-adjoint operators to solve (<ref>).
Background and reference results.
Since our methods draw on a number of different fields,
the appendix provides additional background on groups of isometries (App. <ref>),
functional analysis (App. <ref>), and orbifolds (App. <ref>),
and spectral theory (App. <ref>).
§ PRELIMINARIES: CRYSTALLOGRAPHIC GROUPS
Throughout, we consider a Euclidean space ℝ^n, and write d_n for Euclidean distance in n dimensions.
Euclidean volume (that is, Lebesgue measure on ℝ^n) is denoted _n.
As we work with both sets and their boundaries, we must carefully distinguish
dimensions: The span of a set A⊂ℝ^n is the smallest affine subspace that contains it.
We define the dimension and relative interior of A as
dim A := dim(span A)
and
A^∘ := largest subset of A that is open in span A .
The boundary of A is the set ∂ A:=A∖ A^∘.
If A has dimension k<n, then _k(A) denotes Euclidean volume in span A.
For example: If A⊂ℝ^3 is a closed line segment, then dim A=1, and
_1(A) is the length of the line segment, whereas _3(A)=_2(A)=0.
Taking the relative interior A^∘ removes
the two endpoints, whereas the interior of A in ℝ^3 is the empty set.
(No such distinction is required for the closure A̅,
since A is closed in span A if and only if it is closed in ℝ^n.)
§.§ Defining crystallographic groups
Consider a group
of isometries of ℝ^n. (See <ref> for a
brief review of definitions.) Every isometry ϕ of ℝ^n is of the form
ϕ x = A_ϕ x+b_ϕ for some orthogonal n× n matrix A_ϕ and some b_ϕ∈ℝ^n .
Let M⊂ℝ^n be a set. We say that
𝔾 tiles the space ℝ^n with M
if the image sets ϕ M completely cover the space so that only their boundaries overlap:
⋃_ϕ∈𝔾ϕ M = ℝ^n
and ϕ M∩ψ M ⊂ ∂(ϕ M)
whenever ϕ≠ψ .
Each set ϕ M is a tile,
and the collection {ϕ M | ϕ∈𝔾} is a tiling of ℝ^n.
By a convex polytope, we mean the convex hull of a finite set of points
<cit.>. Let ⊂ℝ^n be an n-dimensional convex polytope.
The boundary ∂
consists of a finite number of (n-1)-dimensional convex polytopes, called the facets
of . Thus, if tiles ℝ^n with Π, only points on
facets are contained in more than one tile.
A crystallographic group is a group of isometries that
tiles ℝ^n with an n-dimensional convex polytope .
The polytope is then also called a fundamental region (in geometry) or an
asymmetric unit (in materials science) for .
This definition of crystallographic groups
differs from those given in the literature, but we clarify in
<ref> that it is equivalent.
§.§ Basic properties
Some properties of can be read right off the definition:
Since 𝔾 tiles the entire space with a set of finite diameter, we must have |𝔾|=∞.
Since Π is n-dimensional and convex, it contains an open metric ball of positive radius.
Each tile contains a copy of this ball, and these copies do not overlap. It follows that
d(ϕ(x),ψ(x)) > ε for all distinct ϕ,ψ∈ and all x∈Π^∘ .
A group of isometries that satisfies (<ref>)
for some ε>0 is called discrete, in contrast to groups which contain, e.g., continuous rotations.
Discreteness implies is countable, but not all countable groups of isometries are discrete
(the group ℚ^n of rational-valued shifts is a non-example).
In summary, every crystallographic group is
an infinite, discrete (and hence countable) subgroup of the Euclidean group on ℝ^n.
Suppose we choose one of the tilings in <ref>,
and rotate or shift the entire plane with the tiling on it.
Informally speaking, that changes the tiling, but not the tiling mechanism, and it is natural to consider
the two tilings isomorphic. More formally,
two crystallographic groups and ' are isomorphic if there is
an orientation-preserving, invertible, and affine (but not necessarily isometric) map
γ:ℝ^n→ℝ^n such that
𝔾'=γ𝔾, where γ𝔾:={γϕγ^-1 | ϕ∈𝔾}.
[<cit.>]
Up to isomorphy, there are only finitely many crystallographic groups on ℝ^n for each n∈ℕ.
Specifically, there are 17 such groups for n=2, and 230 for n=3.
§ PRELIMINARIES: INVARIANT FUNCTIONS
A function f:ℝ^n→, with values in some set , is
ϕ-invariant if it satisfies
f(ϕ x)=f(x) for all x∈ℝ^n or in short f∘ϕ=f .
It is -invariant if it is ϕ-invariant
for all ϕ∈.
We are specifically interested in -invariant functions that
are continuous, and write
(M):=f:ℝ^n→ℝ | f continuous and :=f∈(ℝ^n) |
f is -invariant .
More generally, a function
f:(ℝ^n)^k→ is -invariant in each argument if
f(ϕ_1x_1,…,ϕ_kx_k) = f(x_1,…,x_k) for all ϕ_1,…ϕ_k∈ and x_1,…,x_k∈ℝ^n .
§.§ Tiling with functions
To construct a -invariant function, we may start with a function h on Π
and “replicate it by tiling”. For that to be possible,
h must in turn be the restriction of a -invariant function to
Π. It must then satisfy h(ϕ x)=h(x) if both ϕ x and x are in Π.
We hence define the relation
x ∼ y
:⟺
x,y∈Π and y=ϕ(x) for some ϕ∈𝔾∖{id} .
We note immediately that x∼ y implies each point is also contained in
an adjacent tile, so both must be on the boundary ∂Π of Π.
The requirement
h(x) = h(y) whenever x∼ y
is therefore a periodic boundary condition.
If it holds, the function
f(x) := h(ϕ^-1 x) for x∈ϕ() and each ϕ∈
is well-defined on ℝ^n, and is -invariant.
Conversely, every -invariant function f can be obtained this way (by choosing h as the
restriction f|_Π).
Informally, (<ref>) says that we stitch together function segments on tiles that are all copies of
h, and these segments overlap on the tile boundaries. The boundary condition ensures
that wherever such overlaps occur, the segments have the same value, so that
(<ref>) produces no ambiguities.
The special case of (<ref>) for pure shift groups—where
A_ϕ is the identity matrix for all ϕ∈—is known as a
Born-von Karman boundary condition (e.g., <cit.>).
§.§ Orbits and quotients
An alternative way to express invariance is as follows: A function is -invariant
if and only if it is constant on each set of the form
𝔾(x) := {ϕ x | ϕ∈𝔾} for each x∈ℝ^n .
The set (x) is called the orbit of x.
We see immediately that each orbit of a crystallographic group is countably infinite, but
locally finite: The definition of discreteness in (<ref>)
implies that every bounded subset of ℝ^n
contains only finitely many points of each orbit.
We also see that each point x∈ℝ^n is in one and only one orbit, which means
the orbits form a partition of ℝ^n. The assignment x↦(x) is
hence a well-defined map
q(x) := 𝔾(x)
with image ℝ^n/𝔾 := q(ℝ^n) = {𝔾(x) | x∈ℝ^n} .
The orbit set ℝ^n/ is also called the quotient set or just the quotient of
, and q is called the quotient map (e.g., <cit.>).
Since the orbits are mutually disjoint, we can informally think of q as collapsing each
orbit into a single point, and ℝ^n/ is the set of such points.
Quotient spaces are abstract but useful tools for expressing invariance properties:
For any function f:ℝ^n→ℝ, we have
f is -invariant ⟺
f=f̂∘ q
for some function f̂:ℝ^n/→ℝ ,
since each point of ℝ^n/ represents an orbit and f is invariant iff it is constant on orbits.
We can also use the quotient to express continuity, by equipping it with a topology that satisfies
f∈_ ⟺
f=ĥ∘ q
for some continuous ĥ:ℝ^n/→ℝ .
There is exactly one such topology, called the quotient topology in the literature. Its definition
can be made more concrete by metrizing it:
[see <cit.>, Theorem 7.7]
If is crystallographic, the function
d_𝔾(ω_1,ω_2) := inf{d(x,y) | x∈ω_1, y∈ω_2} for ω_1,ω_2∈ℝ^n/𝔾
is a valid metric on ℝ^n/, and it metrizes the quotient
topology. A subset U⊂ℝ^n/ is open if and only
if its preimage q^-1U is open in ℝ^n.
Since is discrete, the infimum in d_ is a minimum.
The distance of two orbits (considered as points in ℝ^n/)
is hence the shortest Euclidean distance between points in these orbits (considered as sets
in ℝ^n), see <ref> (right).
If x and y are points in the polytope Π, we have
x∼ y ⟺ d_((x),(y)) = 0 .
Informally speaking, d_ implements the periodic boundary condition (<ref>).
The metric space (ℝ^n/,d_) is also called the quotient space or
orbit space of . A very important property of crystallographic groups is that they
have compact quotient spaces:[<cit.>]
If a discrete group of isometries tiles ℝ^n with a set M,
the quotient space (ℝ^n/,d_) is homeomorphic to the quotient space M/.
If is crystallographic and tiles
with a convex polytope, then (ℝ^n/,d_) is compact.
§.§ Transversals and projections
Since orbit spaces are abstract objects, we can only work with them implicitly.
One way to do so is by representing each orbit by one of its points in ℝ^n.
A subset of ℝ^n that contains exactly one point of each orbit is called a
transversal. In general, transversals can be exceedingly complex sets <cit.>,
but crystallographic
groups always have simple transversals. <ref> in the next section
constructs a transversal explicitly.
In the following, we will always write Π̃ to mean
Π̃ := a transversal contained in Π computed by <ref>.
Given such a transversal, we can define the projector p:ℝ^n→Π as
p(x) := the unique element of (x)∩Π .
If we think of each point in Π̃ as a concrete representative of an element of ℝ^n/,
then p is similarly a concrete representation of the quotient map q, and we can translate
the identities above accordingly:
The projector is by definition -invariant,
since we can write f in (<ref>) as f=h∘ p. That shows
f:ℝ^n→ℝ is -invariant ⟺
f = h∘ p for some h:→ℝ satisfying (<ref>) .
Although p is not continuous as a function ℝ^n→Π, continuity only fails at the boundary, and
p behaves like a continuous function when composed with h:
Let h:Π→ be a continuous function with values in a topological space .
If h satisfies (<ref>), then h∘ p is a continuous -invariant function
ℝ^n→. It follows that
f∈_ ⟺
f = h∘ p for some continuous h:→ℝ satisfying (<ref>) .
Since p exists for any choice of and Π, and since it can be evaluated algorithmically,
we have hence reduced the problem of constructing continuous invariant functions to the problem
of finding functions that satisfy the periodic boundary condition (<ref>).
§ TAKING QUOTIENTS ALGORITHMICALLY: ORBIT GRAPHS
To work with invariant functions computationally, we
must approximate the quotient metric. We do so using a data structure that we call an orbit graph,
in which two points are connected if their orbits are close to each other. More formally, any undirected graph
is a metric space when equipped with path length as distance. The metric space defined by the graph
𝒢 below discretizes the metric space (ℝ^n/,d_).
To define 𝒢, fix constants ε,δ>0.
A finite set Γ is an ε-net in Π if each point lies within distance ε of Γ,
Γ is an ε-net :⇔ for each x∈Π there exists z∈Γ such that d_n(x,z) < ε ,
see e.g., <cit.>.
If Γ is an ε-net in Π, we call the graph
𝒢=𝒢(ε,δ)=(Γ,E)
where E:=(x,y) | d_((x),(y))<δ
an orbit graph for and Π.
§.§ Computing orbit graphs
Algorithmically, an orbit graph can be constructed as follows: Constructing an ε-net is a standard problem in
computational geometry and can be solved efficiently (e.g., <cit.>).
Having done so, the problem we have to solve is:
Given points x,y∈Π, determine d_((x),(y)) .
Since Π is a polytope, its diameter
diam(Π) := maxd_n(x,y) | x,y∈Π < ∞
can also be evaluated computationally. By definition of d_, we have
d_((x),(y))
= min_ϕ∈d_n(x,ϕ y)
≤
d_n(x,y)
≤ diam(Π) .
That shows the minimum is always attained for a point ϕ y on a tile ϕΠ that lies within
distance diam(Π) of x. The set of transformations that specify these tiles is
𝒜_Π = ϕ∈ | d_n(x,ϕ z)≤diam(Π) for some z∈Π .
This set is always finite, since is discrete and the ball of radius diam(Π) is compact.
We can hence evaluate the quotient metric as
d_((x),(y))
= mind_n(x,ϕ y) | ϕ∈𝒜_Π ,
which reduces the construction of E to a finite search problem: for each pair of points in the ε-net, the quotient distance is obtained by minimizing the Euclidean distance over the finitely many transformations in 𝒜_Π. The procedure is summarized in <ref> and illustrated in <ref>.
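A minimal Python sketch of the orbit-graph construction, again assuming the integer-shift group on Π=[0,1]^2 and a regular grid as the ε-net; the 3×3 window of translates plays the role of the finite set 𝒜_Π, and the parameters ε and δ are arbitrary illustrative choices.

```python
import itertools
import numpy as np

SHIFTS = np.array(list(itertools.product((-1, 0, 1), repeat=2)), float)

def quotient_distance(x, y):
    # For the integer-shift group and x, y in the unit square, the 3x3 window
    # of translates suffices, since |x_i - y_i| <= 1 in each coordinate.
    return np.min(np.linalg.norm(x - (y + SHIFTS), axis=1))

def orbit_graph(eps=0.15, delta=0.25):
    # Regular grid as an eps-net of Pi = [0,1]^2 (illustrative choice).
    ticks = np.arange(0.0, 1.0, eps)
    nodes = np.array(list(itertools.product(ticks, ticks)))
    edges = [(i, j)
             for i in range(len(nodes)) for j in range(i + 1, len(nodes))
             if quotient_distance(nodes[i], nodes[j]) < delta]
    return nodes, edges

nodes, edges = orbit_graph()
print(len(nodes), "vertices,", len(edges), "edges")
```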
§.§ Computing a transversal
Recall that the faces of a polytope are its vertices, edges, and so forth; the
facets are the (n-1)-dimensional faces. The polytope itself is also a face, of dimension n.
See <cit.> for a precise definition. Given Π and , we will call two
faces S and S' -equivalent if S'=ϕ S for some ϕ∈.
Thus, if S=Π, its equivalence class is Π.
If S is a facet, it is equivalent to at most one distinct facet, so its
equivalence class has one or two elements. The equivalence classes
of lower-dimensional faces may be larger—if is and Π a square,
for example, all four vertices of Π are -equivalent.
The set Π̃ constructed by <ref> is a transversal.
The relative interiors of the faces of a convex polytope are mutually disjoint and
their union is Π, so each point x∈Π is on exactly one such relative interior.
Let S be the face with x∈ S^∘, and consider any ϕ∈.
Since the tiling is exact, ϕ S is either a face of Π or ϕ S∩Π=∅.
If ϕ x∈Π, the intersection cannot be empty, so ϕ S is a face and hence
-equivalent to S.
It follows that the relative interior of a face of Π intersects the orbit of x if and only
if that face is in the equivalence class of S.
Since we select exactly one element of this class, exactly one point of the orbit of x
is contained in Π̃.
§.§ Computing the projector
Since is crystallographic, it contains shifts in n linearly independent directions, and these shifts hence specify
a coordinate system of ℝ^n. More precisely: There are n elements ϕ_1,…,ϕ_n of
that (1) are pure shifts (satisfy A_ϕ_i=𝕀), (2) are linearly independent, and (3) are the shortest
such elements (in terms of the Euclidean norm of b_ϕ_i). Up to a sign, each of these elements is uniquely determined.
We refer to the vectors ϕ_1,…,ϕ_n as the shift coordinate system of .
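As a rough illustration, the sketch below implements the projector for the special case of a pure translation group, where reduction modulo the shift coordinate system already yields a transversal; for a general crystallographic group one would additionally search over the finitely many residual point-group transformations. The lattice basis B and the test function are assumptions made only for this example.

```python
import numpy as np

def projector(x, B):
    """Map x in R^n to the transversal of the lattice spanned by the columns of B
    (the shift coordinate system). This is exact for a pure translation group;
    a general crystallographic group would also require a search over the
    finitely many residual point-group transformations."""
    coeffs = np.linalg.solve(B, np.asarray(x, float))   # lattice coordinates of x
    return B @ (coeffs - np.floor(coeffs))              # reduce each to [0, 1)

B = np.eye(2)                                           # unit-square lattice (assumed)
print(projector([3.7, -1.2], B))                        # [0.7 0.8]

# Composing any h with the projector yields a G-invariant function f = h o p:
f = lambda x: np.sin(2 * np.pi * projector(x, B)).sum()
print(np.isclose(f([0.3, 0.4]), f([5.3, -2.6])))        # True
```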
§ LINEAR REPRESENTATION: INVARIANT FOURIER TRANSFORMS
In this section, we obtain a basis representation for invariant functions: given a
crystallographic group , we construct a sequence of -invariant functions
_1,_2,… on ℝ^n such that any -invariant continuous function
can be represented as a (possibly infinite) linear combination ∑_i∈ℕc_i_i.
If is generated by n orthogonal shifts, the functions e_i
are an n-dimensional Fourier basis.
<ref> below obtains an analogous basis for each crystallographic group .
§.§ Representation theorem
For any open set M⊆ℝ^n, we define the Laplace operator on twice differentiable
functions h:M→ℝ as
Δ h
:= ∂^2 h/∂ x_1^2+…+∂^2 h/∂ x_n^2 = ∇^⊤(∇ h) .
Now consider specifically functions e:ℝ^n→ℝ.
Fix some λ∈ℝ, and consider the constrained partial differential equation
-Δ e = λ e on ℝ^n
subject to e = e∘ϕ for ϕ∈ .
Clearly, there is always a trivial solution, namely the constant function e=0.
If (<ref>) has a non-trivial solution e, we call this e a -eigenfunction and
λ a -eigenvalue
of the linear operator -Δ. Denote the set of solutions by
𝒱(λ)
:= e:ℝ^n→ℝ | e satisfies (<ref>) .
Since 0 is a solution, and any linear combination of solutions is again a solution,
𝒱(λ) is a vector space,
called the eigenspace of λ. Its dimension
k(λ) := 𝒱(λ)
is the multiplicity of λ.
Let be a crystallographic group that tiles ℝ^n with
a convex polytope Π.
Then the constrained problem (<ref>) has solutions for countably many
distinct values λ_1,λ_2,… of λ, and these values satisfy
0 = λ_1 < λ_2 < λ_3<… and λ_i ∞ .
Every solution function e is infinitely often differentiable.
There is a sequence e_1,e_2,… of solutions whose restrictions
e_1|_Π,e_2|_Π,… to Π form an orthonormal basis of the space Ł_2(Π),
and satisfy
| j∈ℕ | e_j∈𝒱(λ_i)| = k(λ_i)
for each i∈ℕ .
A function f:ℝ^n→ℝ is -invariant and continuous if and only if
f = _i∈ℕc_ie_i
for some sequence c_1,c_2,…∈ℝ ,
where the series converges in the supremum norm.
See <ref>.
The space Ł_2(ℝ^n) contains no non-trivial -invariant functions, since
for every f∈_
‖f‖_Ł_2(ℝ^n) = _ϕ∈‖ f|_ϕΠ‖_Ł_2(ϕΠ) =
0 if f=0 almost everywhere
∞ otherwise .
On the other hand, the restriction f|_Π is in Ł_2(Π), and completely determines f.
That makes Ł_2(Π) the natural Ł_2-space in the context of crystallographic invariance,
and is the reason why the restrictions e_i|_Π are used in the theorem.
Since Ł_2(ϕΠ) is isometric to Ł_2(Π) for all ϕ∈, it does not matter which tile we restrict to.
§.§ Relationship to Fourier series
The standard Fourier bases for periodic functions on ℝ^n
can be obtained as the special cases of <ref>
for shift groups: Fix some edge width c>0, and choose Π and 𝔾 as
Π = [0,c]^n
and ={x↦ x+c(i_1,…,i_n)^ | i_1,…,i_n∈ℤ} .
For these groups, all eigenvalue multiplicities are k(λ_i)=2n for each i∈ℕ.
For n=2, the group is (see <ref>). Its eigenfunctions
are shown in <ref>.
To clarify the relationship in more detail, consider the case
n=1: Since Δ is a second derivative, the functions
e(x)=cos(ν x) and e(x)=sin(ν x) satisfy
Δ e(x) = -ν^2e(x) for each ν≥ 0 ,
and are hence eigenfunctions of -Δ with eigenvalue λ=ν^2.
For this choice of Π and , the invariance constraint
in (<ref>) holds iff e(x)=e(x+c) for every x∈ℝ.
That is true iff
ν(x+c) = ν x+2π(i-1) for some i∈ℕ,
and hence λ_i = ν^2 = (2π(i-1)/c)^2 .
The eigenspaces are therefore the two-dimensional vector spaces
𝒱(λ_i) = span{sin(√(λ_i)x),cos(√(λ_i)x)} with
k(λ_i) = 2
for all i∈ℕ .
Any continuous function f that is -invariant (or, equivalently, c-periodic) can be expanded as
f(x) = _i∈ℕa_icos(√(λ_i)x)+b_isin(√(λ_i)x) .
In the notation of <ref>, the coefficients are
c_2i=a_i and c_2i+1=b_i, and
e_2i(x)=cos(√(λ_2i)x)
and
e_2i+1(x)=sin(√(λ_2i+1)x)
.
Note that the unconstrained equation has solutions for all λ in the uncountable set [0,∞).
The invariance constraint limits possible values to the countable set λ_1,λ_2,….
If f was continuous but not invariant, the expansion (<ref>) would hence require an integral
on the right. Since f is invariant, a series suffices.
Fourier series, in particular in one dimension, are often written using complex-valued functions as
f(x) = _i∈ℕγ_iexp(J√(λ_i) x) where γ_i∈ℂ and J:=√(-1) .
Since Euler's formula exp(Jx)=cos(x)+Jsin(x) shows
γ_iexp(J√(λ_i)x)
=
a_icos(√(λ_i)x)+b_isin(√(λ_i)x)
for (a_i-Jb_i)=γ_i ,
that is equivalent to (<ref>). The complex plane ℂ is not inherent to the Fourier
representation, but rather a convenient way
to parameterize the two-dimensional eigenspace 𝒱(λ_i).
For general crystallographic groups, the complex representation is less useful,
since the multiplicities k(λ_i) may not be even, as can be seen in <ref>.
§.§ Spectral algorithms
The eigenfunctions in <ref> can be
approximated by eigenvectors of a suitable graph Laplacian of the orbit graph as follows.
We first compute an orbit graph 𝒢=(Γ,E) as described in
<ref>. We weight each edge (x,y) of the graph by
w(x,y) =
exp(-d^2_((x), (y))/2ϵ^2) if (x,y)∈ E
0 otherwise .
The normalized Laplacian of the weighted graph is
L = 𝕀 - D^-1W where W_xy=w(x,y) ,
and D is the diagonal matrix containing the sum of each row of W.
See e.g., <cit.> for more on the matrix L.
Our estimates of the eigenvalues and -functions of Δ are the eigenvalues and eigenvectors of L,
λ̂_i := ith eigenvalue of L
and ê_i := ith eigenvector of L .
These approximate the spectrum of Δ in the sense that
λ_i
≈ 2ϵ λ̂_i
and
e_i(x)
≈ 2ϵ (ê_i)_x
for x∈Γ ,
see <cit.>. Once an eigenvector ê_i is computed, values of
_i at points x∉Γ can be estimated using standard interpolation methods.
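The following Python sketch assembles the weighted orbit graph and the normalized Laplacian L=𝕀-D^{-1}W for the integer-shift group on the unit square and computes its spectrum; it diagonalizes the similar symmetric matrix 𝕀-D^{-1/2}WD^{-1/2}, which has the same eigenvalues. All parameter choices (ε, δ, the Gaussian weights) are illustrative assumptions, not prescriptions of the method.

```python
import itertools
import numpy as np

eps = 0.1
ticks = np.arange(0.0, 1.0, eps)
nodes = np.array(list(itertools.product(ticks, ticks)))      # eps-net of [0,1]^2

SHIFTS = np.array(list(itertools.product((-1, 0, 1), repeat=2)), float)
def d_quot(x, y):        # quotient metric of the integer-shift group
    return np.min(np.linalg.norm(x - (y + SHIFTS), axis=1))

# Weighted adjacency: Gaussian weights on edges shorter than delta in d_G.
n, delta = len(nodes), 3 * eps
W = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        d = d_quot(nodes[i], nodes[j])
        if d < delta:
            W[i, j] = W[j, i] = np.exp(-d ** 2 / (2 * eps ** 2))

# Normalized Laplacian L = I - D^{-1} W. We diagonalize the similar symmetric
# matrix I - D^{-1/2} W D^{-1/2}, which has the same eigenvalues; eigenvectors
# of L are recovered by rescaling with D^{-1/2}.
deg = W.sum(axis=1)
S = W / np.sqrt(np.outer(deg, deg))
evals, evecs = np.linalg.eigh(np.eye(n) - S)
print("smallest Laplacian eigenvalues:", np.round(evals[:5], 4))
```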
Alternatively, the basis can be computed using a Galerkin approach, which
is described in <ref>. The functions in Figures <ref>, <ref>
and <ref> are computed using the Galerkin method.
The orbit graph automatically enforces the boundary condition (<ref>), since it measures distance in terms of d_.
The exceptions are group elements that are reflections, since these imply an additional property that the graph does not resolve:
If ϕ is a reflection over a facet S, x a point on S (and hence ϕ x=x), and f a ϕ-invariant smooth function,
we must have ∇ f(x)=-∇ f(ϕ x), and hence ∇ f=0 on S. In the parlance of PDEs, this is a Neumann boundary condition,
and can be enforced in several ways:
1) For each point x_j∈Γ that is on S, add a point x_j' to Γ and
the edge (x_j,x_j') to E. Then constrain each eigenvector e_i in <ref> to satisfy e_ij=e_ij'.
This approach is common in spectral graph theory (e.g., <cit.>).
2) Alternatively, one may symmetrize the orbit graph: For each vertex x_j that is close to S, add its reflection x_j':=ϕ(x_j) to
Γ. Now construct the edge set according to d_ using the augmented vertex set, and again constrain eigenvectors to
satisfy e_ij=e_ij'.
Either constrained eigenvalue problem can be solved using techniques of <cit.>.
§ NONLINEAR REPRESENTATION: FACTORING THROUGH AN ORBIFOLD
We now generalize MacKay's construction, as sketched in the introduction, from shifts to
crystallographic groups. The construction defines a map
ρ|_Π: Π→Ω⊂ℝ^N
which in turn defines ρ:=ρ_Π∘ p:ℝ^n→Ω .
In MacKay's case, Π is an interval and Ω⊂ℝ^2 a circle.
The circle can be obtained from Π by “gluing” the ends of the interval to each other.
To generalize this idea, we proceed as follows: Starting with the polytope Π,
we find any pair of points x and y on the same orbit of , and “bend” Π
so that we can glue x to y. That results in a surface Ω in ℝ^N, where N≥ n
since we have bent Π.
If we denote the point on Ω that corresponds to x∈Π by
ρ|_Π(x), we obtain the maps above. We first show how to implement this construction numerically,
and then consider its mathematical properties.
In mathematical terms, the surface Ω is an orbifold, a concept that generalizes
the notion of a manifold. The term -orbifold is made precise in
in <ref>, but can be read throughout this section as a surface in ℝ^N
that is “smooth almost everywhere”.
§.§ Gluing algorithms
The gluing algorithm constructs numerical approximations of ρ and of Ω.
The approximating surface is a subset of ℝ^N, where (as we explain below)
the dimension N produced by the algorithm may be larger than the embedding dimension needed in principle.
As in the linear formulation of <ref>, we start with the orbit graph 𝒢=(Γ,E),
but in this case weight the edges to obtain a weighted graph
𝒢_w = (Γ,E_w)
with weight(x,y):=d_(x,y) if (x,y)∈ E
0 if (x,y)∉E
.
The weighted graph provides approximate distances in quotient space.
The surface Ω is constructed from this graph by multidimensional scaling
(MDS) <cit.>.
MDS proceeds as follows: Let R be the matrix of squared geodesic distances, with entries
R_ij = (weighted path length from x_i to x_j in 𝒢_w)^2 .
Let 0<δ_1≤…≤δ_|Γ| be the eigenvalues and v_1,…,v_|Γ| the eigenvectors of the matrix
R̃ = -(1/2)(𝕀 - (1/|Γ|)𝕁) R (𝕀 - (1/|Γ|)𝕁)
where 𝕀 is the |Γ|×|Γ| identity matrix and 𝕁 the |Γ|×|Γ| matrix of ones.
The embedding of each point x_i in the ε-net Γ is
then given by
ρ(x_i) := (√(δ_|Γ|) v_|Γ|,i
√(δ_|Γ|-1) v_|Γ|-1,i
⋮
√(δ_|Γ|-N) v_|Γ|-N,i) .
The dimension N is chosen to minimize error in the distances.
From ρ(x_1),…,ρ(x_|Γ|), the surface Ω and the map
ρ|_Π are obtained by interpolation.
Once ρ|_Π can be computed, we can also compute
ρ:=ρ |_Π∘ p,
since the projector p can be evaluated using <ref>.
The procedure satisfies two desiderata for constructing the orbifold map:
1) facets to be glued will be brought together, and 2) distances between interior points in Π will be approximately preserved.
The embedding is unique up to isometric transformations.
The embedding step is similar to the Isomap <cit.> algorithm, but
unlike Isomap embeds into a higher-dimensional space rather than a lower-dimensional one.
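A sketch of the embedding step under the same illustrative assumption of the integer-shift group: squared geodesic distances are computed with scipy's shortest_path on the weighted orbit graph, followed by classical MDS. The rule used here to pick the embedding dimension (retaining 99% of the positive spectrum) is an assumption made for the example, since the text only requires the dimension to be chosen so that the distance error is small.

```python
import itertools
import numpy as np
from scipy.sparse.csgraph import shortest_path

# Weighted orbit graph for the integer-shift group on Pi = [0,1]^2 (illustrative).
eps = 0.1
ticks = np.arange(0.0, 1.0, eps)
nodes = np.array(list(itertools.product(ticks, ticks)))
SHIFTS = np.array(list(itertools.product((-1, 0, 1), repeat=2)), float)
d_quot = lambda x, y: np.min(np.linalg.norm(x - (y + SHIFTS), axis=1))

n = len(nodes)
Wgt = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        d = d_quot(nodes[i], nodes[j])
        if d < 2.5 * eps:                    # edge rule of the orbit graph
            Wgt[i, j] = Wgt[j, i] = d        # edge weight = quotient distance

R = shortest_path(Wgt, directed=False) ** 2  # squared geodesic distances

# Classical multidimensional scaling: double-center and eigendecompose.
J = np.eye(n) - np.ones((n, n)) / n
Rt = -0.5 * J @ R @ J
evals, evecs = np.linalg.eigh(Rt)
order = np.argsort(evals)[::-1]
pos = evals[order][evals[order] > 0]
N_hat = int(np.searchsorted(np.cumsum(pos) / pos.sum(), 0.99) + 1)
rho_hat = evecs[:, order[:N_hat]] * np.sqrt(evals[order[:N_hat]])
print("embedding dimension:", N_hat, "; embedded net shape:", rho_hat.shape)
```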
§.§ Example: Invariant neural networks
Given and Π, compute ρ and Ω using <ref>.
Choose a neural network
h_θ:ℝ^N→ℝ with parameter vector θ and set
f_θ:=h_θ∘ρ .
Then f_θ is a real-valued neural network on ℝ^n.
<ref> shows examples of f_θ for n=3, where
h_θ has three hidden layers of ten units each, with
rectified linear (relu) activations, although the input dimension N may vary according to the choice of
and Π. The parameter vector is generated at random.
Since most ways of performing interpolation in the construction of ρ are
amenable to automatic differentiation tools, this representation is easy to incorporate into machine learning pipelines.
Moreover, universality results for neural networks (e.g., <cit.>) carry over:
If a class of neural networks h_θ approximates to arbitrary precision in (ℝ^N), then the
resulting functions f_θ approximate to arbitrary precision in _ (though the approximation rate may change under composition with ρ). See <ref>.
§.§ Exact tilings
Although the properties of general orbifolds constitute one of the more demanding problems
of modern mathematics, orbifolds of crystallographic groups are particularly well-behaved, and are well-understood.
That we can draw directly on this theory is due to the fact that it uses a notion of gluing very similar to
that employed by our algorithms as a proof technique <cit.>.
The two notions align under an additional condition:
A convex polytope Π is exact for if tiles with Π, and if each
face S of Π can be represented as
S = Π∩ϕΠ for some ϕ∈ .
Not every Π with which tiles is exact—in <ref>, for example, the polytopes shown
for and are not exact, though all others are. However, given Π and , we can always
construct an exact surrogate as follows:
Choose any point x∈ℝ^n that is not a fixed point of any ϕ∈ other than the identity.
If is crystallographic, that is true for every point in the interior of Π. For each ϕ∈, the set
R_ϕ(x)
:= y∈ℝ^n | d_n(y,x)≤ d_n(y,ϕ x) ,
is a half-space in ℝ^n (see <ref>/left). The intersection
𝐃(x)
:= ⋂_ϕ∈ R_ϕ(x)
= ⋂_ϕ | ϕΠ∩Π≠∅ R_ϕ(x)
of these half-spaces is called a Dirichlet domain for (<ref>/right).
[<cit.>]
If is crystallographic,
𝐃(x) is an exact convex polytope for .
For illustration, consider the group : We start with a rectangle Π.
The group is generated by two glide reflections ϕ and ψ, each of which shifts
Π horizontally and then reflects it about one of its long edges (<ref>/left).
Exactness fails because the set Π∩ϕΠ, marked in black, is not a complete edge of Π.
A Dirichlet domain for this tiling differs significantly from Π (<ref>/right).
Although substituting 𝐃(x) for Π changes the look of the tiling,
it does not change the group—that is, we still work with the same set of transformations (rather than
another group in the same isomorphism class), and the axes of reflections are still
defined by the faces of Π rather than those of 𝐃(x).
§.§ Properties of embeddings
<ref> can be interpreted as computing a numerical approximation ρ
to a “true” embedding map ρ, namely the map in (<ref>) in the introduction.
Our main result on the nonlinear representation, <ref> below,
shows that this map indeed exists for every crystallographic group, and describes some of
its properties. The proof of the theorem shows that ρ and the set Ω can be constructed
by the following abstract gluing algorithm.
Abstract gluing construction.
1.) Glue: Identify each x∈∂Π with the unique point y∈∂Π satisfying x∼ y.
2.) Equip the glued set M with metric d_.
3.) Embed the metric space (M,d_) as a subset Ω⊂ℝ^N for some N∈ℕ.
4.) For each x∈Π, define ρ|_Π(x) as the representative of x on Ω.
5.) Set ρ:=ρ|_Π∘ p.
Since Π contains at least one point of each orbit, and the gluing step identifies all points
on the same orbit with each other, the glued set M can be regarded as the quotient set ℝ^n/.
Recall that an embedding is a map M→Ω⊂ℝ^N that is a homeomorphism
(a continuous bijection with continuous inverse) of the metric spaces (M,d_) and (Ω,d_N).
To state the theorem, we need one additional bit of terminology: The stabilizer of x in is the set of all
ϕ that leave x invariant,
(x) := ϕ∈ | ϕ x=x
see <cit.>.
We explain the role of the stabilizer in more detail in the next subsection.
Let be a crystallographic group that tiles ℝ^n with an exact convex
polytope Π. Then the set M constructed by
gluing is a compact -orbifold that
is isometric to ℝ^n/. This orbifold can be
embedded into ℝ^N for some
n ≤ N < 2(n+max_x∈Π|(x)|) < ∞ ,
that is, there is compact subset Ω⊂ℝ^N such that the
metric space (Ω,d_N) is homeomorphic to (ℝ^n/,d_).
In particular, every point x∈Π is represented by one and only one point ρ_Π(x).
We can hence define a map
ρ:ℝ^n→Ω⊂ℝ^N
as ρ(x) := ρ_Π(p(x)) .
The map ρ is continuous, surjective, and -invariant.
A function f:ℝ^n→ Y, with values
in some topological space Y, is -invariant and continuous if and only if
f=h∘ρ for some continuous h:ℝ^N→ Y .
Ω is smooth almost everywhere, in the sense that
_n{x∈Π | Ω∩ B_ε(ρ(x)) is not a manifold for any ε>0} = 0
where B_ε(z) denotes the open Euclidean metric ball of radius ε centered at z∈ℝ^N.
See <ref>.
(a) Note carefully what the theorem does and does not show about the
embedding algorithm in <ref>: It does say that the
glued set constructed by the algorithm discretizes an orbifold, and that an N-dimensional
embedding of this orbifold exists. It does
not show that the embedding computed by MDS matches this dimension—indeed, since
MDS attempts to construct an embedding that is also isometric (rather than just homeomorphic), we
must in general expect the MDS embedding dimension to be larger, and we have at present no proof
that an isometric embedding always exists.
(b) If the tiling defined by and Π is not exact,
we can nonetheless define an embedding ρ that represents continuous functions that are invariant
functions with respect to this tiling: Construct a Dirichlet domain 𝐃, and then construct ρ by applying the gluing
algorithm to 𝐃. Functions constructed as h∘ρ are then invariant for the tiling (,Π).
We have now seen different representations of continuous -invariant functions on ℝ^n, respectively
by continuous functions on Π, on the abstract space ℝ^n/, and on Ω. On Π, we must explicitly
impose the periodic boundary condition, so we are using the set
_ pbc(Π) := f̂∈(Π) | f̂ satisfies (<ref>) .
In these representations, the projector p, the quotient map q,
and the embedding map ρ play very similar roles.
We can make that observation more rigorous:
Given a crystallographic group that tiles with a convex polytope Π, consider the maps
I_Π:_ pbc(Π) →_ , f̂ ↦f̂∘ p ,
I_ℝ^n/:(ℝ^n/) →_ , ĝ ↦ĝ∘ q ,
I_Ω:(Ω) →_ , ĥ ↦ĥ∘ρ ,
where I_Ω is only defined if Π is exact. Equip all spaces with the supremum norm.
Then I_Π and I_ℝ^n/ are isometric isomorphisms, and if Π
is exact, so is I_Ω. In particular, _ is always a separable Banach space.
By <ref>, (<ref>) and <ref>, all three maps are bijections.
We also have
f̂_sup = sup_x∈Π|f̂(x)|
= sup_x∈ℝ^n|f̂(p(x))|
= f̂∘ p_sup for f̂∈_ pbc(Π) ,
and the same holds mutatis mutandis on ℝ^n/ and Ω, so all
maps are isometries. Since Π is compact, (Π) is separable
<cit.>. The same hence holds for the closed subspace _ pbc(Π),
and by isometry for _.
§.§ Why the glued surface may not be smooth
Whether or not the glued surface is smooth depends on whether the transformations in leave any points invariant. It is a known fact in geometry (and made precise in the proof of <ref>) that
the glued surface is a manifold ⟺ ϕ x ≠ x for all x∈ℝ^n and all ϕ∈ other than the identity .
Phrased in terms of the stabilizer: the glued surface is a manifold if and only if the stabilizer (x) is trivial for all x∈ℝ^n .
It is straightforward to check that (x) is a group <cit.>.
Since each ϕ is an isometry,
and shifts of ℝ^n have no fixed points, ϕ(x)=x can only hold if b_ϕ=0.
Thus, (x) is always a subset of the point group _o (in the terminology of <ref>),
which means it is finite. To illustrate its effect on the surface, consider the following examples.
(a) Recall that MacKay's construction <cit.>, as sketched in the introduction,
can be translated to crystallographic groups by setting Π=[0,1] and choosing as shifts. In this case,
(x)= for each x∈ℝ, and the glued surface is a circle, which is indeed a manifold.
The two-dimensional analogue is to choose Π=[0,1]^2 and as the group in <ref>,
in which case the glued surface is a torus as shown in <ref>, and hence again a manifold.
(b) Now suppose Π is a triangle, x one of its corners, and ϕ a 120^∘ rotation
around x, as illustrated in <ref>. Then the stabilizer (x) consists of the identity, ϕ, and ϕ^2=ϕ^-1, and the glued surface Ω=ρ(Π) is a cone with ρ(x) as its tip.
That means Ω is not a manifold, because no neighborhood of the tip can be mapped isometrically to a neighborhood in ℝ^2.
§ INVARIANT KERNELS
Throughout this section, κ:ℝ^n×ℝ^n→ℝ is a kernel, i.e., a
positive definite function, and ℍ is its reproducing kernel Hilbert space, or RKHS.
<ref> reviews definitions.
We consider kernels that are -invariant in both arguments in the sense of (<ref>),
that is,
κ(ϕ x,ψ y) = κ(x,y) for all ϕ,ψ∈ and all x,y∈ℝ^n .
That is the natural notion of invariance for most purposes, since
such kernels are precisely those that define spaces of -invariant
functions:
All functions f∈ℍ are -invariant if and only if κ is -invariant in each argument.
If κ is also continuous, all f∈ℍ are continuous, and hence
ℍ⊂_.
<ref> implies that, to define an invariant kernel, we can start with any
kernel κ̂ on the embedding space ℝ^N, and compose it with the embedding
map ρ:
Let κ̂ be a kernel on ℝ^N. Then the function
κ(x,y):=κ̂(ρ(x),ρ(y))
or in short κ=κ̂∘(ρ⊗ρ)
is a kernel on ℝ^n that is -invariant
in both arguments. If κ̂ is continuous, so is κ.
That follows immediately from <ref> and the fact that
the restriction of a kernel to a subset is again a kernel <cit.>.
Suppose κ̂ is a radial basis function (RBF) kernel with length scale ℓ on ℝ^N,
and hence of the form κ̂(z,z')=exp(-‖z-z'‖^2/(2ℓ^2)). Then κ is simply
κ(x,y)
= κ̂(ρ(x),ρ(y))
= exp(-‖ρ(x)-ρ(y)‖^2/(2ℓ^2)) .
<ref> illustrates this kernel for two of the two-dimensional groups and two of the three-dimensional groups.
Once we have constructed an invariant kernel, its application to machine learning problems is
straightforward. That becomes obvious if we define
Φ(x) = κ(x,), often called the feature map of κ
<cit.>.
Using the definition of the scalar product on ℍ and the reproducing property (see <ref>),
we then have
Φ:ℝ^n→ℍ and κ(x,y) = Φ(x),Φ(y)_ℍ .
If κ is -invariant, then Φ is also -invariant by construction.
Recall that most kernel methods in machine learning are derived by substituting
a Euclidean scalar product by Φ(x),Φ(y)_ℍ, thereby making a linear
method nonlinear. Using a -invariant kernel results in a -invariant
method.
[Invariant SVM]
A support vector machine (SVM) with kernel κ is determined by two finite sets of points 𝒳 and
𝒴 in ℝ^n. To train the SVM, one maps these points into ℍ via Φ, finds the shortest
connecting line between the convex hulls of Φ(𝒳) and Φ(𝒴), and determines
a hyperplane F that is orthogonal to this line and intersects its center—equivalently, in dual formulation, the
unique hyperplane that separates the convex hulls of Φ(𝒳) and Φ(𝒴)
and maximizes the ℍ-norm distance to both. The set of points x in ℝ^n whose
image Φ(x) lies on F is the decision surface of the SVM in ℝ^n.
The hyperplane can be specified by two functions g (an offset vector) and h (a normal vector) in ℍ:
A function f∈ℍ lies on F if and only if
f-g,h_ℍ =
0
or equivalently f,h_ℍ = g,h_ℍ .
Let x be a point in ℝ^n. If y and z are points with g=Φ(y) and h=Φ(z), then
x is on the decision surface ⟺ κ(x,z) = κ(y,z) .
Since invariance of κ implies κ(ϕ x,z)=κ(x,z), that shows the decision surface is -invariant.
<ref> shows examples.
In these figures the data were randomly generated with regions assigned labels using a random function generated as in <ref>.
The support vectors are highlighted and illustrate the effects of symmetry constraints: the decision surface can be determined by data observed far away.
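A sketch of such an SVM with scikit-learn's precomputed-kernel interface, assuming the torus embedding of the integer-shift group as a stand-in for ρ and an RBF kernel κ̂ on the embedding space; the data, labels, and hyperparameters are synthetic and purely illustrative.

```python
import numpy as np
from sklearn.svm import SVC

def rho(x):                      # torus embedding of R^2 (illustrative stand-in)
    x = np.atleast_2d(x)
    return np.concatenate([np.cos(2 * np.pi * x), np.sin(2 * np.pi * x)], axis=1)

def invariant_kernel(X, Y, ell=0.5):
    """kappa(x, y) = exp(-||rho(x) - rho(y)||^2 / (2 ell^2))."""
    RX, RY = rho(X), rho(Y)
    sq = ((RX[:, None, :] - RY[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * ell ** 2))

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(200, 2))
y = (np.sin(2 * np.pi * X[:, 0]) + np.cos(2 * np.pi * X[:, 1]) > 0).astype(int)

clf = SVC(kernel="precomputed", C=10.0).fit(invariant_kernel(X, X), y)

# The decision surface is G-invariant: shifting test points by lattice vectors
# leaves the predictions unchanged.
X_test = rng.uniform(0, 1, size=(5, 2))
print(clf.predict(invariant_kernel(X_test, X)))
print(clf.predict(invariant_kernel(X_test + np.array([3.0, -2.0]), X)))   # identical
```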
Two of the most important results on kernels are Mercer's theorem
and the compact inclusion theorem <cit.>. The latter shows the inclusion map
ℍ↪ is compact, and is used in turn to establish good
statistical properties of kernel methods, such as oracle inequalities and finite covering numbers
<cit.>. Both results assume that κ has
compact support. If κ is invariant under a crystallographic group,
its support is necessarily non-compact, but the next result shows that
versions of both theorems hold nonetheless:
If κ is continuous and -invariant in both arguments, the
inclusion map ℍ↪_ is compact.
There exist functions f_1,f_2,…∈ℍ and
scalars c_1≥ c_2≥…>0 such that
κ(x,y) = _i∈ℕc_if_i(x)f_i(y) for all x,y∈ℝ^n ,
and the scaled sequence (√(c_i)f_i) is an orthonormal basis of ℍ.
With this basis,
ℍ = f=_i∈ℕa_i√(c_i)f_i |
a_1,a_2,…∈ℝ with _i|a_i|^2<∞ ,
where each series converges in ℍ and hence (by compactness of inclusion)
also uniformly.
Intuitively, that is the case because every -invariant kernel is the pullback
of a kernel on Ω, and Ω is compact.
<ref> shows an application of such a kernel to generate a two-class classifier
with an -invariant decision surface.
§ INVARIANT GAUSSIAN PROCESSES
We now consider the problem of generating random functions
F:ℝ^n→ℝ such that each instance of F is
continuous and -invariant with probability 1. That can be done linearly
using the generalized Fourier representation, by generating the coefficients c_i
in <ref> at random. Here, we consider the nonlinear representation
instead: If we set
F := H∘ρ for a random continuous function H:ℝ^N→ℝ ,
<ref> implies that F is indeed continuous and -invariant with probability 1,
and hence a random element of
_. Conversely, the result also implies that every random
element of _ is of this form, for some random element H of (ℝ^N).
§.§ Almost surely invariant processes
Recall that a random function F:M⊆ℝ^n→ℝ is a
Gaussian process if
the joint distribution of the random vector (F(x_1),…,F(x_k)) is
Gaussian for any finite set of points x_1,…,x_k∈ M.
The mean and covariance function of a Gaussian process are defined as
μ(x) := [F(x)]
and κ(x,y) := [(F(x)-μ(x))(F(y)-μ(y))]
for x,y∈ M .
The covariance function is always positive definite, and hence a kernel on
M. The distribution of a Gaussian process is completely determined by μ and κ,
and conditions for F to satisfy continuity or stronger regularity conditions can
be formulated in terms of κ. See e.g., <cit.> for more
background.
Let H be a continuous Gaussian process on ℝ^N, with mean μ and covariance function κ.
Then F:=H∘ρ is a continuous random function on ℝ^n, and is -invariant
with probability 1. Consider any finite set of points
x_1,…,x_k∈ℝ^n such that
x_i≁x_j for all distinct i,j≤ k .
Then (F(x_1),…,F(x_k)) is a Gaussian random vector, with mean and covariance
[F(x_i)]=μ(ρ(x_i)) and Cov[F(x_i),F(x_j)]=κ(ρ(x_i),ρ(x_j))
for i,j≤ k .
Clearly, F cannot be a Gaussian process on ℝ^n: Since F is invariant,
F(x) completely determines F(ϕ(x)), so (F(x),F(ϕ x)) cannot be jointly Gaussian.
Put differently, conditioning F on its values on Π renders F non-random.
Loosely speaking, the proposition hence says that F is “as Gaussian” as a -invariant
random function can be. <ref> illustrates random functions generated by such a process.
The construction of <cit.> described in the introduction
was designed specifically for Gaussian processes, to generate periodic functions at random.
We can now generalize these processes from periodicity to crystallographic invariance:
Given and Π, construct the embedding map ρ:ℝ^n→ℝ^N.
Choose κ̂ as the RBF kernel (<ref>) on ℝ^N, and μ̂ as the constant function 0 on ℝ^N.
Then generate F as
H∼GP(μ̂,κ̂)
and
F := H∘ρ .
For visualization, draws can be approximated by the randomized feature scheme of
<cit.>. <ref> shows examples for chosen as
and on ℝ^2, and for and on ℝ^3.
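A sketch of this construction, assuming the torus embedding of the integer-shift group as a stand-in for ρ: a draw of F=H∘ρ is approximated on a finite grid by sampling the corresponding multivariate Gaussian directly (rather than via the randomized feature scheme cited above), with a small jitter term added for numerical stability.

```python
import itertools
import numpy as np

def rho(x):                      # torus embedding of R^2 (illustrative stand-in)
    x = np.atleast_2d(x)
    return np.concatenate([np.cos(2 * np.pi * x), np.sin(2 * np.pi * x)], axis=1)

def rbf(Z1, Z2, ell=0.5):
    sq = ((Z1[:, None, :] - Z2[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * ell ** 2))

# Evaluate a draw of F = H o rho on a grid covering two periods in each direction.
ticks = np.linspace(0.0, 2.0, 21)
grid = np.array(list(itertools.product(ticks, ticks)))
K = rbf(rho(grid), rho(grid)) + 1e-8 * np.eye(len(grid))   # jitter for stability

rng = np.random.default_rng(2)
F = rng.multivariate_normal(np.zeros(len(grid)), K).reshape(len(ticks), len(ticks))

# The draw repeats across periods (up to the jitter), i.e. it is G-invariant:
print(np.abs(F[0, :] - F[-1, :]).max())    # ~1e-4, set by the jitter term
```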
§.§ Distributionally invariant processes
Another type of invariance that random functions can satisfy is distributional -invariance,
which holds if
F F∘ϕ for all ϕ∈ .
Here, denotes equality in distribution.
That is equivalent to requiring that
the distribution P of F satisfies P(ϕ A)=P(A) for every measurable set A.
For crystallographic groups, distributionally invariant Gaussian processes can
be constructed by factoring the parameters, rather than
the random function F, through the embedding in <ref>:
Let μ be a real-valued function and κ a kernel on ℝ^N.
If F is the Gaussian process on ℝ^n with mean μ∘ρ
and covariance function κ∘(ρ⊗ρ), then
F is distributionally -invariant, i.e. F∘ϕ F for all ϕ∈.
Almost sure invariance implies distributional invariance; distributional invariance is
typically a much weaker property.
Frequently encountered examples of distributional invariance are all forms of
stationarity (distributional invariance under shift groups) and of
exchangeability (permutation groups).
§ THE LAPLACE OPERATOR ON INVARIANT FUNCTIONS
The results in this section describe the behavior of the Laplace operator on
-invariant functions. All of these are ingredients in the proof of the Fourier representation.
We first describe the transformation behavior of differentials of invariant
functions, in <ref>. Gradients turn out to be invariant under shifts and
equivariant under orthogonal transformations.
Gradient vector fields, and more generally vector fields
with the same transformation behavior as gradients, have a cancellation property—their integral
orthogonal to the tile boundary vanishes (<ref>).
We then define the relevant solution space for the spectral problem, which has Hilbert space
structure (so that we can define orthogonality and self-adjointness) but has smoother elements than
Ł_2, in <ref>. Once the Laplacian has been properly defined on this space, we can
use the cancellation property to show it is self-adjoint.
§.§ Differentials and gradients of invariant functions
Given a differentiable function f:ℝ^n→ℝ^m, denote the differential at x as
Df(x)
= (∂ f_j/∂ x_i)_i≤ n,j≤ m ∈ℝ^n× m .
The next result summarizes how invariance of f under a transformation ϕ affects Df.
Note the order of operations matters: D(f∘ϕ) is the differential of the function f∘ϕ, whereas
(Df)∘ϕ transforms the differential Df of f by ϕ.
If f:ℝ^n→ℝ^m is
invariant under an isometry ϕ x=A_ϕ x+b_ϕ and differentiable, then
(Df)(ϕ x) = Df(x)· A_ϕ^ .
If in particular f:ℝ^n→ℝ, the gradient satisfies
∇ f(ϕ x) = A_ϕ·∇ f(x) for all x∈ℝ^n .
The Hessian matrix H_f and the Laplacian satisfy
H_f(ϕ x) = A_ϕH_f(x)A_ϕ^ and Δ f(ϕ x) = Δ f(x)
for all x∈ℝ^n .
Since ϕ is affine, its differential (Dϕ)(x)=A_ϕ is constant. The chain rule
shows
D(f∘ϕ)
=
(Df)∘ϕ· (Dϕ)
=
((Df)∘ϕ)· A_ϕ .
By invariance, f∘ϕ and f are the same function, and hence D(f∘ϕ)=Df.
Substituting into the identity above shows (<ref>), since A_ϕ^=A_ϕ^-1.
For m=1, the transpose D^=∇ is the gradient, and (<ref>) becomes (<ref>). Using (<ref>), the Hessian can be written as
H_f = D(∇ f) = D(A_ϕ∇ f∘ϕ^-1) .
Another application of the chain rule then shows
H_f
=
D(A_ϕ∇ f)∘ϕ^-1· Dϕ^-1 =
A_ϕ(D∇ f)∘ϕ^-1· A_ϕ^-1 =
A_ϕ(H_f∘ϕ^-1)A_ϕ^ ,
which is the first statement in (<ref>). Since the Laplacian is the trace of H_f, and
the trace is invariant under change of basis, that implies
Δ f(ϕ x)
= tr(H_f(ϕ x))
= tr(A_ϕH_f(x)A_ϕ^)
= tr(H_f(x))
= Δ f(x) .
§.§ Flux through the tile boundary
The next result is the key tool we use to prove self-adjointness of the Laplacian.
We have seen above that the gradient of a -invariant function transforms
under according to (<ref>).
We now abstract from the specific function ∇ f,
and consider any vector field F:Π→ℝ^n
that transforms like the gradient on the tile boundary, i.e.
F(y) = A_ϕF(x) whenever y=ϕ x .
For a polytope Π with facets S_1,…,S_k, we define the
normal field on the boundary as
_Π:∂Π→ℝ^n
given by _(x) := _i if x∈ S_i^∘
0 otherwise
where _i is the unit normal vector of the facet S_i, directed outward with respect to Π.
In vector analysis, the projection F^_Π of a vector field onto the direction orthogonal
to ∂Π is known as the flux of F through the boundary.
Let be a crystallographic group that tiles ℝ^n with a convex polytope Π.
If a vector field F:Π→ℝ^n is integrable on ∂Π
and satisfies (<ref>), then
∫_∂Π F(x)^⊤_Π(x) _n-1(dx) = 0 .
See <ref>.
§.§ The Sobolev space of invariant functions
The proof of <ref> follows a well-established strategy in spectral theory:
The relevant spectral results hold for self-adjoint operators, and self-adjointness can only
be defined with respect to an inner product. Since the space ^2 on which the Laplace
operator is defined is a Banach space, but has no inner product, one must hence first embed the problem into
a suitable Hilbert space. For the Laplacian, this is generally a first-order Sobolev space;
see <ref> for a review of definitions, and <cit.> for more
on spectral theory and the general approach.
In our case, we proceed as follows:
Since invariant functions are completely determined by their values on Π, we can equivalently
solve the problem on the bounded domain Π rather than the unbounded domain ℝ^n.
That gives us access to a number of results specific to bounded domains.
We also observe that the invariance constraint e=e∘ϕ is a linear constraint—if two functions
satisfy it, so do their linear combinations—so the feasible set of this constraint is a vector space,
and we can encode the constraint by restriction to a suitable subspace.
We start with the vector space
ℋ := f|_Π^∘ | f:ℝ^n→ℝ infinitely often differentiable and -invariant .
The elements of ℋ are hence infinitely often differentiable on Π^∘, and their
continuous extensions to the closure Π satisfy the periodic boundary condition (<ref>).
We then define the Sobolev space of candidate solutions as
_̋ := closure of ℋ in ^̋1(Π^∘) ,
equipped with the norm and inner product of ^̋1(Π^∘).
As a closed subspace of a Hilbert space, it is a Hilbert space.
§.§ The Laplace operator on _̋
We now have to extend Δ to all elements of _̋. In general,
a linear operator Λ on a closed subspace V⊂^̋1(Π^∘) is an
extension of Δ to V if it satisfies
Λ f = Δ f for all f∈ V∩^2(Π^∘) .
The extended operator is self-adjoint on V if
Λ f,h_^̋1 = f,Λ h_^̋1 for all f,h∈ V .
To prove self-adjointness, one decomposes Λ as
-Δ f,h_Ł_2 =
(integral over Γ^∘ that is symmetric in f and h)
-
(integral over ∂Γ) .
This is the Green identity alluded to in the introduction.
To make it precise, we need two quantities: One is the energy form
or energy product
a(f,h) := ∫_Γ∇ f(x)^⊤∇ h(x) _n(dx) .
Since it only involves first derivatives, and both appear under the integral, it is
well-defined for any f,h∈^̋1(Π^∘), and is hence a symmetric bilinear form
a:^̋1×^̋1→ℝ. It is positive definite, since
a(f,h) = _i≤ n∂_i f,∂_i h_Ł_2 and hence
a(f,f) = _i≤ n‖∂_if‖^2_Ł_2 ≥ 0 .
Substituting the definition of a into that of the ^̋1 scalar product
in (<ref>) shows that
f,h_^̋1 = f,h_Ł_2 + a(f,h)
for all f,h∈^̋1(Γ^∘) .
The second quantity is the conormal derivative
∂_f(x) := ∇ f(x)^_Π(x) .
The precise statement of the decomposition above is then as follows.
[Green's identity]
If the domain Π is sufficiently regular—in particular, if Π is a convex polytope—then
-Λ f,h_Ł_2 =
a(f,h)
- ∫_∂Γ∂_f(x)h(x)_n-1(dx)
for f,h∈^̋1(Γ^∘) .
Informally, this shows that Δ “behaves self-adjointly” in the interior
of Π, where derivatives can be computed in all directions around a point.
At points on ∂Π, the boundary truncates derivatives in some direction,
and that requires a correction term ∂_f.
Let be a crystallographic group that tiles ℝ^n with a convex polytope Π.
Then Δ has a unique extension to a linear operator Λ on _̋.
This operator is self-adjoint and continuous on _̋, and satisfies
(i) -Λ f,h_Ł_2 = a(f,h)
and (ii) -Λ f,f_^̋1 ≥ f^2_^̋1-f^2_Ł_2
for all f,h∈_̋.
The proof uses the flux property to show that crystallographic symmetry
makes the boundary term cancel. Since a is symmetric, that makes Λ self-adjoint.
In the parlance of elliptic differential equations, (<ref>ii) says
that Λ is coercive on _̋ (see <cit.>).
See <ref>.
§.§ Linear representations from a nonlinear ansatz
The properties of Laplace operators lead naturally to a class of numerical approximations
known as Galerkin methods (e.g., <cit.>). Using the embedding map
ρ, we can derive a Galerkin method that can be used to compute the Fourier basis functions
in <ref>—that is, we can use the nonlinear representation approach in the
numerical approximation of the linear representation. The Galerkin method
can be more accurate than the spectral approach in <ref>,
and was used to render Figures <ref>, <ref>
and <ref>.
Galerkin methods posit basis functions
χ_1,…,χ_m and approximate an infinite dimensional function space by the finite-dimensional subspace
spanχ_1,…,χ_m.
In our case, we approximate solutions e of (<ref>) by approximating their
restrictions e|_Π. We hence need functions χ_i:Π→ℝ.
We start with functions χ̃_i:ℝ^N→ℝ, and set
χ_i:=χ̃_i∘ρ. We then assume e|_Π of (<ref>) is in the
span, and hence of the form
e|_Π = _i≤ mc_iχ_i .
If e solves the eigenvalue problem (<ref>), e|_Π satisfies
-Δ e|_Π,χ_j_Ł_2 = λe|_Π,χ_j_Ł_2 for all j≤ m .
Applying (<ref>) and substituting in (<ref>) shows
a(e|_Π,χ_j) = λe|_Π,χ_j_Ł_2 and _ic_ia(χ_i,χ_j) = λ_ic_iχ_i,χ_j_Ł_2 .
If we define matrices with entries A_ij:=a(χ_i,χ_j)
and B_ij:=χ_i,χ_j_Ł_2, that becomes
Ac = λ Bc where c=(c_1,…,c_m)^ .
The entries of A and B can be computed with off-the-shelf cubature
methods, and we can then solve for the pair (λ,c).
(a) If ρ and the basis functions are implemented with JAX <cit.> or a similar
automatic differentiation tool, the gradients in (<ref>) are available, which avoids finite
difference approximation and explicit computation of second derivatives.
(b) Neumann boundary conditions for reflections (see <ref>) can be
enforced using the methods of <cit.>.
(c) The basis functions χ̃_i can be almost any basis on ℝ^N.
Figures <ref>–<ref> were rendered by placing points x_1,x_2,… uniformly on Π,
and centering radial basis functions at the points ρ(x_i) in ℝ^N.
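A sketch of the resulting Galerkin computation, assuming the torus embedding of the integer-shift group as a stand-in for ρ, radial basis functions centered at embedded points as the χ̃_i, Monte Carlo quadrature over Π=[0,1]^2, and finite-difference gradients in place of automatic differentiation; all of these choices are illustrative assumptions rather than part of the method.

```python
import numpy as np
from scipy.linalg import eigh

def rho(x):                      # torus embedding of R^2 (illustrative stand-in)
    x = np.atleast_2d(x)
    return np.concatenate([np.cos(2 * np.pi * x), np.sin(2 * np.pi * x)], axis=1)

rng = np.random.default_rng(3)
centers = rho(rng.uniform(0, 1, size=(25, 2)))    # RBF centers rho(x_i) in R^N

def basis(x, ell=0.6):
    """chi_i(x) = exp(-||rho(x) - rho(x_i)||^2 / (2 ell^2)), i = 1, ..., m."""
    sq = ((rho(x)[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * ell ** 2))

def basis_grad(x, h=1e-5):
    # Finite-difference gradients of the chi_i; autodiff (e.g. JAX) would also work.
    g = []
    for d in range(x.shape[1]):
        e = np.zeros(x.shape[1]); e[d] = h
        g.append((basis(x + e) - basis(x - e)) / (2 * h))
    return np.stack(g, axis=-1)                   # shape (points, m, n)

# Monte Carlo quadrature over Pi = [0,1]^2 (volume 1).
xs = rng.uniform(0, 1, size=(4000, 2))
Phi, Grad = basis(xs), basis_grad(xs)
B = Phi.T @ Phi / len(xs) + 1e-9 * np.eye(len(centers))   # mass matrix (+ small ridge)
A = np.einsum("pid,pjd->ij", Grad, Grad) / len(xs)        # energy form a(chi_i, chi_j)

lam, C = eigh(A, B)                               # generalized problem A c = lam B c
print("approximate eigenvalues:", np.round(lam[:6], 2))
# The smallest value approximates lambda_1 = 0; the next ones coarsely approximate
# the leading nonzero eigenvalues, limited by the small basis and the quadrature.
```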
§ RELATED WORK AND ADDITIONAL REFERENCES
In machine learning.
There has been substantial work on group invariance and equivariance in machine learning, with a focus on finite and compact groups.
Most salient has been work on approximate translation invariance and equivariance in convolutional neural networks for images <cit.> and speech <cit.>, although this work has not been framed in a group-theoretic way.
To our knowledge the earliest explicit consideration of compact and finite group structure in machine learning was from a Fourier perspective by <cit.>; this was primarily in the context of Hilbert-space formalisms of learning.
The current perspective on compact and finite group equivariance in deep learning arose largely from <cit.>.
There has been widespread application of machine learning models when group invariance or equivariance is desired, e.g., permutation invariance for sets <cit.> and equivariance for neural auction design <cit.>.
In the natural sciences, rotation invariance has been used for astronomy <cit.> and E(3) equivariance has proved important for molecular applications <cit.>.
Permutation equivariance of transformer architectures plays a crucial role in large language models <cit.>.
In crystallography.
Crystallographers have completely described the 17 two-dimensional and 230 three-dimensional
crystallographic groups and various tilings they describe,
and tabulated many of their properties <cit.>.
The emphasis in this work differs somewhat from that in mathematics—in particular,
work in crystallography emphasizes polytopes Π that occur in crystal structures
(and which are not necessarily exact in the terminology used in <ref>),
whereas more abstract work in geometry tends to work with Dirichlet domains or other exact tilings.
A long line of work in the context of X-ray crystallography modifies the matrices
that occur in fast Fourier transforms (FFTs) to speed up computation if a crystallographic symmetry
is present in the data. This starts with the work of
<cit.> and <cit.>, see also <cit.>.
The introduction of <cit.> gives an overview.
This work does not attempt to derive invariant Fourier bases.
In Fourier and PDE analysis.
As we have already explained in some detail, the special case of <ref>
for Π=[0,1]^n and =ℤ^n yields
the Fourier transform. For this problem, the periodic boundary condition can be
replaced by a Neumann condition, and spectral problems with Neumann conditions
are standard material in textbooks <cit.>.
For shifts that are not axis-parallel, the
periodic boundary condition is known
as a Born-von Karman boundary condition <cit.>. We are not aware of
extensions to crystallographic groups.
An introduction to the PDE techniques used in our proofs
can be found in <cit.>. The conditions imposed there are too restrictive
for our problems, however; a treatment general enough to cover all results we use
is given by <cit.>.
In geometry.
<cit.> coined the term orbifold in the
1970s. Commonly cited references include <cit.>;
<cit.> has a detailed bibliography.
These all focus on general groups, however, for which the theory is much harder than in our case.
The quotient space structure of crystallographic groups was already understood much earlier
by the Göttingen and Moscow schools <cit.>.
A readable introduction to isometry groups and their quotients is given by <cit.>.
The comprehensive account of <cit.> is
more demanding, but covers all results needed in our proofs.
<cit.> cover the
geometric aspects of crystallographic groups. <cit.>
explain the geometry of orbifolds heuristically, with many illustrations.
§ SOME OPEN PROBLEMS
Our approach raises a range of further questions well beyond the scope of the present paper,
including in particular those concerning numerical and statistical accuracy. We briefly
discuss some aspects of this problem.
Linear representation.
Suppose we represent a -invariant continuous function f
by evaluating the generalized Fourier basis in <ref>
using the spectral algorithm in <ref>.
The algorithm returns numerical approximations ê_1,ê_2,… of the
basis functions. We may then expand f as
f ≈ _i=1^m c_iê_i .
There are three principal sources of error in this representation:
* The truncation error, since m is finite.
* Any error incurred in computation of the coefficients c_i.
* The error incurred by approximating the actual basis functions _i by ê_i.
The truncation error (1) concerns the question how well the vector space span_1,…,_m
approximates the space _ or Ł_2(Π). This problem is
studied in approximation theory. Depending on the context, one may choose the first m basis vectors
(a strategy called “linear approximation” in approximation theory), or greedily choose those m basis
vectors that minimize some error measure (“nonlinear approximation”), see <cit.>.
Problem (2) depends on the function f, and on how it is represented computationally. If f must
itself be reconstructed from samples, the coefficients are themselves estimators and incur statistical
errors.
The error immediately related to our method is (3). For the method of <Ref>, it depends on how well the graph Laplacian used
in <ref> approximates the Laplacian Δ.
This problem has been studied in a number of fields, including machine learning in the context of dimensionality
reduction <cit.> and numerical mathematics in the context of
homogenous Helmholtz equations <cit.>, and is the subject of a rich literature
<cit.>.
Available results show that, as ε→ 0 in the ε-net, the matrix L converges to Δ,
where the approximation can be measured in different notions of convergence, in particular pointwise and spectral convergence.
The cited results all concern the manifold case. We are not aware of similar results for orbifolds.
For the method of <Ref>, the error depends largely on the choice of basis in ℝ^N and the accuracy of the numerical integrals, as well as the orbifold map approximation itself (see below).
Error analysis of the Rayleigh-Ritz method has a long history, see, e.g., <cit.>.
Nonlinear representation.
If we define a -invariant statistical or machine learning model on ℝ^n by factoring it through an
orbifold, one may ask approximation questions of a more statistical flavor:
Suppose we define a class ℋ=h_θ|θ∈ T of functions h_θ:ℝ^N→ℝ
on the embedding space ℝ^N, with some parameter space T.
We then define a class ℱ of -invariant functions on ℝ^n as
ℱ := f_θ|θ∈ T where
f_θ := h_θ∘ρ .
Depending on the context, we may think of the functions f_θ e.g., as neural networks or regressors.
The task is then to conduct inference, i.e., to compute a point estimate θ̂ of θ
(say by maximum likelihood estimation or empirical risk minimization), or to compute a posterior
on T in a Bayesian setup.
Since ℋ and ℱ share the same parameter space, any such inference task
can be “pushed forward” to the embedding space, that is,
inference under ℱ given x_1,…,x_n
= inference under ℋ given ρ(x_1),…,ρ(x_n)
The error can again be separated into components:
* The statistical error associated with fitting h_θ|θ∈ T.
* The “forward distortion” introduced by the map x_j↦ρ(x_j).
* The “backward distortion” introduced by the map h_θ↦ h_θ∘ρ.
Problem (1) reduces to the statistical properties of ℋ, and depends on both the
model and the chosen inference method. Problem (2) and (3), however, raise a number of new questions:
The map ρ is, by <ref>, bijective (which means it does not introduce
identifiability problems) and continuous. As the proof of <ref> shows,
it also preserves density properties of certain function spaces, which can be thought of
as a qualitative approximation result. Quantitative results are a different matter:
To bound the effect of transformations on
statistical errors typically requires a stronger property than continuity, such as differentiability
or at least a Lipschitz property. In results on manifold learning, the curvature of
Ω often plays an explicit role. Orbifolds introduce a further challenge,
since smoothness properties fail at the
tips and edges introduced by points with non-trivial stabilizers.
On the other hand, non-differentiabilities of crystallographic orbifolds
have lower-bounded opening angles <cit.>—note the tip of the cone in
<ref>, for example, is not a cusp—so it may be possible to mitigate these problems.
§.§ Acknowledgements
The authors would like to thank Elif Ertekin and Eric Toberer for valuable discussions.
RPA is supported in part by NSF grants IIS-2007278 and OAC-2118201. PO is supported by
the Gatsby Charitable Foundation.
Department of Computer Science
Princeton University
<https://www.cs.princeton.edu/ rpa>
Gatsby Computational Neuroscience Unit
University College London
<https://www.gatsby.ucl.ac.uk/ porbanz>
Appendix
The first three sections of this appendix provide mathematical background on isometries
(<ref>), function spaces and smoothness (<ref>), orbifolds
(<ref>), and spectral theory (<ref>). The proof of the Fourier representation is subdivided into three parts:
We first prove the flux property, <ref>, in <ref>, and
<ref> on self-adjointness of the Laplacian in <ref>.
Using these results, we then prove the Fourier representation in <ref>.
The proof of the embedding theorem (<ref>) follows in <ref>.
<ref> collects all proofs on kernels and Gaussian processes.
§ BACKGROUND I: ISOMETRIES OF EUCLIDEAN SPACE
Isometries are invertible functions that preserve distance. To define
an isometry between two sets V and W, both must be equipped with metrics,
say d_V and d_W.
A map ϕ:V→ W is then an isometry if it is one-to-one
and satisfies
d_W(ϕ(v_1),ϕ(v_2)) = d_V(v_1,v_2)
for all v_1,v_2∈ V .
Since this implies ϕ is Lipschitz, isometries are always continuous.
If W=V, then ϕ is necessarily bijective.
An isometry of ℝ^n is a bijection ϕ:ℝ^n→ℝ^n that satisfies
d_n(ϕ x,ϕ y) = d_n(x,y) for all x,y∈ℝ^n .
Identity (<ref>) shows that every isometry can be uniquely
represented as an orthogonal transformation followed by a shift.
Loosely speaking, an isometry may shift, rotate, or flip M, but cannot change its shape or volume.
Recall that a set of functions ℝ^n→ℝ^n
is a group if
it contains the identity map, and if ϕ,ψ∈ then ϕ∘ψ∈
and ϕ^-1∈.
The set of all isometries of ℝ^n forms a group, called the
Euclidean group of order n.
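As a concrete aside, the decomposition ϕ(x)=A_ϕ x+b_ϕ can be carried around computationally as the pair (A_ϕ,b_ϕ); the short sketch below (purely illustrative, with an arbitrary rotation and shift) composes and inverts such pairs and checks that distances are preserved.

```python
import numpy as np

class Isometry:
    """phi(x) = A x + b with A orthogonal; composition and inversion stay in the group."""
    def __init__(self, A, b):
        self.A, self.b = np.asarray(A, float), np.asarray(b, float)
    def __call__(self, x):
        return self.A @ np.asarray(x, float) + self.b
    def compose(self, other):                     # (phi o psi)(x) = phi(psi(x))
        return Isometry(self.A @ other.A, self.A @ other.b + self.b)
    def inverse(self):
        return Isometry(self.A.T, -self.A.T @ self.b)

theta = np.pi / 2
rot = Isometry([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]], [0.0, 0.0])
shift = Isometry(np.eye(2), [1.0, 0.0])
phi = rot.compose(shift)

x, y = np.array([0.2, 0.7]), np.array([-1.3, 0.4])
print(np.isclose(np.linalg.norm(phi(x) - phi(y)), np.linalg.norm(x - y)))   # True
print(np.allclose(phi.inverse()(phi(x)), x))                                # True
```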
§.§ More on crystallographic groups
Representation by shifts and orthogonal transformations.
Since every isometry can be decomposed into an orthogonal transformation
and a shift according to (<ref>), every crystallographic group has two natural subgroups:
One is the group
_o := ϕ∈ | ϕ(x)=Ax for some A∈𝕆_n = ∩𝕆_n
of purely orthogonal transformations. This is an example of a point group,
since all its elements have a common fixed point (namely the origin).
It is always finite: Fix any x on the unit sphere in ℝ^n. Then ϕ(x) is also on the sphere for every ϕ∈_o, since A_ϕ is orthogonal. However, discreteness requires there can only be finitely many such points ϕ(x) on the sphere.
The other is the group of pure shifts,
_t := ϕ∈ | ϕ(x)=x+b for some b∈ℝ^n .
One can show there are
linearly independent vectors b_1,…,b_n such that
_t := x↦ x+b | b=a_1b_1+…+a_nb_n for a_1,…,a_n∈ℤ .
Thus, the generating set for a crystallographic group on ℝ^n always includes n linearly independent shifts.
§.§ Equivalence to definitions in the literature
Our definition of a crystallographic group in <ref> differs
from those in the literature—we have chosen it for simplicity, but must verify it is equivalent.
There are two standard definitions of crystallographic groups: Perhaps the most common one, used for example by
<cit.>, is as a discrete group of isometries for which ℝ^n/ is compact in the quotient topology.
Another is as a group of isometries of ℝ^n such that ℝ^n/ has finite volume when identified with a subset of Euclidean space
<cit.>. These are known to be equivalent <cit.>.
Our definition is equivalent to both:
A group is crystallographic in the sense of <ref>
if and only if it is a discrete group of isometries of ℝ^n such
that ℝ^n/ is compact.
If is crystallographic in our sense,
it is discrete (see <ref>), and ℝ^n/ is compact
by <ref>, so it satisfies the second definition above.
Conversely, if satisfies Thurston's definition,
it tiles ℝ^n with some set Π. This set can always be chosen as a convex polytope
<cit.>, so is crystallographic in our sense.
We note only en passant that there are tilings that cannot be described
by a group of isometries. That is not at all obvious—the question was one of Hilbert's
problems—but counter-examples of such tilings (with non-convex polytopes) are now
known <cit.>.
§ BACKGROUND II: FUNCTION SPACES
This section briefly reviews concepts from functional analysis that play a role in the proofs.
Helpful references include <cit.> on general functional analysis
and Banach spaces, <cit.> on Sobolev spaces,
<cit.> on reproducing kernel Hilbert spaces, and <cit.>
on compact operators.
§.§ Spans and their closures
Consider a Banach space V and a subset ℱ⊂ V.
The span of ℱ is the set
span(ℱ)
=
_i≤ nc_if_i | n∈ℕ,c_i∈ℝ,f_i∈ℱ
of finite linear combinations of elements of ℱ.
Since function spaces are typically infinite-dimensional, we also consider
infinite linear combinations. These are defined with respect to a norm :
f = _i∈ℕc_if_i
means f-_i≤ nc_if_i → 0 as n→∞ .
In other words, to get from the span to the set of infinite linear combinations,
we take the closure in the relevant norm:
_i∈ℕc_if_i | c_i∈ℝ,f_i∈ℱ = span(ℱ)
§.§ Bases
A Hilbert space ℍ is a Banach space whose norm is induced by an inner product ,_ℍ,
that is,
f_ℍ = √(f,f_ℍ) .
A sequence f_1,f_2,… in a Hilbert space is an
orthonormal system if f_i,f_j=δ_ij, where δ is the Kronecker symbol
(the indicator function of i=j).
An orthonormal system is complete if its span is dense in ℍ, that is, if
ℍ = spanf_1,f_2,… ,
where the closure is taken in the norm of ℍ. A complete orthonormal system
is also called an orthonormal basis. If f_1,f_2,… is an orthonormal
basis, ℍ can be represented as
ℍ = {_i∈ℕc_if_i
(convergence in _ℍ) |
c_1,c_2,…∈ℝ with _ic_i^2<∞} .
§.§ Ł_2 spaces
For any M and a
σ-finite measure ν on M, the Ł_2-scalar product and pseudonorm are
f,g_Ł_2(ν) := _Mf(x)g(x)ν(dx)
and f_Ł_2(ν):=√(f,f_Ł_2(ν)) .
To make _Ł_2 a norm, one defines the equivalence classes
[f]:=g | f-g_Ł_2=0 of functions identical outside a null set, and
the vector space
Ł_2(ν) := [f] | f:M→ℝ and f_Ł_2<∞
of such equivalence classes, which is a separable Hilbert space.
Although its elements are not technically functions,
we use the notation f∈Ł_2 rather than [f]∈Ł_2.
We write Ł_2(ℝ^n) and Ł_2(Π) respectively if ν is Euclidean volume
on ℝ^n or on Π.
See <cit.> or <cit.> for background on Ł_2 spaces.
§.§ Reproducing kernel Hilbert spaces
Consider a set M⊆ℝ^n.
A symmetric positive definite function κ:M× M→ℝ
is called a kernel.
A kernel defines a Hilbert space as follows: The formula
_ia_iκ(x_i,), _jb_jκ(y_j,)_ℍ := _i,ja_ib_jκ(x_i,y_j) for a_i,b_j∈ℝ and x_i,y_j∈ M
defines a scalar product on spanκ(x,)|x∈ M.
The closure
ℍ := spanκ(x,)|x∈ M with respect to the norm f_κ := √(f,f_ℍ)
is a real, separable Hilbert space with inner product ,_ℍ, called the
reproducing kernel Hilbert space or RKHS of k.
Every RKHS satisfies the “reproducing property”
f(x) = f,κ(x,)_ℍ for all f∈ℍ and all x∈ M .
In particular, κ(x,y)=κ(x,),κ(y,)_ℍ.
If f_1,f_2,… is an orthonormal basis of ℍ, then
κ(x,y) = _i∈ℕf_i(x)f_i(y) for all x,y∈ M .
If ℍ is an RKHS, the map f↦ f(x) is continuous for each x∈ M.
Conversely, if ℍ is any Hilbert space of real-valued functions on M, and if
the maps f↦ f(x) are continuous on ℍ for all x∈ M, there
is a unique kernel satisfying (<ref>) that generates
ℍ as its RKHS.
§.§ Spaces of continuous functions
For any set M, the vector space (M) of continuous functions equipped with the
norm _sup is a Banach space. It is separable if M is compact
<cit.>.
In the proof of the spectral theorem, we must also consider the set
_u(M) := f∈(M) | f uniformly continuous ,
and the compactly supported functions
_c(M) = f∈(M) | f=0 outside a compact set K⊂ M .
We recall some basic facts from analysis that are used in the proofs:
[<cit.>]
(i) Every continuous function on a compact set is uniformly continuous.
(ii) Every uniformly continuous function f on a set M⊆ℝ^n has a unique continuous
extension f̅ to the closure of M. Its value at a boundary point x∈∂ M is given by
f̅(x)=lim_j f(x_j) for any sequence of points x_j∈ M with x_j→ x.
§.§ Smoothness spaces
Smoothness spaces quantify the smoothness of functions in terms of a norm.
Two types of such spaces play a role in our results, namely ^k spaces
and Sobolev spaces. Both define smoothness via derivatives:
We denote partial derivatives as
∂^αf
:= ∂^|α|f/∂ x_1^α_1⋯∂ x_n^α_n where α=(α_1,…,α_n)∈ℕ^n and |α|:=α_1+…+α_n .
If we are taking a derivative with respect to the ith coordinate, we use a subscript,
∂_i f := ∂ f/∂ x_i
The set ^k of k times continuously differentiable functions can then be
represented as
𝐂^k(M)
:= f∈𝐂(M) | ∂^αf∈𝐂(M)
whenever
|α|≤ k where k∈ℕ∪0,∞ .
Since that means the norm of 𝐂 is applicable to ∂^αf,
we can define
f_𝐂^k := f_sup
+
_|α|≤ k∂^α f_sup .
It can be shown that this is again a norm, and that it makes ^k a Banach space
<cit.>.
^k functions are uniformly continuous, and even very smooth functions approximate
elements of Ł_2 to arbitrary precision:
Let M⊆ℝ^n be a set.
(i) If f∈^k(M) for k≥ 1, then f and its first k-1 derivatives are uniformly continuous.
(ii) The set _c(M)∩^∞(M) is dense in Ł_2(M).
The ^k norms measure smoothness in a worst-case sense. To measure average smoothness instead,
we can replace the sup norm by the Ł_2(M)-norm: The function
f_^̋k := f_Ł_2
+
_|α|≤ k∂^α f_Ł_2 ,
is a norm, called the Sobolev norm of order k. It makes the set
^̋k(M):=f∈Ł_2(M) | f_^̋k<∞ = f∈Ł_2(M) | ∂^α f∈Ł_2(M) for all |α|≤ k
a Banach space, and even a Hilbert space, called the Sobolev space of order k.
We will only work with the spaces ^̋1(M). A inner product on ^̋1(M) is given by
f,g__̋1 := f,g_Ł_2 + _i≤ n∂_i f,∂_i g_Ł_2 .
The Sobolev norms are stronger than the Ł_2 norm: We have
f_Ł_2(M) ≤ c_Mf__̋1(M) for all f∈Ł_2(M) and some c_M>0 .
Consequently, the approximation property in <ref>(ii) does not
necessarily hold in the Sobolev norm. Whether it does depends on whether the geometry of
the domain M is sufficiently regular:
Let Γ be a Lipschitz domain (such as a convex polytope). Then
_c(Γ)∩^∞(Γ) is dense in ^̋1(Γ^∘).
A readable introduction to Sobolev spaces is given by <cit.>.
The monographs of <cit.> and <cit.> are comprehensive accounts.
§.§ Inclusion maps
If V⊂ W are two sets, the inclusion map or injection map ι:V↪ W
is the restriction of the identity on W to V. Loosely speaking, ι maps each point v in V to itself,
but v is regarded as an element of V and its image ι(v) as an element of W.
This distinction is not consequential if V and W are simply sets without further structure, but if both
are equipped with topologies, the properties of ι encode relationships
between these topologies.
Continuous inclusions. Suppose both V and W are equipped with topologies.
Call these the V- and W-topology. The restriction of the W-topology to V, often called the
relative W-topology, consists of all sets of the form A∩ V, where
A⊂ W is open in W. Since A∩ V is precisely the preimage ι^-1A, and
continuity means that preimages of open sets are open, we have
ι continuous ⟺ the V-topology is at least as fine as the restricted W-topology.
Inclusions between Banach spaces.
Let T:V→ W be a map from a Banach space V to another Banach space W.
If such a map is linear, it is called a linear operator. It is continuous if and only if it is bounded,
sup_{v∈ V} ‖T(v)‖_W/‖v‖_V < ∞, or equivalently ‖T(v)‖_W ≤ c‖v‖_V for some c>0 .
If V is a vector subspace of W, then ι is automatically linear, so it is continuous iff
‖v‖_W = ‖ι(v)‖_W ≤ c‖v‖_V .
Saying that ι is continuous is hence another way of saying that ‖·‖_V is stronger than ‖·‖_W. If V and W are smoothness
spaces, continuity of ι can hence often be interpreted as the elements of V being smoother than those of W.
A set A⊂ V is norm-bounded if
sup_{v,v'∈ A} ‖v-v'‖_V < ∞ .
A linear operator between Banach spaces is compact if the image T(A) of every norm-bounded set A⊂ V
has compact closure in W <cit.>. The inclusion is hence compact iff
A⊂ V is bounded in ‖·‖_V ⟹ the ‖·‖_W-closure of A in W is compact.
If V and W are smoothness spaces, the inclusion is often compact if
V is in some suitable sense smoother than W. The well-known Arzela-Ascoli theorem <cit.>, for example, can be interpreted in
this way. For Sobolev spaces, a family of results known as Rellich-Kondrachov theorems <cit.> shows that, under suitable conditions on the domain,
inclusions of the form ^̋k+m↪^̋k and ^̋k+m↪^k exist and are compact if
the difference m in smoothness is large enough. The following version is adapted to our purposes:
Let Π be a polytope and M⊆Π^∘ an open set. Then
^̋k+1(M)⊂𝐂^k(M) for k≥ 0, and the inclusion map is compact.
Since Π is a polytope, it has the strong local Lipschitz property in the terminology of
<cit.>.
By the relevant version of the Rellich-Kondrachov theorem, that implies that the set of restrictions of
functions in ^̋k+1() from to M is a compactly embedded subset
of ^k(M) <cit.>. The image of ^̋k+1() under the projection f↦ f|_M
is precisely ^̋k+1(M) <cit.>.
§ BACKGROUND III: ORBIFOLDS
In this section, we give a rigorous definition of orbifolds and review those results from the literature required for our proofs.
For more background, see
<cit.>.
<cit.> provides an accessible introduction to gluing
and quotient spaces. Most results below are adapted from the monograph of
<cit.>. Ratcliffe's formalism is very general and
can be simplified significantly for our purposes. We state results here in just
enough generality to apply to crystallographic groups.
§.§ Motivation: Manifolds
To motivate the somewhat abstract definition of an orbifold, we start with that of a manifold,
and then generalize to orbifolds below.
Recall that a set M is a manifold if its topology “locally looks like ℝ^n”.
This idea can be formalized in a number of ways. We first give a definition using a metric, which is of
the form often encountered in machine learning and statistics. We then generalize the metric definition to
a more abstract one that brings us almost to orbifolds as we see in the following section.
Metric definition.
Let M be a set equipped with a metric d_M. We then call M a manifold if,
for every u∈ M, we can choose a sufficiently small ε(u)>0 such that the d_M-ball around u of radius ε is isometric to a d_n-ball of the same radius in ℝ^n.
There is, in other words, an isometry
θ_u: B_ε(u)(u,d_M) → B_ε(u)(θ_u(u),d_n) ⊂ ℝ^n
for each u∈ M .
For example, the circle, equipped with the geodesic distance, is a manifold in the sense of this definition:
It is not possible to map the entire circle isometrically to a subset of ℝ.
However, the ball B_ε(u)(u,d_M) around a point u is a semiarc, drawn in black below:
[Figure: the circle M, with the ball B_ε(u)(u,d_M) around the point u drawn as a black semiarc, and its isometric image under θ_u, a black interval around θ_u(u) on the real line ℝ.]
This semiarc can be mapped isometrically to an open interval in ℝ, and the same is true for the
ball around any other point.
Coherence property.
Before we generalize this definition, we observe that it implies a coherence property
of the maps θ_u. Suppose the balls around two points v and w in M overlap, and
u is in both balls. We can then find a sufficiently small ε>0 such that
B_ε(u,d_M) is completely contained in both balls. Since both maps θ_v and θ_w
are applicable to the points in this ball, the restrictions
θ_v:B_ε(u,d_M)→ B_ε(θ_v(u),d_n)
and θ_w:B_ε(u,d_M)→ B_ε(θ_w(u),d_n)
are both isometries. The points x=θ_v(u) and y=θ_w(u) are images under different
maps, and the balls around them are not required to overlap. Both are, however, Euclidean
balls of the same radius. If ψ=y-x is the (unique) shift of ℝ^n that maps x to y,
we hence have
B_ε(θ_v(u),d_n)=ψ B_ε(θ_w(u),d_n) .
Now observe that ψ x=y=θ_wθ_v^-1(x). There is, in summary, a shift ψ such that
ψ x=y and θ_wθ_v^-1(z) = ψ z for all z in the ball B_ε(x,d_n) .
The definition hence implies that the map θ_wθ_v^-1, often called a coordinate change
in geometry, behaves like a shift on a sufficiently small neighborhood.
When we drop the metric from the definition below, this property no longer arises
automatically, and we must make it an explicit requirement.
Abstract definition.
Let 𝔽
be a group of isometries of ℝ^n. The next definition
generalizes the one above in two ways: It does not use a metric, and instead of requiring
that coordinate changes look locally like shifts, it requires they look locally like elements of 𝔽.
A Hausdorff space M is an 𝔽-manifold if:
* There is a family {U_i}_{i∈ℐ} of open connected subsets of M that cover M,
i.e., each point of M is in at least one set U_i. The set ℐ is an arbitrary index set.
* For each i∈ℐ, there is a
homeomorphism θ_i:U_i→ V_i of U_i and an open set V_i⊂ℝ^n.
* If two sets U_i and U_j overlap, the maps θ_i and θ_j cohere as follows:
If x and y are points in ℝ^n that satisfy
θ_jθ_i^-1(x) = y ,
then there is a transformation ψ∈𝔽 such that
ψ x = y
and θ_j^-1θ_i(z) = ψ z
for all z in a neighborhood of x .
We recover the metric definition if we make M a metric space (which is always Hausdorff),
set ℐ=M, choose U_i as the ball around i=u (which is always connected),
and θ_i as the isometry θ_u (isometries are homeomorphisms).
§.§ Orbifolds
To capture the properties of the quotient ℝ^n/𝔾, the definition of a manifold
is in general too restrictive. That follows from the following result:
[<cit.> Theorem 7.8]
Let 𝔾 be a crystallographic group that tiles ℝ^n with a convex polytope.
For every point x∈ℝ^n, there exists an ε>0 such that the
open metric ball B_{d_𝔾}(𝔾(x),ε) in the quotient space ℝ^n/𝔾 and the
quotient B_{d_n}(x,ε)/Stab(x) of the corresponding open ball in
ℝ^n are isometric.
We note this is precisely the metric definition of a manifold above if Stab(x)={id}
for all points in ℝ^n/𝔾. It follows that, for a crystallographic group 𝔾,
ℝ^n/𝔾 is a manifold ⟺ no element of 𝔾 other than the identity has a fixed point.
Let 𝔽 be a group of isometries of ℝ^n. An 𝔽-orbifold is a
Hausdorff space M with the following properties:
* There is a family {U_i}_{i∈ℐ} of open connected subsets of M that cover M,
i.e., each point of M is in at least one set U_i.
* For each i∈ℐ, there is a discrete group F_i of isometries of ℝ^n
and a homeomorphism θ_i:U_i→ℝ^n/F_i of U_i and an open subset of the
quotient space ℝ^n/F_i.
* If two sets U_i and U_j overlap, the maps θ_i and θ_j cohere as follows:
If x and y are points in ℝ^n, and the corresponding points F_ix∈ℝ^n/F_i
and F_jy∈ℝ^n/F_j satisfy
θ_jθ_i^-1(F_i x) = F_jy ,
then there is a transformation ψ∈𝔽 such that
ψ x = y
and θ_j^-1θ_i(F_i z) = F_j(ψ z)
for all z in a neighborhood of x .
The family {θ_i}_{i∈ℐ} is called an atlas.
Clearly, an 𝔽-orbifold is an 𝔽-manifold if and only if
each F_i is the trivial group F_i=.
If 𝔾 is a crystallographic group that tiles ℝ^n with a convex polytope Π,
then ℝ^n/𝔾 is a 𝔾-orbifold. At each point i=𝔾(x), the group F_i
is the stabilizer Stab(x).
This lemma is folklore in geometry—see e.g., <cit.> for results that are phrased differently but amount to the same. We give a proof here only to match our
specific choices of definitions to each other.
Let Π̃ be a transversal. We choose ℐ=ℝ^n/𝔾, so each i∈ℐ is the orbit 𝔾(x)
of some point in ℝ^n, and hence of a unique point x∈Π̃.
By <ref>, there is hence a map θ_x with θ_x(𝔾(x))=x that isometrically
maps a ball B_{d_𝔾}(𝔾(x),ε) with suitable radius to
B_{d_n}(x,ε)/Stab(x). We hence set F_i=Stab(x), which is a finite subgroup
of the discrete group 𝔾, and hence discrete. What remains to be shown is the coherence property.
Suppose x and y are points in ℝ^n with trivial stabilizers. If
θ_yθ_x^-1(x)=y, then x and y are on the same orbit, so there is indeed a map ψ∈𝔾
with ψ x=y. The coherence property then follows by the same argument as for metric manifolds above.
If the stabilizers are non-trivial, the same holds if points are substituted by their orbits under stabilizers.
Consider again the triangle Π and rotation ϕ in <ref>.
Here, the stabilizer of the center of rotation x is Stab(x)={id,ϕ,ϕ^2}.
The metric ball around the point i=𝔾(x) on the orbifold (the tip of the cone) is a smaller cone:
[Figure: left, the cone ℝ^2/𝔾 with the metric ball around its tip i=𝔾(x) shaded; middle, its image under θ_i, a sector-shaped neighborhood of x in the triangle Π; right, the images of Π under Stab(x), forming an equilateral triangle around x with a full disc around x shaded. The middle and right neighborhoods are isomorphic in ℝ^n/Stab(x).]
Its image under θ_i can be identified with the intersection of Π with a Euclidean ball around x.
Since Π and its image Stab(x)Π under the stabilizer—the equilateral triangle on the right—are
indistinguishable in ℝ^n/Stab(x), that corresponds to the quotient of a metric ball in the plane.
§.§ Path metrics
An orbifold as defined above is a topological space.
To work with the gluing results stated below, we must know it is also a metric space, and that this space is complete.
<ref> shows that that is true. Before we state the fact, we briefly describe how to
construct the relevant metric, which is the standard metric on orbifolds. Our definition is again adapted from that
of <cit.>. <cit.> offers an accessible introduction to this type of metric.
Intuitively, the
metric generalizes the geodesic on a smooth surface, by measuring the length of the shortest curve between two points.
Formally, a curve connecting two points ω_1 and ω_2 in M is a continuous function
γ:[a,b]⊂ℝ→ X
such that γ(a)=ω_1 and γ(b)=ω_2 .
To define the length len(γ) of γ, first suppose ω_1 and ω_2 are in the
same set U_i, and define
len(γ) := sup{ ∑_{j≤ k} d_{F_i}(θ_i∘γ(t_{j-1}),θ_i∘γ(t_j)) | a=t_0<t_1<…<t_k=b for k∈ℕ } ,
that is, the supremum is taken over the sequences (t_0,…,t_k).
In words: For each t_j∈[a,b], the point γ(t_j) lies on the curve γ in M. By choosing
a sequence t_0,…,t_k as above, we approximate the curve by k line segments (γ(t_j-1),γ(t_j)),
and then approximate the length of γ by summing the lengths of these segments. Since each line segment lies
in M, and we have no tool to measure distance in M, we map each point γ(t_j) on the curve
to a point θ_i(γ(t_j)) in ℝ^n/F_i, where we know how to measure distance using d_F_i.
We then record the length of the piece-wise approximation as the sum of lengths of the segments.
The length len(γ) is the supremum over the lengths of all such approximations.
If there is no set U_i containing both points, one can always subdivide [a,b] into finitely many segments
[t_{j-1},t_j] such that every pair γ(t_{j-1}) and γ(t_j) of consecutive points is
in some set U_i (see <cit.>). One then defines
len(γ) := ∑_{i≤ k} len(γ|_{[t_{i-1},t_i]}) ,
and it can be shown that len(γ) does not depend on the choice of subdivision.
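As a simple numerical illustration of this definition (ours; it uses the plain Euclidean metric, i.e., the special case in which the local groups F_i are trivial), the Python sketch below approximates the length of the half circle γ(t)=(cos t, sin t), t∈[0,π], by summing segment lengths over finer and finer partitions; the polygonal lengths increase towards the supremum π.

    import numpy as np

    def curve(t):
        # half circle in R^2 for t in [0, pi]
        return np.array([np.cos(t), np.sin(t)])

    def polygonal_length(k):
        # length of the piecewise-linear approximation with k segments
        ts = np.linspace(0.0, np.pi, k + 1)
        pts = np.array([curve(t) for t in ts])
        return np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))

    for k in (2, 8, 32, 128):
        print(k, polygonal_length(k))   # increases towards pi ~ 3.14159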
[<cit.> Lemma 1 of 13.2, Theorems 13.2.7 and 13.3.8]
If M is an 𝔽-orbifold, any two points in M can be connected by a curve of finite length.
The function
d_M(ω_1,ω_2) := inf{ len(γ) | γ is a curve connecting ω_1 and ω_2 in M }
is a metric on the set M, and metrizes the Hausdorff topology of M.
The metric space so defined is complete.
§.§ Orbifolds constructed by abstract gluing
Let S_1,…,S_k be the facets of Π. A side pairing is a finite set
𝒮={ψ_1,…,ψ_k} of isometries of ℝ^n such that, for each
i≤ k, there is a j≤ k with
(i) ψ_i(S_j)=S_i
(ii) ψ_i = ψ_j^-1 (iii) Π∩ψ_iΠ = S_i .
The definition permits i=j. A crystallographic group is determined by a side pairing:
[<cit.> Theorem 7.11]
If a crystallographic group 𝔾 tiles with a convex polytope Π, the tiling is exact,
and 𝒮 is a side pairing for Π and 𝔾, then the group
generated by 𝒮 is 𝔾.
The side pairing defines an equivalence relation
≡ on points x,y∈Π, namely
x ≡ y
:⟺ ψ_i x=y for some i≤ k .
Let M be the quotient space M:=Π/≡, equipped with the quotient topology, that is,
A⊂ M open :⇔ { x∈Π | the equivalence class of x is in A } is an open set in Π
We then refer to M as the quotient obtained by abstract gluing from
Π and 𝒮.
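A standard example, added here for illustration: let Π=[0,1]^2 with facets S_left, S_right, S_bottom, S_top, and take 𝒮 to consist of the translations ψ(x)=x+e_1 and ψ'(x)=x+e_2 together with their inverses. Then ψ(S_left)=S_right and Π∩ψΠ=S_right, and similarly for ψ', so conditions (i)–(iii) hold. The relation ≡ glues opposite edges, the quotient M=Π/≡ is the flat torus, and the group generated by 𝒮 is the translation lattice ℤ^2, consistent with the theorem quoted above.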
We will be interested in a specific type of side pairing, called a subproper side pairing.
The precise definition is somewhat involved, and can be found in 13.4 of <cit.>.
We omit it here, since we will see that all side pairings relevant to our purposes are subproper.
[<cit.> Theorem 13.4.2]
Let 𝔽 be a group of isometries of ℝ^n and Π a convex polytope.
Let M be the metric space obtained by abstract gluing from Π and a subproper 𝔽-side pairing.
Then M is an 𝔽-orbifold. The natural inclusion
Π^∘↪ M, i.e., the map that takes each point x∈Π^∘ to its ≡-equivalence
class, is continuous.
For the next result, recall the definition of d_𝔾 from <ref>. We define a metric d_𝕊 for a group 𝕊
analogously, by substituting 𝕊 for 𝔾.
[<cit.> Theorem 13.5.3]
Let M be the orbifold in <ref>, and 𝕊 be the group generated by all maps in the
side pairing. If M is a complete metric space,
the natural inclusion map Π↪ M induces an isometry from M to
(ℝ^n/𝕊,d_𝕊).
The final result on orbifolds we need gives a precise statement of the idea that the set
of points around which an orbifold does not resemble a manifold
is small. The next definition characterizes those points around which the manifold
property breaks down as having order >1:
Consider a point z∈ M. Then we can find some x∈ℝ^n that corresponds to z: We know that
z∈ U_i for some i, and hence ϕ_iz= F_i x in the quotient space ℝ^n/F_i.
The order of z∈ M is the number of elements of F_i that leave x invariant (formally, the order
of the stabilizer of x in F_i). It can be shown that this number does not depend on the choice of i, so each
z∈ M has a uniquely defined order.
[<cit.> Theorem 13.2.4]
If M is an 𝔽-orbifold, the set of points of order 1 in M
is an open dense subset of M. The set of points of order >1 is nowhere dense.
§.§ Topological dimension
The notion of dimension we have used throughout is the algebraic dimension A of a set A in a vector space
(see <ref>). For the proof of the embedding theorem, we also need another notion of dimension that
does not require vector space structure, known
variously as topological dimension, covering dimension, or Lebesgue dimension. The definition is slightly more involved:
Consider a topological space X. An open cover of X is a collection 𝒜 of open sets in X that
cover X, that is, each point of X is in at least one of the sets. The order of an open cover is
order(𝒜) := sup{ number of elements of 𝒜 containing x | x∈ X } .
The topological dimension dim X of X is the smallest value m∈ℕ∪{∞}
such that, for every open covering ℬ of X, there is an open covering 𝒜
with order(𝒜)=m+1 such that every set of ℬ contains a set of 𝒜.
[<cit.>]
The topological dimension of Euclidean space equals its algebraic dimension, ℝ^n=ℝ^n=n,
and any closed metric balls B⊂ℝ^n has B=n.
In general, however, the topological dimension of a set A⊂ℝ^n may differ from its dimension
A as defined in <ref> (as the algebraic dimension of the linear hull), and even the proof that
ℝ^n=n is not entirely trivial. <cit.> provides a readable overview.
The reason why topological dimension is of interest in our context is the following classical result.
Recall that, given topological spaces X and Y, an embedding of X into Y is an injective
map X→ Y that is a homeomorphism of X and its image.
[<cit.>]
Every compact metrizable space X with dim X<∞ can be embedded into ℝ^{2 dim X+1}.
We also collect two additional facts for use in the proofs. Recall that a function is called closed if
the image of every closed set is closed.
[<cit.> and <cit.>]
(i) If X is a topological space and Y_1,…,Y_k are closed and finite-dimensional subspaces, then
X = Y_1∪…∪ Y_k implies dim X = max_i dim Y_i .
(ii)
Let f:X→ Y be a continuous, closed and surjective map between metric spaces.
If |f^-1y|≤ m+1 for some m∈ℕ∪{0} and all y∈ Y, then
dim X ≤ dim Y ≤ dim X + m .
§ BACKGROUND IV: SPECTRAL THEORY
The proof of the Fourier representation draws on the spectral theory of linear operators,
and we now review the relevant facts of this theory.
We are interested in an operator A (think -Δ) defined on a space
V (think _̋) which is contained in a space W (think Ł_2).
If V approximates Ł_2 sufficiently well, and if A is self-adjoint on V,
a general spectral result guarantees the existence of an orthonormal basis for Ł_2 consisting of eigenfunctions
(<ref>).
To apply the result to the negative Laplacian, we must extend Δ to an operator on _̋
(since Δ is defined on twice differentiable functions, and elements of _̋ need not
be that smooth). <ref> shows that is possible.
Once we have obtained the eigenfunctions, there is a generic way to show they are smooth
(<ref>).
§.§ Spectra of self-adjoint operators
Spectral decompositions of self-adjoint operators have been studied widely,
see <cit.> for sample results.
We use the following formulation, adapted from Theorem 2.37 and Corollary 2.38 of <cit.>.
[Spectral decomposition <cit.>]
Let be a polytope, and V a closed subspace of ^̋1(). Require that the inclusion maps
V ↪ Ł_2() ↪ V^*
are both continuous and dense,
and the first inclusion is also compact.
Let A:V→ V^* be a bounded linear operator that is self-adjoint on V and satisfies
A f,f_V ≥ c_Vf^2_V-c_Lf^2_Ł_2 for some c_V,c_L>0 and all f∈ V .
Then there is a countable number of scalars
λ_1≤λ_2≤… with λ_i ∞
and functions ξ_1,ξ_2,…∈ V such that
Aξ_i = λξ_i for all i∈ℕ .
The functions ξ_i form an orthonormal basis for V. For each v∈ V,
_i≤ mλ_iv,ξ_iξ_i A v
holds in the dual V^*. If A is also strictly positive definite,
then λ_1>0.
If the inclusions in (<ref>) are continuous and dense,
Ł_2 is called a pivot space for V.
See Remark 3 in Chapter 5 of <cit.> for a discussion of pivot spaces.
§.§ Extension of Laplacians to Sobolev spaces
Recall that the Laplace operator Δ on a domain Γ is defined on twice continuously
differentiable functions. It
can be extended to a continuous linear operator on ^̋1(Γ^∘), provided
the geometry of Γ is sufficiently regular. That is the case if Γ
is a Lipschitz domain, which loosely speaking means
it is bounded by a finite number of Lipschitz-smooth surfaces. Since a precise definition (which can be found in
<cit.>) is rather technical, we omit details and only note that every polytope is a Lipschitz domain
<cit.>.
Let Γ be a Lipschitz domain, and denote by ^̋1(Γ^∘)^* the dual space of
^̋1(Γ^∘). There is a unique linear operator
Λ:^̋1(Γ^∘)→^̋1(Γ^∘)^* that extends the Laplace
operator. This operator is bounded on ^̋1(Γ^∘).
§.§ Smoothness of eigenfunctions
One hallmark of differential operators is that their eigenfunctions tend to be
very smooth. The sines and cosines that make up the standard Fourier basis on
the line are examples. Intuitively, that is due to the fact that
the Laplacian is a second-order differential operator, and “removes two orders of
smoothness”: If Δ f is in ^k, then f must be in
^k+2. Since an eigenfunction appears on both sides of the spectral equation
-Δξ = λξ ,
one can iterate the argument: If ξ is in , it must also be in ^2, hence also in ^4, and so forth.
This argument is not immediately applicable to the functions ξ constructed in
<ref> above, since it does not guarantee the functions to be in ^2. It
only shows they are in V, which in the context of differential operators
(and specifically in the problems we study) is typically a Sobolev space.
Under suitable conditions on the domain, however, one can show that argument above
generalizes to Sobolev space, at least on certain open subsets. The following version
is again adapted to our problem from a more general result.[<cit.>]
Let Π be a polytope and M an open set such that M⊂Π^∘. Let
Λ be the extension of the Laplace operator guaranteed by <ref>.
Suppose f∈^̋1(M) and k∈ℕ∪0.
Then Λ f=g on M for g∈^̋k(M) implies f∈^̋k+2(M).
§ PROOFS I: THE FLUX PROPERTY
This and the following two sections comprise the proof of <ref>, the Fourier representation.
In this section, we prove the flux property of <ref>.
§.§ Tools: Subfacets
We next introduce a simple geometric tool to deal with non-exact tilings:
<ref> assumes the tiling is exact, but the flux property and the Fourier
representation make no such assumption. Although they do not use a gluing construction explicitly, they use the periodic
boundary condition (<ref>), which matches up points on the boundary
∂Π as gluing does. Absent exactness, that requires dealing
with parts of facets. We call each set of the form
σ := (Π∩ϕΠ)^∘ for some ϕ∈𝔾∖{id} with σ≠∅
a subfacet of Π. Let Σ be the (finite) set of subfacets of Π.
Whereas the division of ∂Π into facets is a property
of the polytope that does not depend on 𝔾, the subfacets are a property of the tiling.
Consider an edge of a rectangle Π⊂ℝ^2.
Suppose ϕ is a 180^∘ rotation around the center x of the edge, as shown on the left:
[Figure: left, the edge with midpoint x and arrows indicating the 180^∘ rotation ϕ about x; right, the same edge with the whole facet bracketed above and the two subfacets, separated by x, bracketed below.]
Then ϕ maps the facet to itself, and maps the point x to itself, but no other point is fixed. In this
case, x divides the interior of the facet into two subfacets (right). If ϕ is instead a reflection
about the same edge, each point on the edge is a fixed point, and the entire interior of the edge is a
single subfacet. Another example of a subfacet is the edge segment marked in <ref>/left.
The subfacets are convex (n-1)-dimensional open subsets of ∂Π, and their closures cover ∂Π.
In particular,
λ_{n-1}(σ)>0 for all σ∈Σ and ∑_{σ∈Σ} λ_{n-1}(σ) = λ_{n-1}(∂Π) ,
where λ_{n-1} denotes (n-1)-dimensional volume. Each subfacet is mapped by 𝔾 to exactly
one subfacet, possibly itself: For each σ∈Σ,
ϕ_σ(σ) ∈ Σ for one and only one ϕ_σ∈𝔾∖{id} ,
where ϕ_σ(σ)=σ if and only if σ contains a fixed point of ϕ_σ.
Each subfacet is by definition of the form σ=(Π∩ψΠ)^∘, for some ψ∈.
Since (Π) is a tiling, Π and ψΠ are the only tiles intersecting σ.
We hence have
σ∩ϕ_σ^-1Π ≠ ∅ for one and only one ϕ_σ∈∖ ,
namely for ϕ_σ=ψ^-1.
Since the set Π∩ϕ_σ^-1Π is the intersection of two facets, and hence of two convex sets, it is convex.
By the definition of subfacets, its relative interior σ is non-empty. That makes σ a
(n-1)-dimensional, convex, open subset of ∂Π.
Volumes.
Since σ is open in n-1 dimensions,
_n-1(σ)>0 .
The definition of a tiling implies
each boundary point x∈∂Π is on the facet of some adjacent tile ϕΠ. It follows that
∂Π = _ϕ∈∖Π∩ϕΠ = _σ∈Σσ .
Since distinct subfacets do not intersect, applying volumes on both sides shows
_n-1(∂Π) = _σ∈Σ_n-1(σ) .
Each subfacet maps to exactly one subfacet.
We have already noted that σ intersects only the tiles Π and ϕ_σ^-1Π.
Since ϕ_σ^-1Π is adjacent to Π, so is ϕ_σΠ.
That implies ϕ(σ)=(Π∩ϕ_σΠ)^∘, and hence
ϕ_σ(σ)∈Σ and σ∩ϕ^-1Π = ∅ if ϕ≠,ϕ_σ .
Thus, σ maps to ϕ_σ and vice versa, and neither maps to any other subfacet.
Fixed points.
We know that σ and ϕ_σ(σ) are either identical or disjoint.
Suppose first that σ≠ϕ_σ(σ). Then
ϕ_σ^-1Π ≠ ϕ_σΠ and hence ϕ_σ(σ)∩σ=∅ ,
so ϕ_σ has no fixed points in σ.
On the other hand, suppose ϕ_σ(σ)=σ.
Then the restriction of ϕ_σ to the closure σ̅ is a continuous map
σ→σ from a compact convex set to itself.
That implies, by Brouwer's theorem <cit.>,
that the closure σ contains at least one fixed point,
and we only have to ensure that at least one of these fixed points is in the interior σ. But if
the boundary ∂σ contains fixed points and σ does not, then ϕ_σ(σ)≠σ since
ϕ_σ is an isometry, which contradicts the assumption. In summary, we have shown that
ϕ_σ(σ)=σ if and only if σ contains a fixed point.
§.§ Proof of the flux property
To establish the flux property in <ref>, we first show how
the normal vector ν_Π of the boundary of a tile Π transforms
under elements of the group 𝔾.
If a crystallographic group 𝔾 tiles ℝ^n with a convex polytope Π, then
A_ϕ ν_Π(x) = -ν_Π(ϕ x) whenever x,ϕ x∈Π .
If ϕΠ is a tile adjacent to Π, its normal vector ν_{ϕΠ} satisfies
ν_Π(y) = -ν_{ϕΠ}(y) if y∈Π∩ϕΠ .
Since x∼ϕ x holds, x is on at least one facet S of Π, and ϕ x is hence
on the facet ϕ S of ϕΠ. If ν_Π(x) is a normal vector of S (exterior to Π),
then A_ϕ ν_Π(x) is a normal vector of ϕ S (exterior to ϕΠ). That shows
ν_{ϕΠ}(ϕ x) = A_ϕ ν_Π(x) if x,ϕ x∈Π .
In summary, we hence have A_ϕ ν_Π(x) = -ν_Π(ϕ x) whenever x and ϕ x are both in Π.
Let σ be a subfacet. Since ν_Π is constant on σ, we define the vectors
ν_Π(σ) := ν_Π(x) for any x∈σ and
I(σ) := ∫_σ F(x) λ_{n-1}(dx) .
By <ref>, the subfacets cover ∂Π up to a null set.
We hence have
∫_{∂Π} F(x)^⊤ ν_Π(x) λ_{n-1}(dx)
= ∑_{σ∈Σ} ∫_σ F(x)^⊤ ν_Π(x) λ_{n-1}(dx)
= ∑_{σ∈Σ} ν_Π(σ)^⊤ I(σ) .
We must show this sum vanishes.
If ϕσ∈Σ, the ϕ-invariance of λ_{n-1} and condition (<ref>) imply
I(ϕσ)
= ∫_{ϕσ} F(x) λ_{n-1}(dx)
= ∫_σ F(ϕ(x)) λ_{n-1}(dx)
= A_ϕ ∫_σ F(x) λ_{n-1}(dx)
= A_ϕ I(σ) .
<ref> implies A_ϕ ν_Π(σ) = -ν_Π(ϕσ) for ϕσ⊂Π∩ϕΠ.
That shows
ν_Π(ϕσ)^⊤ I(ϕσ)
= -(A_ϕ ν_Π(σ))^⊤ A_ϕ I(σ)
= -ν_Π(σ)^⊤ A_ϕ^⊤ A_ϕ I(σ)
= -ν_Π(σ)^⊤ I(σ) ,
since A_ϕ^⊤=A_ϕ^-1. It follows that
ν_Π(σ)^⊤ I(σ) + ν_Π(ϕσ)^⊤ I(ϕσ) = 0
and even ν_Π(σ)^⊤ I(σ) = 0 if σ=ϕσ .
By <ref>, the set Σ of subfacets can be sorted into pairs (σ,ϕ_σσ) such that no
subfacet occurs in more than one pair (though σ=ϕ_σσ is possible).
It follows that
∑_{σ∈Σ} ν_Π(σ)^⊤ I(σ)
= (1/2)∑_{σ∈Σ}( ν_Π(σ)^⊤ I(σ) + ν_Π(ϕ_σσ)^⊤ I(ϕ_σσ) )
= 0
as we set out to show.
§ PROOFS II: THE LAPLACIAN AND ITS PROPERTIES
The purpose of this section is to prove <ref>.
We use the flux property
to show that the symmetries imposed by a crystallographic group simplify
the Green identity considerably. We can then use this symmetric form of the Green identity
to show Λ has the desired properties.
§.§ Green's identity under crystallographic symmetry
That the extended Laplace operator is self-adjoint on _̋ for any crystallographic group
derives from the fact that the symmetry imposed by the group makes the correction
term in Green's identity vanish. That enters in the proof of <ref>
via the two identities in the following lemma. The first one is Green's identity under symmetry;
the second shows that a similar identity holds for the Sobolev inner product.
If a crystallographic group 𝔾 tiles ℝ^n with a convex polytope Π, the
negative Laplace operator satisfies the identities
⟨-Δ f,h⟩_{Ł_2} = a(f,h)
⟨-Δ f,h⟩_{^̋1} = a(f,h) + ∑_{i≤ n} a(∂_i f,∂_i h)
for all functions f and h in ℋ.
Let f̅ and h̅ be the unique continuous extensions of f and h to the closure Π,
and set F:=f̅∇h̅. Since f̅ and h̅ satisfy the periodic boundary condition,
(<ref>) shows
F(ϕ x)
= h̅(ϕ x)∇f̅(ϕ x)
= h̅(x)A_ϕ∇f̅(x)
=
A_ϕ F(x) .
By the flux property (<ref>), we hence have
_∂Π(∂__Πf̅)h̅ = _∂Π_Π^F
=
0 ,
and substituting into Green's identity (<ref>) shows
Δ f,h_Ł_2 =
a(f,h)
-
_∂Π(∂__Πf)h
=
a(f,h) ,
so (<ref>) holds. Now consider (<ref>).
Since f has three continuous derivatives, we have
∂_i^2∂_j f
= ∂_j
∂_i^2 f
and hence Δ(∂_jf)
= ∂_j(Δ f) .
The ^̋1-product can then be written as
Δ f,h_^̋1 = Δ f,h_Ł_2+
_i∂_i(Δ f),∂_ih_Ł_2 = Δ f,h_Ł_2+
_iΔ(∂_if),∂_ih_Ł_2 .
Substituting the final sum into Green's identity shows
_iΔ(∂_if),∂_ih_Ł_2 = _ia(∂_if,∂_ih)
+ ∫_∂Π_i(∂__Π∂_i f)∂_ih
Since ∇(∂_if) is precisely the ith row vector of the Hessian Hf, the integrand is
_i(∂__Π∂_i f)∂_ih
= _i(_Π^(∇∂_i f)∂_ih
= _Π^(Hf)∇ h .
Consider the vector field F(x):=(Hf)∇ h. By <ref>, F transforms as
F(ϕ x)
=
Hf(ϕ x)∇ h(ϕ x)
=
A_ϕ· Hf(x)· A_ϕ^A_ϕ∇ h(x)
=
A_ϕ· F(x) ,
and hence satisfies (<ref>).
Another application of the flux property then shows
_iΔ(∂_if),∂_ih_Ł_2 = _ia(∂_if,∂_ih) .
Substituting this identity and (<ref>) into the ^̋1-product above yields (<ref>).
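As a one-dimensional sanity check of the mechanism used above (our illustration, not from the source): take Π=[0,2π] and the group generated by the translation x↦ x+2π. If f and h are restrictions of smooth 2π-periodic functions, then f̅(0)=f̅(2π), f̅'(0)=f̅'(2π), and likewise for h, so integration by parts gives
⟨-f'',h⟩_{Ł_2} = ∫_0^{2π} f'h' dx - [f'h]_0^{2π} = a(f,h) ,
since the boundary term cancels by periodicity — exactly the cancellation that the flux property supplies in general dimension.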
§.§ Approximation properties of the space _̋
That we can use the space _̋ to prove results about continuous and Ł_2-functions
relies on the fact that such functions are sufficiently well approximated by elements of _̋, and that _̋ can in turn be approximated by useful dense subsets.
We collect these technical facts in the following lemma.
Consider the space of functions
_ pbc(Π^∘) = f|_Π^∘ | f∈_
which we equip with the supremum norm. These are precisely those
uniformly continuous functions on the interior Π^∘ whose unique
continuous extension to Π satisfies the periodic boundary conditions.
Note that we can then express the definition of ℋ in (<ref>)
as
ℋ = _ pbc(Π^∘)∩^∞(Π^∘) .
If 𝔾 is crystallographic and tiles with Π, the inclusions
ℋ ↪ _̋ ↪ Ł_2(Π^∘) ↪ _̋^* (denoted ι_1, ι_2, ι_3, respectively)
are all dense, ι_2 and ι_3 are continuous, and ι_2 is compact. Moreover,
if ℱ⊂_̋∩_ pbc(Π^∘) is dense in _̋,
it is also dense in _ pbc(Π^∘) in the supremum norm.
When we take closures in the proof, we write
ℱ^ sup and ℱ^ ^̋1 to indicate the norm
used to take the closure of a set ℱ.
That ℋ is dense in _̋ holds by definition, see (<ref>).
Inclusions ι_2 and ι_3 are dense and continuous.
Denote by _c^∞:=_c()∩^∞() the set of compactly supported and infinitely
differentiable functions on . Denote by
_̋0^1 := _c^∞^^̋1
its ^̋1-closure. This is, loosely speaking, the Sobolev space of functions that vanish on the boundary
<cit.>, and it is a standard result that
_̋0^1
⊂ Ł_2(Π^∘)
⊂
(_̋0^1)^* ,
where both inclusion maps are dense and bounded <cit.>.
Consider any f∈_c^∞. Since f is uniformly continuous, it has a unique continuous
extension f̅ to Π. This extension satisfies f̅=0 on the boundary ∂Π.
(This fact is well known <cit.>, but also easy to verify: Since the support of f
is a closed subset of the open set Π^∘, each point x on the boundary is the center
of some open ball B that does not intersect the support, so f̅=0 on ∩ B.)
It therefore trivially satisfies the
periodic boundary condition (<ref>), which shows _c^∞⊂ℋ.
Taking _̋1-closures shows ^̋1_0⊂_̋. We hence have
_̋0^1(Π^∘)
⊂ _̋ ⊂ ^̋1(Π^∘)
⊂ Ł_2(Π^∘)
⊂ ^̋1(Π^∘)^*
⊂ _̋^*
⊂ _̋0^1(Π^∘)^* .
Since _̋0^1↪Ł_2 and Ł_2↪(_̋0^1)^* are both
dense and bounded, _̋↪Ł_2 and
Ł_2↪_̋^* are dense and bounded (and hence continuous), and _̋↪^̋1
is bounded (and hence continuous).
Inclusion ι_2 is compact. We can decompose ι_2 as
_̋ ↪ ^̋1
↪ Ł_2 .
It is known that ^̋1↪Ł_2 is compact <cit.>. If one of two inclusions is compact, their composition is compact (see <cit.>, or simply note that
any bounded sequence in _̋ is also bounded in ^̋1). That shows _̋↪Ł_2 is compact.
ℱ is dense in _ pbc.
We know from <ref> that ^̋1(Π^∘)⊂(Π^∘),
and hence h_^̋1≥h_sup for all h∈(Π^∘).
In other words, the sup-closure of the ^̋1-closure is the sup-closure, so
ℱ^ sup = (ℱ^^̋1)^ sup = _̋^ sup = (ℋ^^̋1)^ sup = ℋ^ sup .
It hence suffices to show ℋ is dense in _ pbc.
To this end, we use a standard fact: If we consider the closed set Π instead of the interior,
^∞(Π) is dense in (Π), since Π is compact. (One way to see this is that ^∞ contains all
polynomials, which are dense in (Π) by the Stone-Weierstrass theorem <cit.>.)
Since _ pbc(Π) is a closed linear subspace of (Π), it follows that
_ pbc(Π)∩^∞(Π)
is dense in _ pbc(Π)∩(Π) = _ pbc(Π) .
Consider a function f∈_ pbc(Π^∘). Then f has a unique continuous extension
f̅ to Π, which satisfies the periodic boundary condition. That shows that
f↦f̅ is an isometric isomorphism _ pbc(Π^∘)→_ pbc(Π) ,
since the extension is unique and does not change the supremum norm. If f is also infinitely differentiable
(and hence in ℋ), then f̅ is infinitely differentiable, so the same map is also an isometric
isomorphism
ℋ = _ pbc(Π^∘)∩^∞(Π^∘)
→ _ pbc(Π)∩^∞(Π) .
In summary, we hence have
ℋ _ pbc(Π)∩^∞(Π)
dense _ pbc(Π)
_ pbc(Π^∘) ,
and since isomorphisms preserve dense subsets, ℋ is indeed dense in _ pbc(Π^∘).
§.§ Existence and properties of the Laplacian
Since Π is a convex polytope, it is a Lipschitz domain, and Δ hence extends to
a bounded linear operator Λ on ^̋1(Π^∘), by <ref>.
The restriction of Λ to the closed linear subspace of _̋
is again a bounded linear operator that extends Δ.
It remains to verify self-adjointness and (<ref>) on _̋.
Since Λ is bounded and hence continuous,
it suffices to do so on the dense subset ℋ.
For (<ref>i), that has already been established in <ref>.
To show (<ref>ii), we note (<ref>) implies
‖f‖^2_{^̋1} = ⟨f,f⟩_{^̋1} = ⟨f,f⟩_{Ł_2} + a(f,f) for f∈ℋ
and hence
a(f,f) = ‖f‖^2_{^̋1} - ‖f‖^2_{Ł_2} .
Since f∈ℋ and hence Λ f=Δ f, we can substitute into (<ref>), which shows
⟨-Δ f,f⟩_{^̋1} = a(f,f) + ∑_{i≤ n} a(∂_if,∂_if) ≥ ‖f‖^2_{^̋1} - ‖f‖^2_{Ł_2} ,
where the last step uses the fact that a is positive semi-definite by (<ref>).
That proves coercivity. Since the bilinear form a is symmetric, (<ref>) also shows
⟨-Δ f,h⟩_{^̋1} = a(f,h) + ∑_i a(∂_i f,∂_i h) = a(h,f) + ∑_i a(∂_i h,∂_i f) = ⟨-Δ h,f⟩_{^̋1}
on ℋ, so Λ is self-adjoint on _̋.
§ PROOFS III: FOURIER REPRESENTATION
We now prove the Fourier representation. We first restrict all function to a single
tile Π. By <ref>,
we can then choose the space V in the spectral theorem (<ref>) as _̋.
Since we also know the Laplacian is self-adjoint on _̋, we can use the
spectral theorem to obtain an eigenbasis. We then deduce <ref> by extending the representation
from Π to the entire space ℝ^n.
§.§ Proof of the Fourier representation on a single tile
The eigenvalue problem (<ref>) in <ref> is defined on the
unbounded domain ℝ^n. We first restrict the problem to the compact domain Π, that is, we consider
-Δ h = λ h on Π^∘
subject to h(x) = h(y) whenever x∼ y on ∂Π .
That allows us to apply <ref> and <ref> above, which hold on compact domains.
(The deeper relevance of compact domains is that function spaces on such domains
tend to have better approximation properties than on unbounded domains.)
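A classical special case may help fix ideas (our illustration): for the translation group 𝔾 generated by x↦ x+2π e_i on ℝ^n, one can take Π=[0,2π]^n, and the condition x∼ y identifies opposite facets. The solutions of (<ref>) are then the classical Fourier modes: products of sines and cosines, i.e., the real and imaginary parts of x↦ e^{i⟨k,x⟩} with k∈ℤ^n and λ=‖k‖^2, with λ_1=0 corresponding to the constant function.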
The restricted version of <ref> we prove first is as follows.
Let be a crystallographic group that tiles ℝ^n with
a convex polytope Π. Then (<ref>)
has solutions for countably many distinct values of λ, which satisfy
0 = λ_1 < λ_2 < λ_3<… and λ_i ∞ .
Each solution h is infinitely often differentiable on Π^∘.
There exists a sequence of solutions h_1,h_2,… that is an orthonormal
basis of Ł_2(Π), and satisfies
|{ j∈ℕ | h_j solves (<ref>) for λ_i }| = k(λ_i) .
In the proof, we again use the notation M^Ł_2 and M^^̋1 to indicate
the norm used to take the closure of a set M.
We apply the spectral decomposition result (<ref>), with A=Λ and V=_̋.
We have already established its conditions are satisfied (except for
the optional assumption of strict positive definiteness):
By <ref>, Λ exists, is a bounded and
self-adjoint linear operator on _̋, and satisfies (<ref>). By <ref>,
_̋ approximates Ł_2() in the sense of (<ref>).
<ref> hence shows that there is an orthonormal basis of eigenfunctions for _̋,
i.e., functions ξ_1,ξ_2,… that satisfy
(i) Λξ_i = λ_iξ_i
(ii) ⟨ξ_i,ξ_j⟩_{^̋1} = δ_ij (iii) the ^̋1-closure of span{ξ_1,ξ_2,…} equals _̋ .
What remains to be shown are the properties of the eigenvalues and eigenfunctions, and
that the ONB of _̋ can be translated into an ONB of Ł_2.
Non-negativity of eigenvalues.
The operator Λ is positive semi-definite, but not strictly positive definite, on V.
To show this, it again suffices to consider -Δ on ℋ.
By (<ref>), we have
Λ f,f_^̋1 =
a(f,f)
+ _i
a(∂_i f,∂_i f)
= ∇ f(x)^2_ℝ^n(dx)
+ _i
∇∂_i f(x)^2_ℝ^n(dx)
≥ 0 .
That shows Λ is positive semi-definite. Now consider, for any ε>0, the operator
Λ_ε:
_̋→_̋ defined by Λ_ε f := Λ f+ε f .
This is operator is still bounded, coercive and self-adjoint, so
<ref> is applicable. Clearly, Λ has the same eigenfunctions
as Λ, with eigenvalues λ_i+ε. It is also strictly
positive definite, since
Λ_ε f,f_^̋1 = Λ f,f_^̋1
+
ε f,f_^̋1 ≥ εf_^̋1 .
It hence follows from <ref> that the smallest eigenvalue satisfies λ_1+ε>0.
Since that holds for every ε>0, we have λ_1≥ 0.
The smallest eigenvalue and its eigenspace.
If a function f is constant on , then
f∈_̋ and Λ f = -Δ f = 0 .
That shows the smallest eigenvalue is λ_1=0, and its eigenspace
ℋ(0) contains all constant functions. To show that it contains no other functions,
note that
Λ f,f = 0
and by (<ref>) hence ∇ f(x)^2=0 for almost all x∈ .
That implies f is piece-wise constant. Since the only piece-wise constant function
contained in ^̋1 are those that are strictly constant (see <cit.>), ℋ(0) is the set of
constant functions, and ℋ(0)=1.
Regularity of eigenfunctions.
We now use the strategy outlined in <ref>.
Let ξ be an eigenfunction. We have shown that implies
ξ∈_̋, and hence ξ∈^̋1(). Consider any x∈. Since the interior
is open, we can find ε>0 such that the open ball B=B_ε(x) of radius ε
centered at x satisfies B⊂.
The restriction ξ|_B of ξ to B then satisfies
ξ|_B∈^̋1(B) and Λ ξ|_B = λ ξ|_B .
Since ξ|_B appears on both sides of the equation,
<ref> implies that ξ|_B is also in ^̋1+2(B), hence also in ^̋1+4(B), and so forth, so ξ|_B∈^̋k(B) for all k∈ℕ.
<ref> then shows that ξ|_B is even in ^k(B) for each k∈ℕ,
and hence in ^∞(B). We have thus shown that ξ has infinitely many derivatives on
a neighborhood of each x∈, and hence that
ξ∈^∞().
Turning the Sobolev basis into an Ł_2 basis.
The functions ξ_i form an orthonormal basis of _̋, by (<ref>). To obtain an orthonormal basis
for Ł_2(Π^∘), we substitute (<ref>) into (<ref>ii), and obtain
δ_ij = ⟨ξ_i,ξ_j⟩_{^̋1} = ⟨ξ_i,ξ_j⟩_{Ł_2} + a(ξ_i,ξ_j)
= ⟨ξ_i,ξ_j⟩_{Ł_2} + ⟨Λξ_i,ξ_j⟩_{Ł_2} .
Since ξ_i is an eigenfunction, it follows that
δ_ij = ⟨ξ_i,ξ_j⟩_{Ł_2} + λ_i⟨ξ_i,ξ_j⟩_{Ł_2} and hence ⟨ξ_i,ξ_j⟩_{Ł_2} = δ_ij/(1+λ_i) .
The functions h_i:=√(1+λ_i) ξ_i then satisfy
-Δ h_i = λ_i h_i on Π^∘ and ⟨h_i,h_j⟩_{Ł_2} = δ_ij .
Since we have merely scaled the functions ξ_i, we also have
span{h_1,h_2,…} = span{ξ_1,ξ_2,…} .
That implies
Ł_2-closure of span{h_1,h_2,…} = Ł_2-closure of the ^̋1-closure of span{h_1,h_2,…}
= Ł_2-closure of _̋ ,
and since the inclusion _̋↪Ł_2(Π^∘) is dense by <ref>, we have
cl_{Ł_2} span{h_1,h_2,…} = Ł_2(Π^∘) .
In summary, we have shown that h_1,h_2,… is an orthonormal basis of Ł_2() consisting
of eigenfunctions of -Δ.
Extending the basis on Π^∘ to a basis on Π.
Each h_i is in ^∞(),
and hence has a unique continuous extension h̅_i to .
Since _n(∂Π)=0, we can isometrically identify
Ł_2(Π^∘) with Ł_2(Π): Under this identification, each function h_i on
the interior Π^∘ is equivalent to any measurable extension of h_i to Π,
so
spanh_1,h_2,…^Ł_2 = Ł_2(Π) .
The extended functions also satisfy
-Δh̅_i = λ_i h̅_i on Π and h̅_i,h̅_j_Ł_2(Π) = δ_ij ,
where the first identity extends from Π^∘ to Π by ^∞-continuity, and the
second holds since the boundary does not affect the integral. The functions h̅_i are hence eigenfunctions
of -Δ on Π, and form and orthonormal basis of Ł_2(Π).
§.§ Proof of the Fourier representation on ℝ^n
To deduce the theorem from <ref>, we must (1) extend the basis constructed on Π above to a basis
on ℝ^n, and (2) show that every continuous invariant function can be represented in this basis.
Consider the function h̅_i in the proof of <ref>. Recall each h̅_i
is infinitely smooth on Π and satisfies
the periodic boundary condition. It follows by (<ref>) that
e_i := h̅_i∘ p
is in _. Let Δ^k denote the k-fold application of Δ.
By <ref>, the fact that h̅_i satisfies the
periodic boundary condition (<ref>) implies that the continuous extension Δ h_i
also satisfies (<ref>). Iterating the argument shows that the same holds for the continuous
extension of Δ^k h_i for any k∈ℕ. We hence have
Δ^k e_i = Δ^k(h̅_i∘ p) = (Δ^k h_i)∘ p
and
(Δ^k h_i)∘ p∈_ for all k∈ℕ ,
so e_i has infinitely many continuous derivatives on ℝ^n. Since it is also -invariant,
it solves the constrained eigenvalue problem (<ref>) on ℝ^n.
That extends <ref> to ℝ^n.
It remains to be shown that a function f on ℝ^n is in _ if and only if
f=∑ c_i e_i for some sequence (c_i), where the series converges in the supremum norm.
Combining <ref> and (<ref>) shows that
h↦h̅∘ p
is an isometry _ pbc(Π^∘)→_ .
For any f:ℝ^n→ℝ, we hence have
f= c_i e_i
⟺
f|_Π^∘= c_i e_i|_Π^∘ .
In other words, we have to show that
h∈_ pbc(Π^∘)
⇔
h= c_ie_i|_Π^∘ and hence that _ pbc(Π^∘)=spane_i|_Π^∘ | i∈ℕ^ sup .
Since the proof of <ref> shows e_i|_Π^∘ | i∈ℕ is a rescaled orthonormal basis of _̋, and hence a subset of _̋∩_ pbc(Π^∘) that is dense in _̋, that holds by <ref>.
§ PROOFS IV: EMBEDDINGS
To prove <ref>, we first establish two auxiliary results on topological dimensions
of quotient spaces.
Recall from <ref> that ℝ^n/ is locally isometric to quotients of metric
balls. The first lemma considers the effect of taking a quotient on the dimension of a ball. The second lemma
combines this result with <ref> to bound the dimension of ℝ^n/.
Let B be an open metric ball in ℝ^n, and G a finite group of isometries of ℝ^n.
Then the quotient B/G has topological dimension
n ≤ dim(B/G) < n+|G| .
The quotient map q:B→ B/G is, by definition, continuous and
surjective. Recall that preimages of points under q are orbits: If ω∈ℝ^n/G
is the orbit G(x) of some x∈ℝ^n, then q^-1ω=G(x).
We show q is also closed: Let A⊂ B be a subset.
First observe that
qA closed ⇔
B/G∖ qA open ⇔
q^-1(B/G∖ qA) open,
by continuity of q.
This set can be expressed as
q^-1(B/G∖ qA) = B∖ q^-1qA = B∖(⋃_{ϕ∈ G}ϕ A) = ⋂_{ϕ∈ G}ϕ(B∖ A) ,
and is therefore open whenever A is closed, since each ϕ is an isometry and G is finite.
Consider any element ω∈ B/G. Then there is some x∈ B with ω=q(x), and
q^-1ω = { ϕ x | ϕ∈ G and ϕ x∈ B }, which shows that
|q^-1ω| ≤ |G| .
<ref>(ii) is now applicable, and shows
dim B ≤ dim(B/G) < dim B+|G| ,
and by <ref>(i), dim B=n.
Let 𝔾 be a crystallographic group that tiles ℝ^n with a convex polytope Π.
Then ℝ^n/𝔾 is a 𝔾-orbifold, of topological dimension
n ≤ dim(ℝ^n/𝔾) < n + max_{x∈Π}|Stab(x)| .
Choose the index set in the orbifold definition as ℐ=Π. By
<ref>, we may then choose U_x=B_{d_𝔾}(q(x),ε),
the group H_x as Stab(x), and the map
θ_x:B_{d_𝔾}(q(x),ε)→ B_{d_n}(x,ε)/Stab(x)
as the isometry guaranteed by <ref>. That makes ℝ^n/𝔾
an orbifold. Isometry of the open balls also implies for the corresponding
closed balls of radius δ=ε/2 that
B_{d_𝔾}(q(x),δ) is homeomorphic to B_{d_n}(x,δ)/Stab(x)
for each x∈Π .
Since homeomorphic spaces have the same topological dimension, <ref>
shows
dim B_{d_𝔾}(q(x),δ) = dim( B_{d_n}(x,δ)/Stab(x) ) < n+|Stab(x)| .
Since 𝔾 is crystallographic, the quotient space is compact, and
we can hence cover it with a finite number of the closed balls above.
Applying <ref>(i) then shows the result.
Let 𝒮 be the side pairing defined by 𝔾 for Π.
Since 𝔾 is by definition a discrete group of isometries, 𝒮 is subproper
(see <cit.>, 13.4, problem 2).
The gluing construction hence constructs a set M that is a -orbifold, according to
<ref>. By definition of M as a quotient, the gluing construction also defines a quotient map
q_M:Π→ M ,
which is continuous and surjective.
By <ref>, the quotient topology is metrized by the path metric d_M. By <ref>,
the metric space (M,d_M) is complete. It hence follows by <ref> that there exists
an isometry
γ_M:(M,d_M)→(ℝ^n/𝕊,d_𝕊) ,
where 𝕊 is the group generated by 𝒮. In our case,
𝒮 is the side pairing defined by 𝔾 for Π, and by <ref>, the generated group is 𝕊=𝔾.
That shows γ_M is in fact an isometry
γ_M:(M,d_M)→(ℝ^n/𝔾,d_𝔾) .
Since isometric spaces have the same topological dimension, <ref>
shows
M < n+max_x∈Π|(x)| .
By <ref>(ii) there is an embedding e:M→ℝ^N with N≤ 2(n+max_{x∈Π}|Stab(x)|)-1.
Since 𝔾 is crystallographic, and ℝ^n/𝔾 hence compact, M and Ω:=e(M) are compact.
Using the restriction q:Π→ℝ^n/𝔾 of the quotient map to Π, we
can define
ρ_Π:Π→Ω as ρ_Π := e∘γ_M^-1∘ q .
By the properties of the constituent maps, ρ_Π is continuous and satisfies the periodic boundary condition (<ref>).
That makes ρ:=ρ_Π∘ p continuous and 𝔾-invariant.
If h:ℝ^N→ Y is a continuous function, the composition f=h∘ρ is hence continuous and 𝔾-invariant
on ℝ^n. Conversely, suppose f:ℝ^n→ Y is continuous and 𝔾-invariant.
For each z∈Ω, the preimage ρ^-1z is precisely the orbit 𝔾(x) of some x∈Π.
Since 𝔾-invariant functions are constant on orbits, the assignment
ĥ(z) := the unique value of f on the orbit ρ^-1z
is a well-defined and continuous function ĥ:Ω→ Y. Since Ω is compact,
ĥ has a (non-unique) continuous extension to a function h:ℝ^N→ Y, which
satisfies f=h∘ρ.
§ PROOFS V: KERNELS AND GAUSSIAN PROCESSES
§.§ Kernels
Suppose κ is invariant. For any f∈ℍ, (<ref>)
implies
f(ϕ x) = ⟨f,κ(ϕ x,·)⟩_ℍ = ⟨f,κ(x,·)⟩_ℍ = f(x) ,
so f is 𝔾-invariant. Conversely, suppose all f∈ℍ are 𝔾-invariant.
Let f_1,f_2,… be a complete orthonormal system. Then all f_i are 𝔾-invariant,
so (<ref>) shows
κ(ϕ x,ψ y)
= ∑_{i∈ℕ} f_i(ϕ x)f_i(ψ y)
= ∑_{i∈ℕ} f_i(x)f_i(y)
= κ(x,y)
and κ is invariant. Suppose κ is also continuous.
If κ is invariant, its infimum and supremum on ℝ^n equal its infimum and supremum
on the compact set Π, and since κ is continuous, that implies it is bounded.
That shows all functions in ℍ are continuous <cit.>.
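The construction behind this equivalence can be mimicked numerically: compose any kernel on an embedding space with an invariant embedding ρ. The Python sketch below is our illustration (not code from the source); it uses the simplest case of 𝔾=ℤ acting on ℝ by integer shifts, the embedding ρ(x)=(cos 2πx, sin 2πx), and a Gaussian kernel κ̂ on ℝ^2, which yields a kernel that is invariant by construction.

    import numpy as np

    def rho(x):
        # embedding of R/Z into R^2; invariant under integer shifts by construction
        return np.array([np.cos(2 * np.pi * x), np.sin(2 * np.pi * x)])

    def k_hat(u, v, ls=0.5):
        # Gaussian kernel on the embedding space R^2
        return np.exp(-np.linalg.norm(u - v) ** 2 / (2 * ls ** 2))

    def k(x, y):
        # invariant kernel on R: k(x + m, y + n) = k(x, y) for all integers m, n
        return k_hat(rho(x), rho(y))

    x, y = 0.3, 0.7
    print(np.isclose(k(x, y), k(x + 5, y - 2)))   # True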
The main ingredient in the proof of <ref> is the following lemma,
which shows that the RKHS of κ is isometric to that of κ̂,
and that an explicit isometric isomorphism between them is given by composition with the embedding map ρ.
Let κ̂ be a continuous kernel on Ω with RKHS ℍ̂. Set
κ:=κ̂∘(ρ⊗ρ)
and ℍ:= RKHS defined by κ .
Then κ is a continuous kernel on ℝ^n, is -invariant in both arguments,
and ℍ⊂_. The map
I:ℍ̂→ℍ defined by f̂↦f̂∘ρ
is a linear isometric isomorphism, and two functions f̂ and ĝ in ℍ̂ are
orthogonal if and only if f̂∘ρ and ĝ∘ρ are orthogonal in ℍ.
The kernel κ is clearly continuous, since κ̂ and ρ are.
Since Ω is compact, κ̂ is bounded,
and since κ_sup=κ̂_sup, it follows that κ is bounded. Bounded continuity of κ implies
all elements of ℍ are continuous <cit.>.
That shows ℍ⊂_.
Next, consider the map I. Linearity of I is obvious. To show it is bijective, write
S := span{κ(x,·) | x∈ℝ^n} and Ŝ := span{κ̂(ω,·) | ω∈Ω} .
Note that makes ℍ the norm closure of S, and ℍ̂ the norm closure of Ŝ
(see <ref>).
Consider any f̂∈Ŝ. Then f̂=∑_i a_iκ̂(ω_i,·)
for some scalars a_i and points ω_i in Ω. Since ρ is surjective by
<ref>, we can find points x_i in
ℝ^n such that ω_i=ρ(x_i). It follows that
f = f̂∘ρ = (∑_i a_iκ̂(ρ(x_i),·))∘ρ = ∑_i a_iκ(x_i,·) ∈ S
and hence
I(Ŝ)⊂ S .
Reversing the argument shows I^-1(S)⊂Ŝ.
Thus, I is a linear bijection of Ŝ and S.
Substituting f̂∈Ŝ as above into the definition
of the scalar product shows
⟨f̂,f̂⟩_ℍ̂ = ∑_{i,j} a_ia_jκ̂(ρ(x_i),ρ(x_j)) = ∑_{i,j} a_ia_jκ(x_i,x_j) = ⟨f,f⟩_ℍ
and hence ‖f‖_ℍ=‖f̂‖_ℍ̂ for all f̂∈Ŝ.
In summary, we have shown that the restriction of I to Ŝ is a bijective linear isometry
Ŝ→ S.
Since I is an isometry on a dense subset, it has a unique uniformly continuous extension
to the norm closure ℍ̂, which takes the norm closure ℍ̂ to
the norm closure ℍ of the image and is again an isometry
<cit.>.
By <ref>, there is a
unique continuous function
κ̂:Ω×Ω→ℝ that satisfies κ=κ̂∘(ρ⊗ρ) .
<ref> then implies all f∈ℍ are -invariant
and continuous.
We next show the inclusion is compact. Consider first the map I:f̂↦f̂∘ρ as
in <ref>, but now defined on the larger space (Ω). We know from
<ref> that I
is an isometric isomorphism (Ω)→_ (with respect to the
supremum norm). By <ref> its restriction to a map
ℍ̂→ℍ is also an isometric isomorphism (with respect to
the RKHS norms).
It follows that the inclusion maps
ι:ℍ→_ and ι̂:ℍ̂→(Ω)
satisfy ι=I∘ι̂∘ I^-1 .
Since κ̂ is a continuous kernel by step 1,
and its domain Ω is compact by <ref>,
the inclusion ι̂ is compact <cit.>.
The composition of a compact linear operator with any continuous linear
operator is again compact <cit.>.
Since I and its inverse are linear and continuous, that indeed makes ι compact.
Since κ̂ is a continuous kernel on a compact domain, Mercer's theorem
<cit.> holds for κ̂. It shows there
are functions f̂_1,f̂_2,… and scalars c_1≥ c_2≥…>0 such
that
(√(c_i)f̂_i)_{i∈ℕ} is an ONB for ℍ̂ and κ̂(ω,ω')=∑_i c_if̂_i(ω)f̂_i(ω')
for all ω,ω'∈Ω .
The functions f_i:=f̂_i∘ρ then satisfy
κ(x,y) = κ̂(ρ(x),ρ(y)) = ∑_i c_if̂_i(ρ(x))f̂_i(ρ(y)) = ∑_i c_if_i(x)f_i(y) .
Since the map f̂↦f̂∘ρ preserves the scalar product by
<ref>, the sequence (√(c_i)f_i) is an ONB for ℍ.
It remains to verify the representation
ℍ = { f=∑_{i∈ℕ} a_i√(c_i)f_i | a_1,a_2,…∈ℝ with ∑_i|a_i|^2<∞ } .
Since Mercer's theorem applies to κ̂, the analogous representation
ℍ̂ = { f̂=∑_{i∈ℕ} a_i√(c_i)f̂_i | a_1,a_2,…∈ℝ with ∑_i|a_i|^2<∞ }
holds on Ω, by <cit.>.
As f̂↦f̂∘ρ is an isometric isomorphism by <ref>,
that yields the representation for ℍ above.
§.§ Gaussian processes
That F is continuous and 𝔾-invariant almost surely follows immediately from
<ref>. Let Π̃ be a transversal. Our task is to show that the restriction
F|_Π̃ is a continuous Gaussian process on Π̃. To this end,
suppose h is a continuous function on ℝ^N.
Then h∘ρ is continuous by <ref>,
and the restriction is again continuous. That means
τ:h↦ (h∘ρ)|_Π̃ is a map (Ω)→(Π̃) .
Since both composition with a fixed function and restriction to a subset
are linear as operations on functions, τ is linear, and since neither composition nor restriction can increase
the sup norm, it is bounded. The restriction
F|_Π̃ = τ(H)
is hence the image of a Gaussian process with values in the separable Banach space
(Ω) under a bounded linear map into the Banach space (Π̃). That implies
it is a Gaussian process with values in (Π̃), and that
κ and μ transform accordingly
<cit.>.
|
http://arxiv.org/abs/2306.06202v2
|
20230609191016
|
NeuroGraph: Benchmarks for Graph Machine Learning in Brain Connectomics
|
[
"Anwar Said",
"Roza G. Bayrak",
"Tyler Derr",
"Mudassir Shabbir",
"Daniel Moyer",
"Catie Chang",
"Xenofon Koutsoukos"
] |
cs.LG
|
[
"cs.LG",
"cs.AI",
"q-bio.NC"
] |
NeuroGraph: Benchmarks for Graph Machine Learning in Brain Connectomics
July 31, 2023
===========================================
Machine learning provides a valuable tool for analyzing high-dimensional functional neuroimaging data, and is proving effective in predicting various neurological conditions, psychiatric disorders, and cognitive patterns. In functional Magnetic Resonance Imaging (MRI) research, interactions between brain regions are commonly modeled using graph-based representations. The potency of graph machine learning methods has been established across myriad domains, marking a transformative step in data interpretation and predictive modeling. Yet, despite their promise, the transposition of these techniques to the neuroimaging domain remains surprisingly under-explored due to the expansive preprocessing pipeline and large parameter search space for graph-based datasets construction. In this paper, we introduce NeuroGraph, a collection of graph-based neuroimaging datasets that span multiple categories of behavioral and cognitive traits. We delve deeply into the dataset generation search space by crafting 35 datasets within both static and dynamic contexts, running in excess of 15 baseline methods for benchmarking. Additionally, we provide generic frameworks for learning on dynamic as well as static graphs. Our extensive experiments lead to several key observations. Notably, using correlation vectors as node features, incorporating larger number of regions of interest, and employing sparser graphs lead to improved performance. To foster further advancements in graph-based data driven Neuroimaging, we offer a comprehensive open source Python package that includes the datasets, baseline implementations, model training, and standard evaluation. The package is publicly accessible at <https://anwar-said.github.io/anwarsaid/neurograph.html>.
§ INTRODUCTION
Graph Neural Networks (GNNs) have demonstrated remarkable efficacy in a variety of domains including recommendations, forecasting and the analysis of functional Magnetic Resonance Imaging (fMRI) data <cit.>. In human neuroimaging research, GNNs have proven valuable in capturing the complex connectivity patterns within the brain's functional networks <cit.>. By examining the spontaneous and synchronized fluctuations of the magnetic resonance signals, fMRI provides a useful means of measuring functional network connectivity <cit.>.
Neuroimaging and Graph Machine Learning (GML) are two rapidly evolving fields with immense potential for mutual collaboration. However, a significant challenge lies in bridging the gap between these domains and enabling seamless integration of neuroimaging data into state-of-the-art GML approaches <cit.>. This gap is primarily attributed to the expansive fMRI data preprocessing pipeline, the absence of interface for creating articulate graph representation datasets, and a limited understanding of the practical applications of graph machine learning to neuroimaging <cit.>. To address these challenges, the principal objectives of this study include a careful exploration of the graph-based dataset generation, with the goal of formulating a strategic road map for transitioning from fMRI data to a graph-based representation paradigm. Secondly, we conduct a rigorous evaluation of graph machine learning methodologies, with a special emphasis on GNNs, examining their efficacy when applied to diverse fMRI data configurations.
The human brain, a complex network of interconnected regions, can be represented as a graph, wherein nodes correspond to contiguous segments known as Regions of Interest (ROIs), and edges represent their relationships <cit.>. Features of the functional connectome, such as correlations between the BOLD (Blood Oxygen Level Dependent) signals between different brain regions, typically employed for downstream machine learning tasks <cit.>, can be re-envisioned as node features within attributed graph representations. These representations pave the way for a rich assortment of graph-based data representations, wherein GNNs are exceptionally well-suited <cit.>. Yet, the vast potential offered by the intersection of fMRI datasets and GNNs remains untapped, due primarily to the expansive search space for data generation and the multifaceted nature of hyperparameters. In this study, we pioneer a rigorous exploration and benchmarking for GNNs, with the following primary contributions:
* We introduce NeuroGraph, a collection of static and dynamic brain connectome datasets tailored for benchmarking GNNs in classification and regression tasks including gender and age classification, mental state decoding, and prediction of fluid intelligence and working memory scores. This enables an extensive exploration of brain connectivity and its associations with various cognitive, behavioral, and demographic variables. Details of the proposed datasets are provided in Table <ref>.
* We perform an extensive exploratory study in search of optimal graph-based data representations for Neuroimaging data, implementing 15 baseline models on 35 different datasets. Additionally, we provide detailed benchmarking for the datasets we propose.
By offering NeuroGraph, we build an essential bridge between the neuroimaging and graph machine learning communities. Researchers in the neuroimaging field can now tap into the power of cutting-edge GNNs. Our dataset generation pipeline serves as a road map, guiding researchers on how to effectively transform neuroimaging data into a standard graph representation suitable for graph machine learning. This integration facilitates the adoption of state-of-the-art graph-based techniques, unlocking new insights and accelerating discoveries in the field of neuroimaging.
§ RELATED WORK
While functional brain connectomes have long been recognized as a rich source of information in neuroscience and neuroinformatics <cit.>, their value has become increasingly evident in recent years <cit.>. Propelled by growth in data availability and methodological breakthroughs, ML has shown remarkable efficacy on tasks such as prediction of cognitive function <cit.>, identification of mental health disorders <cit.>, and understanding of brain aging <cit.>. However, these methods utilize the functional connectivity matrix while ignoring the relational information among brain regions, which could potentially aid the modeling process.
GNNs for static graphs: GNNs have significantly evolved as a major field of exploration, offering an intuitive approach to learning from graph-structured data <cit.>. In a static setting, where individual data points are represented by single graphs, a variety of methods have been introduced <cit.>. Recent studies have demonstrated the effectiveness of various approaches when applied to functional connectome data, which can be represented as different types of graphs, including weighted graphs <cit.> and attributed graphs <cit.>, among others. By leveraging the structured and relational nature of the data, GNNs not only enable learning from the functional connectivity matrix but also enhance the overall capabilities of the models <cit.>.
Dynamic graph representations: The field of learning dynamic graph representations in a graph classification setting remains relatively unexplored, especially in the realm of brain imaging <cit.>. In neuroimaging, dynamic graphs are constructed to capture the time-varying interactions and connectivity patterns in the brain <cit.>. Despite this relative lack of exploration, recent years have witnessed the emergence of several methods that have demonstrated remarkable results when applied to brain graphs <cit.>. These methods have showcased the potential of effectively capturing and analyzing the dynamic nature of brain connectivity, opening up new avenues for advancements in our understanding of brain function and neurological processes.
§ NEUROGRAPH
A few recent efforts have been made to utilize GNNs for predictive modeling on neuroimaging data. However, there is no consensus on the preprocessing pipeline and hyper-parameter configuration for deriving expressive graph-based brain datasets <cit.>. In addition, although a multitude of GNN models exists, no benchmark datasets have been created to evaluate GML approaches on brain connectome data. To fill this gap and provide a common ground, we use publicly available datasets and only minimally preprocess the data using standard fMRI preprocessing steps.
§.§ From fMRI to Graph Representations
fMRI data is typically represented in four dimensions, where the blood-oxygen level-dependent (BOLD) signal is captured over time in a series of 3-dimensional volumes. These volumes display the intensity of the BOLD signal for different spatial locations in the brain. However, since brain activity tends to exhibit strong spatial correlations, the BOLD signal is often summarized into a collection of special functional units, brain parcels. These units represent regions of interest (ROIs) whose constituent “voxels” (the smallest three-dimensional resolution elements) exhibit temporally correlated activity.
The Human Connectome Project (HCP) <cit.> is a rich, publicly available neuroimaging dataset containing not only imaging data but also a battery of behavioral and cognitive data. We select this dataset for benchmarking and utilize the established group-level Schaefer atlases <cit.> to represent the measured BOLD signal. These atlases provide a parcellation of the cerebral cortex into hierarchically organized regions at multiple granularities (resolutions).
We use resting-state and seven task fMRI paradigms from the HCP 1200 dataset. All fMRI scans underwent the HCP minimal preprocessing pipeline <cit.>. We further regressed out six rigid-body head motion parameters and their derivatives, as well as the low-order trends, from the minimally preprocessed data. The mean fMRI time series was extracted from all voxels within each ROI for different parcellation schemes. Individual (subject-wise) ROI time-series signals were temporally normalized to zero mean and unit variance.
Our study of these datasets encompasses two distinct modes of analysis: static and dynamic graph construction. We apply different GNNs to both types and perform benchmarking in five unique tasks. In the static graph construction, we investigate multiple parameters to build the graphs from the raw data, taking into consideration variations in node features, the number of nodes or regions of interest (ROIs), and the density of the graph. For node features, we take into account correlations, time-series signals, or a blend of both. For the number of nodes provided by <cit.> (i.e., ROIs), we examine three different resolutions: 100, 400, and 1000 nodes. As for graph density, we consider sparse, medium, and dense configurations. For the sparse setup, we choose the top 5% of values from the correlation matrix for edge selection, whereas for the medium and dense setups, we select the top 10% and 20% of values, respectively. We note that there are numerous methods for constructing brain graphs; however, we've opted for those more likely to yield superior performance <cit.>. Additional details about the complexity of the search space in dataset construction and the rationale behind these parameters are presented in the supplementary material. We test 10 GNN methods to find the suitable combination of parameters and use a total of 15 baseline methods for benchmarking. Using the optimal combination of parameters in the static setting, we generate benchmark datasets for corresponding tasks in the dynamic setting. In the subsequent sections, we first describe the generation of graph-based datasets, followed by the description of each task.
§.§ Graph Representation
The static graph representation encompasses the conventional methodology of generating a static functional connectome graph from an fMRI scan, see supplementary materials for additional details. We define a connectome graph as G = (V, E, X), wherein the node set V = {v_1, v_2, …, v_n} represents ROIs, while the edge set E ⊆ V × V represents positive correlations between pairs of ROIs, determined via a defined threshold. The feature matrix is denoted by X ∈ ℝ^n × d, where n signifies the total number of ROIs and d refers to the feature vector's dimension. Subsequently, we define a representation vector h_G for the graph G, obtained via a GNN with an objective to perform the desired downstream machine learning task.
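To make this construction concrete, the following is a minimal sketch (not the NeuroGraph package API) of how one static graph could be assembled from a subject's ROI correlation matrix, assuming NumPy and PyTorch Geometric; the proportional 5% density and the use of correlation rows as node features follow the description above, while the function and variable names are illustrative.

```python
import numpy as np
import torch
from torch_geometric.data import Data

def build_static_graph(corr, label, density=0.05):
    """Build a PyG graph from an ROI-by-ROI correlation matrix.

    corr    : (n_rois, n_rois) correlation matrix
    label   : integer class label (or float target for regression)
    density : fraction of strongest correlations kept as edges
    """
    # Node features: each ROI's full correlation profile (one row of the matrix).
    x = torch.tensor(corr, dtype=torch.float)

    # Proportional thresholding: keep the top `density` fraction of values.
    thr = np.quantile(corr, 1.0 - density)
    adj = corr >= thr
    np.fill_diagonal(adj, False)          # no self-loops
    src, dst = np.nonzero(adj)
    edge_index = torch.tensor(np.vstack([src, dst]), dtype=torch.long)

    y = torch.tensor([label])
    return Data(x=x, edge_index=edge_index, y=y)

# Example with random data standing in for a preprocessed subject:
corr = np.corrcoef(np.random.randn(100, 1200))   # 100 ROIs, 1200 timepoints
graph = build_static_graph(corr, label=0, density=0.05)
print(graph)
```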
fMRI data comprise numerous timepoints within a scan, permitting the construction of dynamic graphs and thereby emphasizing the temporal information encapsulated within the data. This strategy has been evidenced to be notably effective within the literature <cit.>. Within the dynamic context, we define a sequence of brain graphs over T timepoints, denoted as 𝒢 = {G_1, G_2, …, G_T}, wherein each graph G_t is constructed from the segment of the fMRI scan spanning indices t to t+Γ. Here, Γ signifies the window length, set to 50 with a stride of 3 in our experiments. This setup allows us to capture functional connectivity within 36 seconds every 2.16 seconds, adhering to the standard protocol for sliding-window analyses as outlined in <cit.>. To alleviate computational load and memory during training, we followed the approach from <cit.> and randomly sliced the time dimension of the ROI-timeseries matrix at each step, maintaining a fixed length of 150 and using 100 ROIs for the dynamic datasets. The procedure for constructing a graph for each timepoint parallels the one applied to the static graph. Subsequently, 𝒢 can be utilized to procure a dynamic graph representation h_dyn for the desired downstream ML task. We refer the reader to the supplementary material for further details.
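Building on the previous snippet, the sketch below illustrates the sliding-window construction just described (window length 50, stride 3, fixed dynamic length 150); it reuses the hypothetical build_static_graph helper and is an illustration rather than the package's exact implementation.

```python
import numpy as np

def build_dynamic_graphs(timeseries, label, window=50, stride=3,
                         dynamic_length=150, density=0.10):
    """timeseries : (n_rois, n_timepoints) ROI signal matrix."""
    n_rois, n_t = timeseries.shape
    # Randomly slice a fixed-length segment, as done to limit memory during training.
    start = np.random.randint(0, n_t - dynamic_length + 1)
    segment = timeseries[:, start:start + dynamic_length]

    graphs = []
    for t in range(0, dynamic_length - window + 1, stride):
        win = segment[:, t:t + window]
        corr = np.corrcoef(win)          # windowed functional connectivity
        graphs.append(build_static_graph(corr, label, density=density))
    return graphs

graphs = build_dynamic_graphs(np.random.randn(100, 1200), label=1)
print(len(graphs))   # 34 windows for the default settings
```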
§.§ Benchmark Datasets
The datasets are divided into three main categories: those constructed for the classification of demographics, those for the classification of brain states, and those for the prediction of cognitive traits. Each category encapsulates distinct aspects of the collected data and serves unique analytical purposes. Detailed discussions of these categories will be provided in the subsequent sections with some basic statistics presented in Table <ref>. For more detail, readers are referred to the supplementary materials.
Predicting Demographics:
The category of demographic estimation in our dataset comprises gender and age estimation <cit.>. The gender attribute facilitates a binary classification with the categories being male and female. Age is categorized into three distinct groups as in <cit.>: 22-25, 26-30, and 31-35 years. A fourth category for ages 36 and above was eliminated to maintain a reasonably balanced dataset, as it contained only 14 subjects (0.09%). We introduce four datasets named HCP-Gender, HCP-Age, DynHCP-Gender, and DynHCP-Age under this category. The first two are static graph datasets while the last two are the corresponding dynamic graph datasets.
Predicting Mental States: The mental state decoding involves seven tasks: Emotion Processing, Gambling, Language, Motor, Relational Processing, Social Cognition, and Working Memory. Each task is designed to help delineate a core set of functions relevant to different facets of the relation between the human brain, cognition, and behavior <cit.>. Under this category, we present two datasets: HCP-Activity, a static representation, and DynHCP-Activity, its dynamic counterpart.
Estimating Cognitive Traits:
The cognitive traits category of our dataset comprises two significant traits: working memory (List Sorting) <cit.> and fluid intelligence evaluation with PMAT24 <cit.>. Working memory refers to an individual's capacity to temporarily hold and manipulate information, a crucial aspect that influences higher cognitive functions such as reasoning, comprehension, and learning <cit.>. Fluid intelligence represents the ability to solve novel problems, independent of any knowledge from the past. It demonstrates the capacity to analyze complex relationships, identify patterns, and derive solutions in dynamic situations <cit.>. The prediction of both these traits, quantified as continuous variables in our dataset, is treated as a regression problem. We aim to predict the performance or scores related to these cognitive traits based on the functional connectome graphs. We generate four datasets under cognitive traits: HCP Fluid Intelligence (HCP-FI), HCP Working Memory (HCP-WM), DynHCP-FI and DynHCP-WM.
§.§ Learning models
The functional connectome, which effectively captures the network structure of brain activity, has proven to be a valuable representation of fMRI data for machine learning, as demonstrated in numerous previous studies and our own experiments <cit.>. Recognizing its significance in the learning process, we sought a suitable GNN framework that could effectively leverage the comprehensive functional connectome data through a combination of message passing and neural network layers. After thorough exploration, we implemented a GNN architecture, denoted GNN^* and illustrated in Figure <ref>(b), that incorporates residual connections and concatenates the hidden representations obtained from message passing at each layer. To further enhance the model's performance, we employ batch normalization and a multi-layer perceptron (MLP) to effectively utilize the combined representations during training. While adaptive residual connections have been extensively explored in GNNs, we present this simple yet distinctive architecture, which effectively learns representations for brain graphs <cit.>.
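The precise layer configuration of GNN^* is given in Figure <ref>(b); the snippet below is only an indicative sketch of the ingredients described above (residual connections, concatenation of per-layer hidden representations, batch normalization, and an MLP head), assuming a GCN-style convolution and mean pooling as placeholder choices.

```python
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv, global_mean_pool

class GNNStar(nn.Module):
    def __init__(self, in_dim, hidden, num_classes, num_layers=3):
        super().__init__()
        self.input_proj = nn.Linear(in_dim, hidden)
        self.convs = nn.ModuleList([GCNConv(hidden, hidden) for _ in range(num_layers)])
        self.norm = nn.BatchNorm1d(hidden * num_layers)
        self.mlp = nn.Sequential(nn.Linear(hidden * num_layers, hidden),
                                 nn.ReLU(),
                                 nn.Linear(hidden, num_classes))

    def forward(self, x, edge_index, batch):
        h = self.input_proj(x)
        hidden_states = []
        for conv in self.convs:
            h = torch.relu(conv(h, edge_index)) + h   # residual connection
            hidden_states.append(h)
        h = torch.cat(hidden_states, dim=-1)          # concatenate per-layer representations
        h = self.norm(h)
        h_graph = global_mean_pool(h, batch)          # graph-level readout
        return self.mlp(h_graph)
```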
Recently, a number of dynamic graph representation approaches have been introduced in conjunction with recurrent neural networks (RNNs), such as GRUs and LSTMs, and with transformers <cit.>. However, assessing the effectiveness of GNN models in a unified dynamic setting using the existing approaches presents a significant challenge. Therefore, we implement a simple and generalized architecture tailored to process dynamic graphs for the graph classification problem, as illustrated in Figure <ref> (a). Our architecture comprises two distinct modules. The first is a GNN-based learning module, responsible for deriving graph-level representations from each graph snapshot. Following this, a transformer module takes over, applying attention to the learned representations from the GNNs. Finally, the outputs are averaged into a single dynamic graph representation vector, h_dyn. This design offers a universally applicable method for evaluating multiple GNN methods within a dynamic graph setting for downstream ML classification and regression problems.
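A rough sketch of this two-module design is shown below, assuming a GNN encoder that maps a single graph to a fixed-size embedding (for instance, the previous GNNStar with its final layer sized to the hidden dimension); the transformer hyperparameters are illustrative rather than the exact values used in our experiments.

```python
import torch
import torch.nn as nn

class DynamicGraphBaseline(nn.Module):
    """GNN encoder per snapshot, transformer over the snapshot sequence, mean readout."""
    def __init__(self, gnn_encoder, hidden, num_classes, nhead=4, num_layers=2):
        super().__init__()
        self.encoder = gnn_encoder                 # maps one graph -> (1, hidden) embedding
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=nhead, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, snapshots):
        # snapshots: list of PyG Data objects for one subject
        embs = [self.encoder(g.x, g.edge_index,
                             torch.zeros(g.num_nodes, dtype=torch.long)) for g in snapshots]
        seq = torch.stack([e.squeeze(0) for e in embs]).unsqueeze(0)  # (1, T, hidden)
        attended = self.transformer(seq)           # attention over the learned representations
        h_dyn = attended.mean(dim=1)               # average into a single dynamic representation
        return self.head(h_dyn)
```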
§ BENCHMARKING SETUP
In order to thoroughly evaluate the performance of brain graphs generated through different hyperparameters, we propose a series of research questions. These questions seek to identify the optimal setting for our graph-based neuroimaging analysis and ultimately enhance the performance of the predictive models derived from it.
What are the optimal node feature configurations?
The first question aims to identify the best configurations for node features. This involves an exploration and comparison of various feature representations to discern their effectiveness on the performance of the derived predictive models. In assessing node feature configurations, our analysis encompasses the correlation matrix, the time-series BOLD signals, as well as their combination. The correlation matrix is generated by calculating the correlation values amongst all ROIs. On the other hand, the BOLD signals are derived post the preprocessing of the input fMRI image, adhering to the preprocessing pipeline outlined in Section <ref>.
To what extent does the number of ROIs impact the performance of predictive modeling on graphs?
The second question delves into the influence of varying the number of ROIs on the performance of predictive modeling. The objective is to assess how the granularity of ROIs affects the quality and the performance of the predictive models. We evaluate 100, 400, and 1000 ROIs.
To what degree does sparsifying brain functional connectome graphs impact the performance of predictive modeling? What threshold yields optimal performance?
Our third question investigates the impact of sparsifying brain functional connectome graphs on the performance of the predictive models. It aims to establish the threshold that leads to optimal model performance in a graph machine learning setting. In our exploration, we consider the top 20%, 10%, and 5% percentile values from the correlation matrices to construct the graph edges.
Which graph convolution approaches are preferable for the predictive modeling on brain graphs?
Our fourth and final question delves into the exploration of various graph convolution methods, assessing their suitability for predictive modeling on brain graphs. The aim here is not only to identify, but also to recommend the most effective techniques, considering the specific features and intricacies of neuroimaging data. In this endeavor, we have put over 12 GNNs to the test, including two of our own implemented frameworks, to gauge their comparative performance.
By addressing these questions, we aim to set a robust benchmarking framework for graph-based machine learning methods in neuroimaging, providing invaluable insights into their optimal application.
§ BENCHMARKING RESULTS
In this section, we introduce the baseline models, describe our experimental setup, and present the results from our preliminary exploration study. Following this, we lay out our approach to benchmarking and showcase the performance of various baseline methods on each dataset.
§.§ Baselines and Experimental Setup
This section outlines the specifics of our unique, generalized experimental setup designed to evaluate a range of GNN models. We consider 10 well-established GNN models: k-GNN <cit.>, GCN <cit.>, GraphSAGE <cit.>, Unified Message Passing denoted as UniMP <cit.>, Residual GCN (ResGCN) <cit.>, Graph Isomorphism Network (GIN), Chebyshev Convolution <cit.>, Graph Attention Network (GAT) <cit.>, Simplified GCN (SGC) <cit.>, and General Convolution (General) <cit.>[We use PyG implementations and default settings for running all these models.]. We also consider a 3-layer Neural Network (NN), a two-dimensional Convolutional Neural Network (CNN), and a Random Forest for comparison.
In our experimental setup, we devise a graph classification architecture comprising three layers of GNNs, followed by a sort pooling aggregator <cit.>. Sort pooling sorts the node features based on the last channel, selecting only the first k representations. The pooled output is then passed through two one-dimensional convolution layers, which are succeeded by a two-layer Multi-Layer Perceptron (MLP). This architecture has been consistently utilized across all GNNs throughout the entire experimental setup. For the dynamic datasets, we utilize our baseline method with five different GNNs. For the NN, we utilized 512, 256, and 128 hidden units in each layer, respectively. For the CNN, we utilized a four-layer model with a stride of 2, 64 kernels of size 5, and padding set to 2. This was complemented by three fully connected layers <cit.>. For the Random Forest (RF) <cit.>, we opted for 100 estimators, leaving the remaining parameters at their Scikit-learn defaults. All of our experiments were carried out on a system equipped with an Intel(R) Xeon(R) Gold 6238R CPU operating at 2.20GHz with 112 cores, 512 GB of RAM, and an NVIDIA A40 GPU with 48GB of memory.
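The snippet below sketches this shared architecture (three GNN layers, sort pooling of the top k nodes, two one-dimensional convolutions, and a two-layer MLP), assuming PyG's global_sort_pool utility is available; the channel counts, kernel sizes, and k are illustrative rather than the exact values used in the benchmark.

```python
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv, global_sort_pool

class SortPoolClassifier(nn.Module):
    def __init__(self, in_dim, hidden=64, k=30, num_classes=2):
        super().__init__()
        self.convs = nn.ModuleList([GCNConv(in_dim, hidden),
                                    GCNConv(hidden, hidden),
                                    GCNConv(hidden, hidden)])
        self.k = k
        # First 1D conv reads one node representation at a time (kernel = hidden dim).
        self.conv1d_1 = nn.Conv1d(1, 16, kernel_size=hidden, stride=hidden)
        self.conv1d_2 = nn.Conv1d(16, 32, kernel_size=5)
        self.mlp = nn.Sequential(nn.Linear(32 * (k - 4), hidden),
                                 nn.ReLU(),
                                 nn.Linear(hidden, num_classes))

    def forward(self, x, edge_index, batch):
        for conv in self.convs:
            x = torch.relu(conv(x, edge_index))
        x = global_sort_pool(x, batch, self.k)   # (batch, k * hidden)
        x = x.unsqueeze(1)                       # (batch, 1, k * hidden)
        x = torch.relu(self.conv1d_1(x))         # (batch, 16, k)
        x = torch.relu(self.conv1d_2(x))         # (batch, 32, k - 4)
        return self.mlp(x.flatten(start_dim=1))
```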
Models training: We have carefully carried out the training and evaluation of each dataset in our study. Each dataset was partitioned randomly with 70% for training, 20% for testing, and 10% for validation. To ensure reproducibility and balance across the datasets, we employed a fixed seed, 123, for the split in a stratified setting. This stratified approach facilitated an equitable distribution of classes in each partition. Each model underwent training for 100 epochs with a learning rate of 1e-5 for classification, and for 50 epochs with a learning rate of 1e-3 for regression problems. Across all experiments, we set dropout to 0.5, weight decay to 5e-4, and designated 64 hidden dimensions for both the GNN convolution and MLP layers. Furthermore, for loss functions, we utilized cross entropy for classification and mean absolute error for regression problems.
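For reference, a condensed sketch of this training configuration for the classification case is given below, using scikit-learn for the stratified 70/20/10 split with seed 123 and PyTorch for optimization; the dataset list and the GNNStar model are assumed to come from the earlier snippets.

```python
import torch
from sklearn.model_selection import train_test_split
from torch_geometric.loader import DataLoader

# dataset: list of PyG Data objects from the earlier sketches (hypothetical)
labels = [int(g.y) for g in dataset]
train, rest = train_test_split(dataset, test_size=0.30, stratify=labels, random_state=123)
val, test = train_test_split(rest, test_size=2/3,
                             stratify=[int(g.y) for g in rest], random_state=123)

loader = DataLoader(train, batch_size=16, shuffle=True)
model = GNNStar(in_dim=train[0].num_node_features, hidden=64, num_classes=2)
opt = torch.optim.Adam(model.parameters(), lr=1e-5, weight_decay=5e-4)
loss_fn = torch.nn.CrossEntropyLoss()

for epoch in range(100):
    model.train()
    for batch in loader:
        opt.zero_grad()
        out = model(batch.x, batch.edge_index, batch.batch)
        loss = loss_fn(out, batch.y)
        loss.backward()
        opt.step()
```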
§.§ Exploratory Experiments and Results
Here we address the research questions outlined earlier by conducting a series of experiments including the evaluation of different node feature configurations, the influence of varying numbers of ROIs, the implications of sparsity in brain graphs, and the effectiveness of diverse graph convolution approaches. Each experiment aligns with a research question, thereby paving the way for comprehensive analysis and definitive conclusions.
Performance enhancement with correlations as node features:
Our first step involves evaluating the interplay between the number of ROIs and the configuration of node features, with an aim to streamline the overall search space. For this purpose, we engage in the gender classification problem using 10 different GNNs. The results of these experiments are presented in Table <ref>. It is clear that employing correlations as node features consistently enhances the performance across all evaluated numbers of ROIs. However, what caught our attention was the significant variance in results across node-feature configurations (correlations versus BOLD signals) and numbers of ROIs. The performance notably declines when correlations and BOLD signals are combined and when the number of ROIs is reduced. This motivates further investigation into how to leverage the BOLD signal, or perhaps derive features from it, for learning. Furthermore, the performance of the different GNN baselines does not consistently correlate with the number of ROIs or node features.
Performance enhancement through large ROIs and sparse brain graphs:
Our analysis extended to evaluating the efficacy of 10 GNNs on gender classification, using a varying number of ROIs and different graph densities. In addition to gender classification, we further incorporated an activity classification problem to strengthen our observations under different settings. For all the experiments, we opted for correlations as node features, a decision driven by the consistent boost in performance they offered in the previous experiment. The results are presented in Table <ref>. An important observation from our findings is that a larger number of ROIs (1000) demonstrates superior performance in gender classification. Similarly, a significant number of GNNs exhibit improved results with the use of 1000 ROIs for the activity classification problem. An analysis of the graph densities reveals an intriguing trend. We found that most GNNs achieved superior results when deployed on sparse graphs. Therefore, we deduce that the combination of a large number of ROIs, sparse graphs, and correlation features contributes significantly to enhancing the performance of GNNs.
§.§ Benchmarking with Optimal Settings
Considering the optimal settings obtained by exploring the search space presented in the previous section, we here present the experimental setup and benchmarking results on the proposed 10 datasets.
The classification accuracy of all baseline models is detailed in Table <ref>. It is evident from the results that GNN^* stands out as the leading performer. However, the Neural Network's performance is also notably impressive. Similarly, the results pertaining to the regression problems are outlined in Table <ref>. The leading performer on the regression problems is again GNN^*.
In Table <ref>, we lay out the classification and regression results obtained on the dynamic datasets. Given that we use a basic dynamic baseline and construct the dynamic datasets with limited dynamic lengths and numbers of ROIs, the performance does not quite match that on the static datasets. Nonetheless, it is worth noting that UniMP consistently demonstrates respectable performance despite these constraints.
§ CONCLUSION
In this work, we introduce novel brain connectome benchmark datasets specifically tailored for graph machine learning, representing a promising avenue for addressing various challenges in neuroimaging. The inherent symmetries and complex higher-level patterns found in brain graphs make them well-suited for graph machine learning techniques. To advance this vision, we present NeuroGraph, a comprehensive suite encompassing benchmark datasets and computational tools.
In our comprehensive exploratory study encompassing 35 datasets, we conducted a thorough analysis by running multiple machine learning models. Our key observations are as follows. Firstly, utilizing correlations as node features shows promising potential for enhancing model performance. Secondly, we observed that increasing the number of ROIs, i.e., employing large-scale brain graphs, leads to improved performance compared to datasets with fewer ROIs. Thirdly, we demonstrated that employing a sparser graph setting resulted in enhanced model performance. Through a range of experiments across various learning objectives, we further highlight that GNNs exhibit superior performance compared to traditional NNs and 2D CNNs. These findings underscore the significant potential of GNNs in achieving improved performance across diverse tasks and underscore their suitability for graph-based neuroimaging data analysis.
Based on these insightful observations, we have developed NeuroGraph, a meticulously curated and comprehensive benchmark dataset collection specifically designed for graph-based neuroimaging. Additionally, we provide computational tools to explore the design space of graph representations derived from neuroimaging data, to facilitate the transformation of fMRI data into graph representations, and to showcase the potential of GNNs in this context. NeuroGraph serves as a valuable resource, offering a road map for researchers interested in leveraging graph-based approaches for fMRI analysis and demonstrating the effective utilization of GNNs in this domain.
§ BENCHMARKS AVAILABILITY AND LICENSING
The fMRI data utilized in this research was sourced from the Human Connectome Project <cit.>. The proposed graph-based benchmark datasets can be accessed for download at <https://anwar-said.github.io/anwarsaid/neurograph.html>. These datasets are provided in PyG[<https://pyg.org/>] format, optimized for use with Graph Neural Networks (GNNs). However, they can also be conveniently incorporated into other platforms. Additionally, the associated code for downloading, preprocessing, and benchmarking is open to the public at <https://github.com/Anwar-Said/NeuroGraph>, complete with comprehensive documentation.
Acknowledgement: “Data were provided [in part] by the Human Connectome Project, WU-Minn Consortium (Principal Investigators: David Van Essen and Kamil Ugurbil; 1U54MH091657) funded by the 16 NIH Institutes and Centers that support the NIH Blueprint for Neuroscience Research; and by the McDonnell Center for Systems Neuroscience at Washington University.”
§ NEUROGRAPH AND NEUROIMAGING DATA
Neuroimaging, a powerful field of study, enables researchers to delve into the complexities of the human brain by capturing detailed images and measurements. Recent advancements in technology have resulted in an abundance of neuroimaging data, particularly functional magnetic resonance imaging (fMRI), which offers invaluable insights into brain activity. However, understanding and analyzing fMRI data pose several challenges. Firstly, the high dimensionality of fMRI data presents a significant hurdle. Additionally, inherent noise and variability in fMRI signals can obscure underlying neural activity. Complex spatial and temporal dependencies further complicate fMRI data analysis, demanding advanced modeling techniques. Furthermore, the interpretation and analysis of fMRI data can be time-consuming and subjective. The graphical representation of fMRI data offers a plethora of opportunities to tackle these challenges. For instance, network science
and graph theoretical approaches provide a diverse range of tools to explore brain regions and their connectivity patterns <cit.>. Furthermore, graph machine learning techniques such as GNNs are particularly well-suited for analyzing neuroimaging data and have the potential to provide valuable insights. The provision of graph-based neuroimaging benchmarks and computational tools plays a crucial role in advancing the field, which is the main focus of this study.
§.§ fMRI Data Sources
Several initiatives have been undertaken in the past decade to assemble comprehensive fMRI datasets. One notable source is the Human Connectome Project (HCP) dataset <cit.>. The HCP dataset offers an extensive collection of multimodal neuroimaging data, including resting-state fMRI, task-based fMRI, and structural MRI scans, from a large cohort of healthy individuals. In addition to large neuroimaging datasets curated by institutions or projects, some notable resources are OpenNeuro, OpenfMRI and fcon_1000[<http://fcon_1000.projects.nitrc.org/>] platforms, which host a diverse range of publicly available fMRI datasets contributed by researchers worldwide <cit.>. These datasets cover various experimental paradigms, clinical populations, and research domains, providing researchers with a wealth of data for analysis and investigation.
We have chosen to utilize the HCP S1200 dataset from the Brain Connectome as a primary resource for our graph-based benchmarking <cit.>. This dataset is well-suited for graph-based benchmarking due to its extensive coverage of brain regions and their interconnections. Additionally, the HCP S1200 dataset provides valuable demographic and behavioral information, enabling comprehensive analyses that consider various factors influencing brain connectivity. Its wide availability and standardized processing pipelines further contribute to its suitability for graph-based benchmarking, ensuring consistency and comparability across studies. Thus, the HCP S1200 dataset from the Brain Connectome represents a robust choice for conducting graph-based benchmarking studies in the field of neuroimaging.
§.§ Reading HCP Dataset
Storing and reading fMRI datasets presents a formidable challenge due to their substantial storage requirements, necessitating significant disk space allocation; e.g., each subject of HCP S1200 requires 1.1 GB of space on disk. Moreover, the preprocessing of fMRI data calls for tools that are not only user-friendly but also highly efficient. Fortunately, the Human Connectome Project (HCP) offers an AWS S3 bucket that allows for seamless data crawling. NeuroGraph, with its implementation utilizing the boto3 Python package, provides an efficient solution for crawling the dataset. Boto3, a widely used Python package, enables seamless interaction with AWS services, facilitating efficient data retrieval and preprocessing in the NeuroGraph framework. Our implementation offers users the flexibility to either store the datasets or preprocess them on the fly if storage space is limited (see Table <ref> for disk storage). To access the HCP data, users are required to obtain credentials from HCP[<https://db.humanconnectome.org>] and provide them to NeuroGraph. Moreover, NeuroGraph also provides a Python class for preprocessing data from local storage.
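As a generic illustration (this is not the NeuroGraph crawler itself), the snippet below shows how boto3 can list and download objects from an S3 bucket once HCP credentials have been obtained; the bucket name and object keys are assumptions and should be replaced with the values documented by HCP.

```python
import boto3

# Credentials obtained from the HCP database (db.humanconnectome.org).
s3 = boto3.client("s3",
                  aws_access_key_id="YOUR_HCP_KEY",
                  aws_secret_access_key="YOUR_HCP_SECRET")

BUCKET = "hcp-openaccess"      # assumed bucket name; check the HCP documentation
prefix = "HCP_1200/100307/"    # hypothetical subject prefix

# List a subject's files and download one of them.
resp = s3.list_objects_v2(Bucket=BUCKET, Prefix=prefix, MaxKeys=20)
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])

# Hypothetical key; adjust to the actual bucket layout.
key = prefix + "MNINonLinear/Results/rfMRI_REST1_LR/rfMRI_REST1_LR.nii.gz"
s3.download_file(BUCKET, key, "rfMRI_REST1_LR.nii.gz")
```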
§.§ Data Preprocessing
In close collaboration with domain experts from both the neuroimaging and graph machine learning fields, NeuroGraph's preprocessing pipeline is divided into six stages. These stages ensure the quality and reliability of the fMRI data. Initially, we utilize data that has already been processed using the HCP minimal processing pipeline <cit.>.
* Step 1 - Brain Parcellation: The first phase of our pipeline involves brain parcellation, a process that divides the brain into smaller regions or parcels. This step allows for the analysis of functional connectivity within and between these parcels. In our study, we employ the Schaefer atlases <cit.>, widely used brain parcellation schemes that define neurobiologically meaningful features of brain organization. These atlases provide a parcellation of the cerebral cortex into hierarchically organized regions at multiple resolutions.
Using the population level atlases, we extract the mean fMRI timeseries for each region of interest (ROI). This provides a representative measure of the average neural activity within each specific brain region, enabling subsequent connectivity analyses.
* Step 2 - Remove Scanner Drifts: Next, we remove linear and quadratic trends. This step aims to remove the scanner drifts in the fMRI signals that arise from instrumental factors. By eliminating these trends, we enhance the signal-to-noise ratio and increase the sensitivity to neural activity.
* Step 3 - Remove Motion Artifacts: To further improve data quality, we apply regression techniques to mitigate the effects of motion artifacts. Specifically, we regress out six rigid-body head motion parameters, along with their derivatives, from the fMRI data. These parameters capture the movement and rotation of the subject's head during the scanning session, ensuring that any potential confounding effects are minimized.
* Step 4 - Subject-Level Signal Normalization: We perform subject-level normalization of the ROI timeseries signals. More specifically, we temporally normalize all signals from a subject to zero mean and unit variance. This step allows for fair comparisons and facilitates the identification of meaningful variations in the functional connectivity patterns across subjects.
* Step 5 - Calculate Correlation Matrix: We compute the correlation matrices from the ROI timeseries signals. Correlation matrices capture the strength of functional connectivity between different ROIs. By calculating pairwise correlations between the timeseries signals of each ROI, we obtain a matrix that represents the interregional functional connections within the brain. This step allows us to quantify and analyze the patterns of functional connectivity across the entire brain, and construct a graph. The correlation matrices serve as a valuable tool for investigating the network-level organization of the brain and identifying regions that exhibit synchronous activity <cit.>. These matrices provide a representation of the functional architecture and can be further utilized for graph-based analyses, such as network characterization and identification of key brain hubs <cit.>. In Figure <ref> and <ref>, we provide the visualizations of BOLD signals and their corresponding graphs for one subject in certain conditions.
* Step 6 - Construct Static/Dynamic Attributed Graphs: Finally, we compute two types of graph-based datasets from the functional connectivity matrix: static and dynamic graphs. As discussed in Section 3 of the paper, the static graph is defined as G = (V, E, X). Here, the node set V = {v_1, v_2, …, v_n} represents ROIs, while the edge set E ⊆ V × V denotes positive correlations between pairs of ROIs, as determined by a predefined threshold. The feature matrix is represented by X ∈ ℝ^n × d, where n symbolizes the total number of ROIs, and d corresponds to the dimension of the feature vector. We explore the dataset generation search space by considering different numbers of ROIs, different thresholds, and node features to identify optimal parameters; a condensed sketch of steps 2-6 is given after this list. The next section provides a comprehensive overview of the dataset construction search space.
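The sketch below condenses steps 2-6 into plain NumPy, operating on an ROI-by-time matrix that has already been extracted with a Schaefer atlas (step 1); the motion-parameter array is assumed to be available for the scan, and the final graph construction reuses the earlier hypothetical build_static_graph helper.

```python
import numpy as np

def preprocess_and_connect(roi_ts, motion):
    """roi_ts : (n_rois, T) mean ROI time series;  motion : (T, 6) head-motion parameters."""
    n_rois, T = roi_ts.shape
    t = np.arange(T)

    # Steps 2-3: regress out low-order trends, motion parameters and their derivatives.
    d_motion = np.gradient(motion, axis=0)
    confounds = np.column_stack([np.ones(T), t, t**2, motion, d_motion])
    beta, *_ = np.linalg.lstsq(confounds, roi_ts.T, rcond=None)
    cleaned = roi_ts.T - confounds @ beta          # (T, n_rois) residuals

    # Step 4: temporal normalization to zero mean and unit variance.
    cleaned = (cleaned - cleaned.mean(axis=0)) / cleaned.std(axis=0)

    # Step 5: ROI-by-ROI correlation matrix.
    corr = np.corrcoef(cleaned.T)

    # Step 6: threshold positive correlations into an attributed graph
    # (see the earlier build_static_graph sketch).
    return corr

corr = preprocess_and_connect(np.random.randn(100, 1200), np.random.randn(1200, 6))
```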
Regarding the parameter setup for constructing our benchmark datasets, we opt for a sparse setup (top 5%) with 1000 ROIs for the HCP-Gender, HCP-Age, HCP-WM, and HCP-FI datasets. However, for the HCP-Activity dataset, we reduce the number of ROIs to 400 in order to manage memory overhead. In the dynamic setting, we employ a sliding window approach with a fixed window length (Γ) set to 50 and a stride of 3. Considering memory constraints and computational overhead, we fix the dynamic length (l) to 150 and slide over the preprocessed timeseries matrix to construct dynamic graphs. For all dynamic graphs, we consider 100 ROIs and medium sparsity (top 10%). With this setting, the total number of dynamic graphs we obtain for each subject is ((l-Γ)/stride)+1.
§.§ The Design Space is Huge
The design space for constructing graphs from correlation matrices is substantial, given the multitude of available methods. We can construct diverse graph types employing various strategies. For instance, some of the potential graph types to consider include simple undirected graphs as demonstrated in <cit.>, weighted graphs <cit.>, attributed graphs <cit.>, and minimum spanning trees <cit.>, among others. Similarly, a range of parameters comes into play during this process, further expanding the design space for these constructions. These parameters include the number of ROIs, edge weights, density thresholding for edge selection, and node features, to name a few.
GNNs have shown considerable promise in handling attributed graphs, demonstrating their effectiveness in various domains <cit.>. Attributed graphs, which include not only the graph topology but also node-level features, represent complex systems more accurately than simple graphs. GNNs leverage these attributes to capture both local and global structural information, allowing for the development of more comprehensive graph representations. Considering the importance of attributed graphs, we opted to construct rich, attributed brain graphs.
Node features: Traditional methods for representing node features in graphs include using coordinates <cit.>, one-hot encoding <cit.>, and mean activation <cit.>. Coordinates serve to provide spatial information about the nodes, while one-hot encoding is used for categorical features, effectively distinguishing different node types. Mean activation, on the other hand, can give insights about the average level of a node's activity or influence. While these methods provide a base level of information, they may not fully capture the rich complexity inherent in many data structures, such as brain graphs. To address this, we explore more powerful ways of representing node features, including correlation vectors, BOLD signals, and the combination of both. Correlation vectors can encapsulate the relationship between different nodes, providing insight into the connectivity and interaction within the graph. BOLD signals give information about changes in blood flow in the brain, which can be an indicator of neural activity. By combining both of them, we may enrich models with a wealth of information, thereby capturing the intricate details and relationships present in brain graphs.
Number of ROIs: The number of ROIs in brain graph construction significantly impacts the granularity and overall scope of the resulting graph. Using a smaller number of ROIs, such as 100, can lead to a more generalized and coarser view of brain connectivity. This simplified perspective can be useful for broad overviews and initial exploration but might overlook intricate local interactions or specific clusters of activity. Conversely, using a larger number of ROIs, such as 400 or 1000, allows for a more detailed and finer representation of the brain's connectivity. With more ROIs, the graph can capture more specific interconnections, potentially revealing sub-networks or localized activity patterns that a coarser graph might miss. However, larger graphs also present a challenge in terms of computational load and complexity, and they are also more prone to noise. Interestingly, different methods in the literature have adopted different numbers of ROIs for their analysis <cit.>. These varying approaches underscore the fact that the choice of the number of ROIs is not merely a matter of computational convenience, but can significantly influence the outcomes of the study.
In light of this, our research aims to explore these three ROIs sizes: 100, 400, and 1000. Our goal is to understand the impact of different graph granularity levels on the performance of GNNs. By doing so, we hope to provide deeper insights into how different levels of detail in the graph structure affect the GNN's ability to capture and model brain connectivity. This investigation could potentially guide the selection of an optimal ROI size in future brain graph studies, striking a balance between capturing sufficient detail and maintaining computational feasibility.
Density thresholding: Graph density is a fundamental property that may impact the performance of GNNs. Graph density refers to the proportion of the possible connections in a graph that are actual connections. It influences how information is propagated through the network and may affect the accuracy and efficiency of the GNN. A sparse (low-density) graph might lead to information underflow, with some nodes being poorly connected, which might cause inadequate learning of node representations. On the other hand, a high-density graph could lead to an information overflow, with a significant amount of information being propagated, possibly causing noise and overfitting <cit.>.
Thresholding, on the other hand, is a crucial step in the construction of brain graphs. It's used to determine which correlations are strong enough to be included as edges in the graph. There are several approaches to thresholding. One is absolute thresholding, where a fixed threshold value is selected, and all correlations in the matrix above this threshold are included as edges in the graph. However, the choice of an absolute threshold can be somewhat arbitrary, and may result in graphs of varying sizes and densities. This variability can complicate comparisons between graphs <cit.>. Proportional thresholding is another method, in which the strongest x% of correlations are included as edges in the graph. This method ensures that all resultant graphs have the same density of edges, which facilitates comparisons between them. However, it can also result in the inclusion of weak, potentially non-significant correlations in the graph. To avoid this issue, some studies consider only positive correlations, which allows the construction of graphs with various densities and avoids the complications of negative thresholding <cit.>.
Indeed, there are numerous ways to conduct thresholding in brain graph construction, with several options available within each thresholding approach. Each method and option presents its unique set of advantages and potential limitations. In this context, we focus on proportional thresholding with positive correlations, an approach that has shown encouraging results in previous research <cit.>. Specifically, we explore three levels of density: those defined by the top 5%, 10%, and 20% percentile values from the correlation matrices. These densities represent different levels of graph sparsity, offering a broad perspective on how the choice of threshold can impact the topology and interpretability of the resulting brain networks. We note that the terms “sparse” (5%) and “dense” (20%) are relative and dependent on the context of feasible ranges. Despite their different percentages of edges, both sparse and dense graphs exhibit a complexity of O(n^2) edges. We observed that even in sparse datasets, the average degree is around 50 for 1000 ROIs, indicating a substantial level of connectivity.
§ NEUROGRAPH BENCHMARK DATASETS
We propose a collection of ten datasets tailored to five distinct tasks, encompassing both static and dynamic contexts. These tasks are identified as HCP-Activity, DynHCP-Activity, HCP-Gender, DynHCP-Gender, HCP-Age, DynHCP-Age, HCP-WM, DynHCP-WM, HCP-FI, and DynHCP-FI. These datasets are derived from the HCP S1200 dataset, following a sequence of preprocessing operations. For the creation of static datasets, we eliminated two subjects that contained fewer than 1200 scans and then applied the preprocessing as outlined in the previous sections. The resulting datasets are represented as sparse matrices with 1000 ROIs. However, we've tailored the Activity dataset to include only 400 ROIs owing to its larger size of over 7000 scans, as this adaptation was necessary to overcome memory constraints. As for the dynamic datasets, we've standardized the dynamic length to 150, with a window size of 50 and a stride of 3. Moreover, to alleviate the substantial memory demands, we've limited the dynamic datasets to encompass only 100 ROIs. The distribution of classes for each dataset, as well as the values for regression tasks, have been visualized and are presented in Figure <ref>.
§.§ GNN^* and Dynamic Graph Baselines
Our study also explores a variation of residual GNNs, which we name GNN^*, a model that leverages both residual connections and a feature concatenation approach, enhancing the utilization of the functional connectome in the training process. As delineated in Section 3.4 and visualized in Figure 2 of the main paper, GNN^* employs a universal graph convolution layer, facilitating the use of any GNN convolution contingent on the project's requirements. Similarly, the dynamic graph baseline (depicted in Figure 2 of the main paper) also uses a general graph convolution, followed by a Transformer module. Throughout our experimentation, we employed UniMP with GNN^* and tested five models using the dynamic baseline, the results of which are tabulated in Table 6 of the main paper. All other parameters remain consistent with the detailed exposition in the experimental setup (Section 5.1) of the main paper.
§ MEMORY AND RUNNING TIME ANALYSIS
Following a comprehensive and rigorous exploration of the search space, we have identified and established optimal datasets that strike a balance between minimizing memory requirements and maintaining an effective number of parameters. The trade-off achieved ensures that models can run smoothly on our datasets using machines with reasonable computing power, making them highly accessible to a wide range of users. This optimization also yields the additional benefit of reduced training times; our models are capable of training in mere minutes, significantly accelerating the model development cycle and promoting rapid iterative progress.
The specifics of this optimization are illustrated in the context of the Unified Message Passing (UniMP) model <cit.>, which we use to showcase the efficient resource usage of our datasets and approach. In Table <ref>, we offer detailed insights into the running times and memory requirements of the UniMP model. We executed UniMP on each dataset for 100 epochs and recorded both GPU memory utilization and overall training time, which includes data loading. The number of hidden units was 32 for the GNN layer and 128 for the MLP layers. These data points provide a tangible representation of the efficiency gains achieved through our dataset size optimization process. Such optimizations are instrumental in ensuring that the datasets are not only computationally effective with any model but also highly accessible, enabling broader applicability across a variety of hardware configurations. All experiments were executed on a system equipped with an Intel(R) Xeon(R) Gold 6238R CPU operating at 2.20GHz with 112 cores, 512 GB of RAM, and an NVIDIA A40 GPU with 48GB of memory.
§ MODELS PERFORMANCE AND STANDARD ERROR
We plot the accuracy along with the standard deviation over 10 runs, each with a different seed, for all the models on three distinct datasets, HCP-Activity, HCP-Age and HCP-Gender, in Figure <ref>. We observed that the results exhibited a higher level of stability on both the HCP-Activity and HCP-Age datasets. This indicates that the models performed consistently and yielded more reliable results, suggesting a greater degree of confidence in the accuracy measurements. On the HCP-Gender dataset, we observed slightly higher standard errors across the models. Moreover, we provide the visualization of the hidden activations obtained from the last layer of GNN^* for the test and validation sets trained on the HCP-Activity and HCP-Gender datasets in Figure <ref>. We used TSNE for these visualizations.
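A visualization of this kind can be produced along the following lines, assuming the last-layer GNN^* activations and the corresponding labels have been collected into arrays; only standard scikit-learn and matplotlib calls are used.

```python
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

# activations: (n_samples, hidden_dim) last-layer representations, labels: (n_samples,)
emb = TSNE(n_components=2, random_state=0).fit_transform(activations)
plt.scatter(emb[:, 0], emb[:, 1], c=labels, cmap="coolwarm", s=10)
plt.title("t-SNE of last-layer GNN* activations")
plt.show()
```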
|
http://arxiv.org/abs/2306.05929v2
|
20230609144115
|
Efficient operator method for modelling mode mixing in misaligned optical cavities
|
[
"William J. Hughes",
"Thomas H. Doherty",
"Jacob A. Blackmore",
"Peter Horak",
"Joseph F. Goodwin"
] |
physics.optics
|
[
"physics.optics",
"quant-ph"
] |
[email: ][email protected]
Department of Physics, University of Oxford, Clarendon Laboratory, Parks Rd, Oxford, OX1 3PU, UK
Department of Physics, University of Oxford, Clarendon Laboratory, Parks Rd, Oxford, OX1 3PU, UK
Department of Physics, University of Oxford, Clarendon Laboratory, Parks Rd, Oxford, OX1 3PU, UK
Optoelectronics Research Centre, University of Southampton, Southampton SO17 1BJ, UK
[email: ][email protected]
Department of Physics, University of Oxford, Clarendon Laboratory, Parks Rd, Oxford, OX1 3PU, UK
The transverse field structure and diffraction loss of the resonant modes of Fabry-Pérot optical cavities are acutely sensitive to the alignment and shape of the mirror substrates. We develop extensions to the `mode mixing' method applicable to arbitrary mirror shapes, which both facilitate fast calculation of the modes of cavities with transversely misaligned mirrors and enable the determination and transformation of the geometric properties of these modes. We show how these methods extend previous capabilities by including the practically-motivated case of transverse mirror misalignment, unveiling rich and complex structure of the resonant modes.
Efficient operator method for modelling mode mixing in misaligned optical cavities
J. F. Goodwin
July 31, 2023
==================================================================================
§ INTRODUCTION
The majority of Fabry-Pérot optical cavities have mirrors with sufficiently constant curvature to be described well by standard resonator theory <cit.>. However, there are applications of cavities with non-spherical mirrors for which standard theory is not suitable. As a first example, the desire to realise stronger light matter coupling, whether to increase the rate of single photon sources <cit.> or to observe light-matter hybridisation <cit.>, has led to the use of microcavities <cit.>; specialist fabrication techniques, such as laser ablation <cit.> or chemical etching <cit.>, that can manufacture the requisite highly curved micromirrors typically produce mirrors that are not perfectly spherical <cit.>. Secondly, in cavity optomechanics, the advantages conferred by low-mass mirrors encourage lightweight designs with limited diameter <cit.>. Finally, cavities with non-spherical mirrors offer useful optical capabilities, for example flexibility to tailor the optical mode <cit.> or utilise polarisation properties <cit.>.
As such experiments mature towards applications, it is important to calculate the required precision for transverse mirror alignment; for the spherical mirror case, there are simple methods for calculating the resonant modes under transverse mirror misalignment <cit.>, but these do not necessarily apply well to cavity mirrors with alternative shapes. This paper details extensions to the mode mixing method (Kleckner et al. <cit.>), allowing for certain mirror shapes to be encoded without numerical integration, and for arbitrary mirror shapes to be transversely misaligned without further integration. These advances greatly reduce, and potentially eliminate, the computation devoted to numerical integration, allowing for the impact of transverse misalignment in cavities with deformed mirrors to be investigated thoroughly.
First, we present an intuitive geometric optics approach to predicting the modes of cavities with misaligned and non-spherical mirrors. We then overview the existing mode mixing method before detailing extensions that greatly simplify the calculations required to model particular mirror shapes, and to include transverse mirror misalignment. We then discuss geometric transformations of cavity modes that can be used to interpret calculation outputs. Finally we compare these methods to existing techniques, demonstrating good agreement with published results for Gaussian-shaped mirrors in aligned configurations while additionally permitting the easy exploration of the impact of mirror misalignment. In a further publication <cit.>, we use the methods developed in this manuscript to examine the behaviour of cavities with spherical and Gaussian mirrors under transverse misalignment.
§ GEOMETRIC ANALYSIS OF MODE DEFORMATION
Before introducing our novel approach to mode mixing calculations in cavities with deformed mirrors and residual misalignment, we review the problem with a simple geometric optics picture that serves to highlight the physics of misaligned cavities in a more intuitive, albeit less complete, manner. In this `geometric' approach to determining the cavity modes, the propagation axis of the mode must intersect both mirrors normal to their surface, so that the mode is perfectly retroreflected. The phase curvature of the cavity mode at the intersection with each mirror is then matched to the local curvature of the mirror about the intersection point, as described in <cit.>. This condition determines the positions and sizes of the transverse waists of the cavity mode in both transverse directions.
We consider the features predicted when applying this approach to Fabry-Pérot cavities whose mirrors are transversely misaligned such that they are no longer coaxial. Although the method is applicable to very general mirror profiles, we will assume for simplicity that the mirror profile is a spherically symmetric depression, and we will illustrate the predicted phenomena using Gaussian shaped mirrors as a specific example, as depicted in Fig <ref>. Gaussian mirrors have a depth profile
f_G(x,y) = D[1-exp(-(x^2+y^2)/w_e^2)],
where x and y are Cartesian coordinates transverse to the mirror axis, D is the depth of the mirror, and w_e the 1/e waist. These parameters define the central radius of curvature R_c = w_e^2/(2D). By convention, the depth profile is zero at the centre of the depression, and positive as the concave mirror protrudes towards the centre of the cavity.
Figure <ref>(a) shows the case of perfect alignment. The predicted mode lies along both (colinear) mirror axes, with the wavefront curvature at each mirror matching the centre radius of curvature R_c. The corresponding fundamental Gaussian mode can be calculated using standard spherical cavity theory <cit.>. Note that this yields a poor approximation of the fundamental mode if the mirror shape deviates significantly from spherical over the scale of the mode.
If the cavity mirrors are transversely misaligned, as shown Fig. <ref>(b), the cavity mode axis must tilt so that it can intersect both mirrors at normal incidence. This means that the local radius of curvature of the mirrors at the position of intersection may differ from R_c, producing a mode with a different waist compared to a cavity with aligned mirrors. Moreover, the local radius of curvature may differ in the two transverse directions making the cavity mode an elliptical Gaussian beam.
To analyse these effects quantitatively, we construct a coordinate system in which the centres of the two mirrors, labelled A and B, are placed at coordinates z_A=L/2 and z_B=-L/2 respectively along the z axis, where L=z_A-z_B is the cavity length and the z axis is the cavity axis in the aligned configuration. The misalignment direction is taken to define the x-axis, and thus the two mirrors are displaced by ±Δ x/2 in the x-direction respectively, as shown in Fig. <ref>b). The point P_A=(x_P_A,y_P_A,z_P_A) where the cavity axis intersects mirror A can be calculated from the requirement that the cavity axis is locally orthogonal to the mirror; with x_m=x_P_A-Δ x/2 defined as the distance of point P_A from the centre of the mirror, the solution satisfies
Δ x/2 = (2Dx_m/w_e^2) e^(-x_m^2/w_e^2)[L/2+D(-1+e^(-x_m^2/w_e^2))] - x_m,
which can be solved numerically for x_m and then used to calculate the coordinates of P_A.
With the mode axis determined, the properties of the cavity mode can be simply derived. The effective length L_eff of the cavity mode between the intersection points on the two mirrors is
L_eff = 2√(x_P_A^2+z_P_A^2).
The radius of curvature of the mirror at P_A in x direction is
R_x = [1+f_G^'(x_m, 0)^2]^(3/2)/f_G^''(x_m, 0),
f_G^'(x_m, 0) = (2Dx_m/w_e^2)e^(-x_m^2/w_e^2),
f_G^''(x_m, 0) = (2D/w_e^2)e^(-x_m^2/w_e^2)[1-(√(2)x_m/w_e)^2],
where f_G^'(x,y) and f_G^''(x,y) are first and second derivatives of the mirror profile f_G(x,y) (Eq. <ref>) with respect to x. The radius of curvature in the y direction is
R_y = R_c e^(x_m^2/w_e^2)cosϕ + x_m sinϕ,
sinϕ = x_P_A/√(x_P_A^2+z_P_A^2),
where ϕ is the angle of the cavity mode axis with respect to the z axis. The central waists are
w_0,v = √(λ L_eff/2π)(2R_v/L_eff-1)^(1/4),
where v∈{x,y} specifies the transverse coordinate[The principal axes of the mode will be in the x and y directions because the transverse misalignment is x-directed.].
For large mirror misalignments, the mode axis may intersect the mirror sufficiently far from the central depression that the local profile is not concave, as shown Fig. <ref>(c). In this case the cavity is not able to stably confine a mode. For Gaussian mirrors, this occurs for misalignments Δ x exceeding Δ x_c at which x_m=± w_e/√(2).
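Continuing the sketch above (with the same assumed parameter values), the critical misalignment follows directly by evaluating the same relation at the edge of the concave region:

# Critical misalignment: the largest |Delta_x| for which the mode still meets the
# mirror on its concave part (|x_m| <= w_e/sqrt(2)). Reuses misalignment_of_xm().
x_edge = w_e / np.sqrt(2)
delta_x_c = 2 * abs(misalignment_of_xm(x_edge))
print(f"critical misalignment ~ {delta_x_c * 1e6:.1f} um")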
A numerical case study applying this procedure to a cavity with Gaussian-shaped mirrors is presented in Fig. <ref>. This shows that, as the mirrors are misaligned, the mode angle and the position of intersection on the mirror deviate increasingly from their aligned values. The off-axis intersection means that the local radius of curvature at the intersection points increases in both x and y directions. However, the change is much larger in x direction. At the critical misalignment Δ x_c (44.0 for the parameters of Fig. <ref>), the mode intersection point is sufficiently far from the centre of the mirror that the local mirror surface is not concave. This means that the cavity is unstable, and one would expect to observe a severe drop in finesse.
This `geometric' analysis of the fundamental mode limits itself to cavity modes with quadratic wavefront curvature, and therefore does not take account of the mirror shape beyond its local gradient and radius of curvature. Though the mirror surface can always be approximated as parabolic close enough to the intersection point, the geometric analysis becomes unsuitable when the mode is sufficiently wide on the mirror that higher-order components of the profile become significant. To calculate cavity modes for cases where the mirror profile is not perfectly parabolic about the mode intersection points, we must use a framework with the flexibility to model cavity modes with more general wavefront curvature profiles.
§ EXTENDED MODE MIXING METHOD
§.§ Mode Mixing Introduction
The mode mixing method <cit.> finds the stable modes of cavities with deformed mirrors by expressing propagating fields as linear superpositions of Gaussian modes. This method has been applied to microcavities with non-spherical mirrors, finding sporadic, severe drops in cavity finesse at particular cavity lengths due to resonant mixing of the basis modes <cit.>. Alternatively, mode mixing can be harnessed to increase coupling of cavity fields to single emitters <cit.>, introduce coupling between optical resonators <cit.> or tailor cavity modes to have desired properties <cit.>. Standard mode mixing theory is introduced in this section, before extensions to facilitate the calculations, particularly in the context of misaligned cavities, are presented.
In principle, a propagating electric field satisfies Maxwell's equations. Typically, these equations are simplified by employing the paraxial approximation, which assumes that the propagating field is beam-like and directed at small angles to the nominal z axis. Under these assumptions (see <cit.>, with which the notation presented is consistent), the electric field can be described via a scalar function u^± through
E(x,y,z,t) = ϵ u^±(x,y,z)exp(∓ ikz)exp(iω t),
where ω is the angular frequency, k=ω/c the wavevector, ϵ the constant linear polarisation of the field, which must lie in a plane perpendicular to the z-axis, and ± denotes propagation towards positive or negative z respectively. The function u^± satisfies the paraxial wave equation
∂/∂ z u^±(x,y,z) = ∓i/2k(∂^2/∂ x^2 + ∂^2/∂ y^2)u^±(x,y,z).
In the mode mixing formalism, an electromagnetic field propagating along the z axis according to Eq (<ref>) is expressed as a linear superposition of modes u^±_s(x,y,z), which themselves satisfy the paraxial equation, where s is an index over all the modes in the basis. An optical element is encoded as a matrix whose elements are scattering amplitudes from ingoing modes in the ingoing basis to outgoing modes in the outgoing basis. In the case of a concave mirror illuminated at normal incidence, the input and output basis states counterpropagate and the mirror profile imprints a differential phase across the wavefront due to the variation in propagation distance to and from the mirror. The components of a mirror matrix A (B) at positive (negative) z coordinate may be written
A_s,t = ∫_S_A u^-*_s(x,y,z_A) exp(2ik f_A(x,y)) u^+_t(x,y,z_A) dS,
B_s,t = ∫_S_B u^+*_s(x,y,z_B) exp(2ik f_B(x,y)) u^-_t(x,y,z_B) dS,
where k is the wavevector of the light, z_A (z_B) is the axial coordinate of the centre of the depression of mirror A (B), f_A (f_B) is the surface profile of mirror A (B) (with the convention that a positive profile points towards the cavity centre for both mirrors) and S_A (S_B) is the surface region of mirror A (B). The surface integrals are each performed in a single transverse plane at the axial coordinate for which f_A is zero. A schematic diagram illustrating how a mirror transfers amplitude from the input basis to the output basis is shown in Fig. <ref>. Cavity eigenmodes are specific linear superpositions of basis states that are preserved after one round trip of a cavity.
In this manuscript, the basis states used to express the cavity function u^±(x,y,z) are the Hermite-Gauss modes
u^(±)_n_x,n_y(x,y,z) = a(z) H_n_x(√(2)x/w(z))H_n_y(√(2)y/w(z))
exp[-(x^2+y^2)/w(z)^2] exp[∓ ik(x^2+y^2)/2R_u(z)]exp[± i(n_x+n_y+1)Ψ_G]
where
a(z) = (1/w(z))√(2/(π 2^n_x+n_y n_x!n_y!)),
w(z) = w_0 √(1+(z/z_0)^2),
z_0 = π w_0^2/λ,
R_u(z) = z(1+(z_0/z)^2), Ψ_G(z) = arctan(z/z_0),
where the wavelength λ=2π/k, H_i are the Hermite polynomials with n_x, n_y ∈ℕ the x and y transverse indices, and z_0 is the Rayleigh range of the beam. This basis is complete and orthonormal for each transverse plane separately. A cavity function expressed as a linear superposition of these basis modes retains its mode coefficients during propagation, as the propagation of the field is encoded in the z-dependence of the basis functions themselves.
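For reference, these basis functions can be evaluated directly; the sketch below is illustrative only (the ± propagation direction is passed as a flag) and uses the physicists' Hermite polynomials from scipy:

import numpy as np
from scipy.special import eval_hermite, factorial

def hg_mode(nx, ny, x, y, z, w0, k, direction=+1):
    # Hermite-Gauss envelope u^(+/-)_{nx,ny}(x, y, z); direction=+1 selects the upper sign.
    z0 = k * w0**2 / 2                      # Rayleigh range z0 = pi w0^2 / lambda, lambda = 2 pi / k
    w = w0 * np.sqrt(1 + (z / z0)**2)
    inv_R = z / (z**2 + z0**2)              # 1/R_u(z), finite at z = 0
    gouy = np.arctan2(z, z0)                # Gouy phase Psi_G(z)
    a = (1 / w) * np.sqrt(2 / (np.pi * 2.0**(nx + ny) * factorial(nx) * factorial(ny)))
    r2 = x**2 + y**2
    return (a * eval_hermite(nx, np.sqrt(2) * x / w) * eval_hermite(ny, np.sqrt(2) * y / w)
            * np.exp(-r2 / w**2)
            * np.exp(-direction * 1j * k * r2 * inv_R / 2)
            * np.exp(direction * 1j * (nx + ny + 1) * gouy))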
The round-trip matrix can be calculated from the two mirror matrices, accounting for the round trip phase accumulated during propagation:
M = BAe^-2ikL.
A mode |Ψ_i⟩ supported by the cavity is an eigenmode of the round-trip matrix M, and has corresponding eigenvalue γ_i from the eigenmode equation
M|Ψ_i⟩ = γ_i|Ψ_i⟩.
The complex γ_i has both phase and amplitude. The complex phase is the round-trip phase (modulo 2π) accrued by |Ψ_i⟩, which is zero on resonance. For typical applications where the length can be tuned freely to match a given resonance, the amplitude is more pertinent as it leads directly to the round-trip loss RT = 1-|γ_i|^2.
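In code, once the two mirror matrices are available (by whatever route), the round-trip diagonalisation is immediate; the helper below is a schematic illustration rather than the implementation used for the results in this work:

import numpy as np

def round_trip_modes(A, B, k, L):
    # Round-trip matrix M = B A exp(-2ikL); columns of `modes` are the eigenmodes.
    M = B @ A * np.exp(-2j * k * L)
    gammas, modes = np.linalg.eig(M)
    losses = 1 - np.abs(gammas)**2          # round-trip loss of each eigenmode
    phases = np.angle(gammas)               # round-trip phase (zero on resonance)
    return gammas, modes, losses, phases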
The eigenmodes of cavities with deformed mirrors can be determined by calculating elements of mirror matrices A and B through integration of Eq. (<ref>). A sensible approach to calculating the eigenmodes of cavities with transverse misalignment would therefore appear to be to calculate the geometrically expected mode as a function of misalignment using the theory of Sec. <ref>, and to use this mode to define the basis of the mode mixing calculation. Using this approach, the basis of the mode-mixing calculation is always chosen to suit the geometric model, and therefore it should be easier to faithfully capture the cavity eigenmodes with a relatively limited basis size.
However, performing calculations this way uses a different basis for every misalignment and cavity length. Therefore, all of the matrix elements are calculated for each cavity configuration separately. An alternative approach, discussed for the remainder of this section, uses matrix operations to misalign the mirrors without changing their calculation basis, thus removing the need to explicitly encode the mirror profiles for every misalignment.
§.§ Replacing Coordinates with Operators
The long-appreciated similarities between the Hermite-Gauss modes and simple harmonic oscillator wavefunctions <cit.> inspire the writing of transverse coordinates x (y) and transverse derivatives ∂/∂ x (∂/∂ y) in terms of the ladder operators a_x (a_y), where a_x (a_y) reduces the n_x (n_y) index of the Hermite Gauss mode by 1. Such operator methods have already been used to determine the eigenmodes of optical cavities under particular circumstances <cit.>. According to the conventions of the present analysis, the operators for x and ∂/∂ x in a given transverse plane are
x^(±)(z) = (U_G^(±)(z))^† [w(z)/2] (a_x+a^†_x) U_G^(±)(z),
∂/∂ x = 1/w_0(a_x-a^†_x),
(U_G)^(±)(z)_n_x',n_y',n_x,n_y = e^± iΨ_G(z)(n_x+n_y+1)δ_n_x',n_xδ_n_y',n_y,
where n_x and n_y (n_x' and n_y') are the x and y indices of the input (output) modes of the matrix respectively. While the ∂/∂ x operator does not depend on propagation direction and is constant across all transverse planes, the matrix elements of x depend upon the z coordinate and the propagation direction. The equivalent relations hold for y and ∂/∂ y, with a_y (a^†_y) replacing a_x (a^†_x). The derivations are detailed in App. <ref>.
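A possible matrix realisation of these operators on a truncated one-dimensional basis is sketched below; for simplicity it is written in the waist plane z = 0, where Ψ_G vanishes and U_G reduces to the identity. The basis size and waist are assumed values:

import numpy as np

N = 20                                       # basis truncation (assumed)
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # annihilation operator a_x: <n-1|a|n> = sqrt(n)
adag = a.conj().T

w0 = 3e-6                                    # central waist of the basis (assumed value, metres)
x_op = 0.5 * w0 * (a + adag)                 # x at z = 0, where w(z) = w0 and U_G = identity
ddx_op = (1.0 / w0) * (a - adag)             # d/dx (independent of z)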
A mirror imprints a phase front onto and reflects the ingoing mode (as expressed in Eq. (<ref>)). To construct mirror matrices in an operator-based approach, it is conceptually simpler to consider this process sequentially (taking mirror A as the example case): First, the phase front exp(2ikf_A) is imprinted on the input basis, where the phase is no longer a complex function of coordinates x and y, but an operator acting on the input basis as a result of its composition in the coordinate operators x and y. Secondly, the reflected field, thus far expressed through coefficients in the input basis, is transferred to coefficients in the output basis through operator
U^+→ - = (U_G^(+))^2 exp(-2ik(x^(+))^2 + (y^(+))^2/2R_u(z_A)),
for mirror A and
U^-→ + = (U_G^(-))^2 exp(-2ik(x^(-))^2 + (y^(-))^2/2R_u(z_B)),
for mirror B, where R_u(z_A) and R_u(z_B) depend upon the chosen basis. This basis is most conveniently chosen so that the wavefront radius of curvature R_u(z_A) (R_u(z_B)) matches the radius of curvature R_A (R_B) of the quadratic component of the profile of mirror A (B). This choice uniquely specifies the basis, and is assumed for the remainder of the text. The mirror matrix A can then be expressed
A = (U_G^(+))^2 exp(-2ikΔ_A^(+)),
where Δ_A = f_A - (x^2 + y^2)/2R_A is the deviation of the profile of mirror A from the ideal parabolic surface[In the paraxial approximation, the mathematically ideal mirror profile is parabolic. Outside this approximation, a spherical mirror is often a better match for the phase fronts <cit.>.]. If Δ_A (Δ_B) can be evaluated as a matrix without taking integrals, the mirror matrix A (B) can also be obtained without integrals, as discussed later in Sec. <ref>.
§.§ Calculating Polynomial Mirror Surface Profiles
For the case where Δ can be written as a power series in x and y, it is only necessary to calculate matrices of the various powers of x and y and sum each polynomial term with the appropriate coefficient. For the case of a parabolic distortion, the mirrors remain parabolic but with an adjusted radius of curvature, and therefore the cavity eigenmodes should match standard results. We have used this to test and validate our approach.
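A schematic of this construction, building on the one-dimensional operators sketched above, could look as follows; the polynomial coefficients used at the end are purely illustrative:

import numpy as np
from numpy.linalg import matrix_power

I = np.eye(N)
X = np.kron(x_op, I)                         # x operator on the 2D (tensor-product) basis
Y = np.kron(I, x_op)                         # y operator has the same 1D form

def poly_deviation(coeffs):
    # coeffs: {(p, q): c_pq} for Delta = sum c_pq x^p y^q (illustrative helper).
    Delta = np.zeros((N * N, N * N), dtype=complex)
    for (p, q), c in coeffs.items():
        Delta += c * matrix_power(X, p) @ matrix_power(Y, q)
    return Delta

# e.g. a small quartic correction to an otherwise parabolic mirror (arbitrary coefficients):
Delta = poly_deviation({(4, 0): 1e9, (0, 4): 1e9, (2, 2): 2e9})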
§.§ Calculating the Gaussian Surface Profile
The Gaussian surface profile can also be expressed in the Hermite-Gauss basis without taking integrals, but this requires a different approach, inspired by the appendix of <cit.> and detailed in App. <ref>. The matrix elements of a unit Gaussian profile with 1/e waist w_e in a one-dimensional Hermite-Gauss basis at axial coordinate z can be written
exp[-x^2/w_e^2]_m',m^(±)(z) = U_G^(±)(z)^†(1-χ)^-(m'+m+1/2)(χ/2)^(m'-m)/2√(m'!m!)∑_k=0^[m/2](χ^2/4)^k/[((m'-m)/2+k)!k!(m-2k)!] U_G^(±)(z),
with
χ = -w(z)^2/(2w_e^2),
where m (m') is the index of the ingoing (outgoing) mode, (m'-m)/2 is an integer and m' ≥ m. If (m'-m)/2 is not an integer, the matrix element is zero. If m > m', the symmetry exp[-x^2/w_e^2]_m',m = exp[-x^2/w_e^2]_m,m' should be used. The matrix U_G accounts for the Gouy phases of the basis states, as originally defined in Eq. (<ref>). The two-dimensional profile is obtained from the one-dimensional matrices by a simple tensor product.
The deviation matrix Δ of a Gaussian with depth D from the ideal parabolic surface is obtained from the matrix of the unit profile through
Δ^(±) = D(1-exp[-(x^(±))^2+(y^(±))^2/w_e^2])-(x^(±))^2+(y^(±))^2/2R,
with
D = w_e^2/2R,
where R is the radius of curvature at the centre of the Gaussian. The use of a single R in Eq. <ref> and Eq. <ref> imposes that the wavefront radius of curvature of the basis states matches the mirror radius of curvature in the central depression.
§.§ Taking the Exponent of the Surface Profile
Once the surface profile deviation Δ is expressed as a matrix, the surface profile phase matrix exp(-2ikΔ), which constitutes the non-trivial component of the mirror matrix (Eq. <ref>), can be calculated. It is tempting to calculate exp(-2ikΔ) through matrix exponentiation of -2ikΔ, but this method cannot model losses; as Δ is a Hermitian matrix, the matrix exponent is unitary, and therefore every eigenvalue of a mirror matrix obtained through matrix exponentiation has unit modulus, meaning that the mirror is lossless. No matter how large a basis is chosen, Δ never models processes representing transfer from inside to outside the basis, and therefore no mechanism exists for power to leave the cavity.
To take the exponential in a way that can model losses, a procedure is used which is conceptually similar to the non-Hermitian Hamiltonian approach to simulating quantum systems that is commonly used in cavity quantum electrodynamics <cit.>. The matrix Δ is first evaluated in a basis larger than the intended simulation basis, before being truncated to the size of the simulation basis according to specific rules: Each element of Δ represents a transfer from an input state to an output state. If the input state lies within the simulation basis, but the output state is outside, that element encodes loss. Therefore, for each input state, the sum over all the magnitudes of transfers to states outside the basis is calculated, evaluating the amplitude leakage from the input basis state to outside the simulation basis. This summed rate is then added as a negative imaginary number onto the diagonal element of the input state. When the matrix exponential is then taken, this diagonal imaginary component causes loss rather than amplitude transfer.
Expressed mathematically, for a larger basis containing n_x and n_y up to maximum values of n_x^H and n_y^H respectively, and the smaller simulation basis up to maximum values of n_x^NH and n_y^NH respectively, components of the non-Hermitian Δ matrix are written
Δ^(±)_n_x',n_y',n_x,n_y = Δ^H(±)_n_x',n_y',n_x,n_y, n_x',n_x≤ n_x^NH, n_y',n_y≤ n_y^NH, δ_n_x',n_xδ_n_y',n_y = 0,
Δ^(±)_n_x,n_y,n_x,n_y = Δ^H(±)_n_x,n_y,n_x,n_y + i ∑_n_x' = (n_x^NH+1)^n_x^H ∑_n_y' = (n_y^NH+1)^n_y^H |Δ^H(±)_n_x',n_y',n_x,n_y|,
where Δ^H(±) is the Hermitian surface profile deviation matrix evaluated on the larger basis. The matrix exponential of the non-Hermitian Δ is then taken to find surface profile phase matrix exp(-2ikΔ).
While this process is not mathematically identical to finding the true matrix exp(-2ikΔ), in practice, this procedure produces almost identical loss results to numerical integration for most cavity configurations, as shown later in Sec. <ref>.
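The truncation rule can be condensed into a few lines. The sketch below follows the verbal description above (loss entering as a negative imaginary diagonal term) and takes as input a Hermitian deviation matrix evaluated on the larger basis together with a boolean mask marking the states retained in the simulation basis; it is a schematic rendering, not the authors' code:

import numpy as np
from scipy.linalg import expm

def lossy_mirror_phase(Delta_H, inside, k):
    # Delta_H: Hermitian deviation matrix on the larger basis; inside: boolean mask of
    # the simulation-basis states; returns the surface-profile phase matrix exp(-2ik Delta).
    outside = ~inside
    Delta = Delta_H[np.ix_(inside, inside)].astype(complex)
    # Summed magnitude of scattering from each retained input state to discarded states,
    # folded back onto the diagonal as a lossy (negative imaginary) term.
    leak = np.sum(np.abs(Delta_H[np.ix_(outside, inside)]), axis=0)
    Delta[np.diag_indices_from(Delta)] += -1j * leak
    return expm(-2j * k * Delta)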
§.§ Translating the Mirror
With the surface profile phase matrix calculated, it is possible to evaluate both mirror matrices and thus obtain the eigenmodes for a cavity. To investigate the impact of transverse misalignment between the mirrors, the mirror matrices could be calculated for every misalignment separately. An alternative, discussed in this section, is to evaluate the mirror matrix in one transverse position (most conveniently the aligned configuration where any symmetries of the mirror profile can be exploited) and use translation operators to model transverse misalignment without calculating any further mirror matrix elements directly.
As depicted in Fig. <ref> the action on a given input field of a mirror translated by δ_x in the x direction is equivalent to the action of the untranslated mirror on the same input field displaced by -δ_x, because these two cases describe the same physical situation for different choices of origin. This equivalence means that the matrix of the translated mirror can be calculated by taking the matrix of the untranslated mirror and translating the input and output bases in the compensating direction.
The one-dimensional operator that translates the input and output bases is
T(δ) = exp(δ∂/∂ x),
with elements
T(δ)_m',m = √(m!/m'!)α^m'-m e^-α^2/2L_m^m'-m(α^2) , m' ≥ m,
T(δ)_m',m = √(m'!/m!)(-α)^m-m' e^-α^2/2L_m'^m-m'(α^2) , m > m',
where
α = δ/w_0,
where m (m') is the input (output) index in the one-dimensional basis and δ the translation effected by the operator. This operator is identical to the displacement operator of the simple harmonic oscillator <cit.>, owing to the close similarity between the simple-harmonic and Hermite-Gauss bases. As the translation operator has the same elements in the input and output bases, translating a mirror with matrix C by δ_x can be achieved through
C → T_x^†(-δ_x) C T_x(- δ_x),
where T_x is formed from the tensor product of the one-dimensional translation in the x direction and the identity in the y direction. If scanning the misalignment of the mirrors, the translation matrix need only be calculated for a single increment, and then successively applied to generate all of the mirror matrices. In this way, the mirror profile and translation step matrices both need only be calculated once.
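The translation matrix and the conjugation of a mirror matrix can be sketched as follows. The code uses the standard displacement-operator matrix elements (generalised Laguerre polynomials whose degree is the lower of the two indices), which is how the expressions above are read here:

import numpy as np
from scipy.special import eval_genlaguerre, factorial

def translation_matrix(delta, w0, N):
    # One-dimensional basis translation T(delta) with alpha = delta / w0.
    alpha = delta / w0
    T = np.zeros((N, N))
    for mp in range(N):
        for m in range(N):
            lo, hi, d = min(mp, m), max(mp, m), abs(mp - m)
            val = (np.sqrt(factorial(lo) / factorial(hi))
                   * np.exp(-alpha**2 / 2) * eval_genlaguerre(lo, d, alpha**2))
            T[mp, m] = val * (alpha**d if mp >= m else (-alpha)**d)
    return T

def translate_mirror(C, delta_x, w0, N):
    # C -> T_x(-delta_x)^dagger C T_x(-delta_x), with T_x = T(1D) tensor identity (y).
    Tx = np.kron(translation_matrix(-delta_x, w0, N), np.eye(N))
    return Tx.conj().T @ C @ Tx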
§.§ Mode transformations
In addition to the x-translation operator T_x discussed in the previous section, further transformation operators can be specified. Here, we present transformation operators to change the central waist of a mode, and to change its propagation angle. In the context of the current work, these operators are used not to calculate the cavity eigenmodes, but to evaluate geometric properties of these eigenmodes, as will be discussed in Sec. <ref>.
§.§.§ Changing the Mode Waist
To calculate the coefficients of a mode with a different centre waist, we use the property that the Hermite-Gauss modes have the same functional form as the simple harmonic oscillator wavefunctions at the axial centre of the mode (z=0). Therefore, the operator that changes the central waist of the mode is the same as the operator that rescales the coordinate operators of the simple harmonic oscillator, namely the standard squeeze operators. The operator that changes the waist in the x-direction from w_0 to w_1 is
S_x (w_1/w_0) = exp[-1/2r(a_x^2 - (a_x^†)^2)],
r = -log[w_1/w_0].
The use of this operator to expand the waist of a fundamental mode is depicted in Fig. <ref>. The same form of operator applies in the y-direction for creation (annihilation) operator a^†_y (a_y).
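In the truncated basis the squeeze operator is again a plain matrix exponential; a minimal sketch, reusing the annihilation matrix a and waist w0 assumed in the earlier snippet:

import numpy as np
from scipy.linalg import expm

def waist_change_operator(w1, w0, a):
    # S_x(w1/w0) = exp[-(r/2)(a^2 - (a^dagger)^2)] with r = -log(w1/w0).
    r = -np.log(w1 / w0)
    adag = a.conj().T
    return expm(-0.5 * r * (a @ a - adag @ adag))

S = waist_change_operator(1.2 * w0, w0, a)   # e.g. expand the central waist by 20%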
§.§.§ Changing the Mode Angle
Finding the transformation operator to rotate the direction of propagation of the field is considerably more involved. This is because rotating an optical field E(x,y,z,t) is not equivalent to rotating all of the basis states {u_nm(x,y,z)} due to two main complications. Firstly, as the mode envelope is rotated, the implicit axial phase exp(∓ ikz) must rotate with it. This `hidden' component will turn out to be the quantitatively dominant component of the rotation matrix. Secondly, while the optical field is a vector quantity, mode mixing is a scalar theory, with the polarisation ϵ factoring out. The rotation operator in the mode mixing formalism rotates only the scalar field, whereas in a vector theory the rotation operator would also rotate the direction of the vector field.
With those complications noted, the operator to rotate the propagation direction can be derived. We consider an optical field E(x,y,z,t), which is a function of coordinates x, y and z. Next we define a new Cartesian coordinate system in which the axes have been rotated about the y-axis to yield
x' = xcos(ϕ_x) + z sin(ϕ_x), y'=y, z' = zcos(ϕ_x) - x sin(ϕ_x).
The same optical field can be expressed in the new coordinate system through the function E'(x',y',z',t). The function E' encodes the same field as E, but, in its basis, the propagation direction is rotated towards the x' axis in the x'-z' plane. Therefore the transformation that takes the function E to E' is the operator for the propagation direction rotation, provided the coordinate arguments to both functions are the same. The coordinate systems used to derive the propagation direction-rotation operator, and the application of this operator to rotate the propagation direction of a mode, are depicted in Fig. <ref>.
The equivalence of E and E' in real space means that
E'(x',y',z',t) = E(x,y,z,t).
Now, we assume that the rotation angle is small, and thus denoted δϕ_x. As, in the conventions of this manuscript, the mode coefficients are not functions of the axial coordinate, any axial coordinate z' could be chosen, but for algebraic convenience we choose the z'=0 plane. A first order approximation yields
x=x', y =y', z=x'δϕ_x,
E'(x',y',z'=0,t) = E(x=x',y=y',z=x'δϕ_x,t).
Remembering that the electric field E is described by mode function u^(±) through Eq. (<ref>) (and equivalently for E' and u'^(±))
u'^(±)(x',y',z'=0) = u^(±)(x=x', y=y', z=x'δϕ_x)exp(∓ ik(z=x'δϕ_x)).
Using the first order expansions in δϕ_x we obtain
u'^(±)(x,y,0) = [1+xδϕ_x (∓ ik + ∂/∂ z)]u^(±)(x,y,0),
where x'=x and y'=y have been used to unify the function arguments. This therefore expresses the transformation of the basis functions associated with infinitesimal rotation of the electric field.
For finite rotations, the infinitesimal operator can be applied successively, and existing results can make the final form more useful. Firstly, the x-operator in the z=0 plane is x|_z=0=(w_0/2)(a_x+a_x^†) (see Eq. (<ref>)). Secondly, the basis functions satisfy the paraxial equation (Eq. (<ref>)), and substituting the transverse derivative operators from Eq. (<ref>) leads to the xz propagation direction operator
P_xz^(±)(ϕ_x) = exp[∓ iϕ_xkw_0/2{(a_x + a_x^†)(1+1/(kw_0)^2[(a_x - a^†_x)^2+ (a_y - a^†_y)^2])}],
where the exponential is evaluated using the methods introduced in Sec. <ref>. Extending this form to more general changes to the propagation direction requires care, but, for the purposes of the analysis in this manuscript, the direction of transverse misalignment defines the x axis, and therefore the propagation direction must lie in the xz plane.
§.§ Calculating mode angles
Finally, before effecting the mode rotations of Sec. <ref>, it is often useful to determine the propagation angle of the mode, which can be determined by calculating the expectation value of the angle operator
ϕ_x^(±) = (∓ i/k)∂/∂ x,
= (∓ i/(k w_0))(a_x-a^†_x),
which is valid in the paraxial approximation. The eigenstates of this operator are plane waves propagating at angle ϕ_x to the z axis in the xz plane. This capability is useful to understand properties of resonant modes for misaligned cavity configurations.
§ DEMONSTRATING THE METHOD
§.§ Selecting the mode of interest
The mode mixing method produces a set of cavity eigenmodes {|Ψ_i⟩} and corresponding eigenvalues {γ_i}. The important data within these sets are the mode profile and round trip loss of the particular eigenmode that will be used in the application at hand, and therefore a `mode of interest' should be identified. For the majority of applications using spherical cavities, the fundamental mode is more useful than the higher order transverse modes. When the cavity mirrors are transversely misaligned or non-spherical, we expect the propagation angle and central waist of the fundamental mode to change (see Sec. <ref>). Therefore, for this investigation, the eigenmode chosen is the |Ψ_i⟩ that maximises the overlap |⟨Ψ_i|Ψ_0, 0^G⟩|^2 with the geometrically expected mode denoted |Ψ_0, 0^G⟩.
The geometric expectation |Ψ_0, 0^G⟩ has thus far been parameterised through the propagation direction and the central waists in two principal directions, whereas the cavity eigenmodes {|Ψ_i⟩} are expressed as coefficients in a basis propagating along the z axis. To find the overlap of the cavity eigenmodes with the expected mode, the cavity eigenmodes were expressed in the same basis as the expected mode by first expanding/contracting in the two transverse directions independently to set the waists, and then rotating the mode in the xz plane to set the propagation direction, according to the methods of Sec. <ref>.
§.§ Comparing to standard methods
Results obtained using the procedure for constructing mirror matrices from operators (presented in Sec. <ref>) were compared with those found in the literature for the case of a Gaussian-shaped mirror (Fig. <ref>(a)-(d)). The round-trip loss was calculated as a function of cavity length for three different Gaussian waist values using both methods. Due to the different calculation bases employed by the methods, the results are not expected to be identical, but should agree up to convergence effects. As shown in Fig. <ref>(e) and (f), for the vast majority of cases the methods predict round-trip losses with a fractional difference between one hundredth and unity, discrepancies that are practically indiscernible amidst the order-of-magnitude variations present in the data. The exceptions are highly concentric configurations, where there is a substantial difference between the predicted losses.
The methods presented for translating the mirror matrices (Sec. <ref>) enable the data generated for aligned configurations to be simply extended to misaligned configurations (Fig. <ref> g)-i)). This capability allows for the round trip loss of cavities with transverse misalignment to be properly simulated, unveiling a rich structure of lossy `bands' in the length-misalignment parameter space that split into multiplets as the misalignment increases. Many of these bands can be traced back to loss peaks in the length scan of the aligned configuration, but some (such as the high loss bands in Fig. <ref>h) appearing to originate from small misalignment at L/R=1.5m for D=5m) cannot. This implies that residual misalignment introduces mechanisms of loss that do not feature for perfectly aligned cavities. A detailed discussion of the physics of misaligned cavities is beyond the scope of this paper, but will instead be the subject of a future publication.
§ CONCLUSION
We have developed methods to calculate the modes of cavities with non-spherical and transversely misaligned mirrors. We used a classical ray model to predict the mode axis and central waist of the resonant mode of a misaligned cavity, using these results to understand the output of a more complete mode mixing method. This method is inspired by existing techniques that exploit well-known operator forms and transformation matrices to model mode mixing in cavities with different mirror profiles, and to simply extend these models to include mirror misalignment.
The theory introduced in this paper is applicable to a variety of mode-mixing scenarios. Firstly, for particular mirror shapes where the deviation from the ideal parabolic profile can be expressed as a sum of polynomials in the transverse coordinates, or as a Gaussian function, the mode mixing matrix is calculated using analytical results and a matrix exponential, removing the need for any overlap integrals of the basis functions with the mirror profile to be taken. Secondly, once the mirror matrix has been obtained, the mirror can be translated using operators (which also do not require integrals to be calculated). This allows for the cavity mode structure under transverse misalignment to be determined in a simple manner and, in our experience, more quickly than with conventional techniques.
We anticipate the methods developed in this work will find application in the simulation of optical resonators with non-spherical mirrors, particularly for cases where the transverse misalignment of the mirrors is not negligible. An analysis of cavities with Guassian-shaped mirrors utilising the methods of this work will be the subject of a future publication.
This work was funded by the UK Engineering and Physical Sciences Research Council Hub in Quantum Computing and Simulation (EP/T001062/1) and the European Union Quantum Technology Flagship Project AQTION (No. 820495). The authors would like to acknowledge the use of the University of Oxford Advanced Research Computing (ARC) facility (http://dx.doi.org/10.5281/zenodo.22558) in carrying out this work. Data underlying the results presented in this paper are available in Ref. [DOI added on acceptance]. The code that generated the data may be obtained from the authors on reasonable request.
§ REFERENCES

[1] A. E. Siegman, Lasers (University Science Books, 1986).
[2] S. Buckley, K. Rivoire, and J. Vučković, Reports on Progress in Physics 75, 126503 (2012).
[3] L. C. Flatten, Z. He, D. M. Coles, A. A. P. Trichet, A. W. Powell, R. A. Taylor, J. H. Warner, and J. M. Smith, Scientific Reports 6, 33134 (2016).
[4] F. Li, Y. Li, Y. Cai, P. Li, H. Tang, and Y. Zhang, Advanced Quantum Technologies 2, 1900060 (2019).
[5] D. Hunger, T. Steinmetz, Y. Colombe, C. Deutsch, T. W. Hänsch, and J. Reichel, New Journal of Physics 12, 065038 (2010).
[6] M. Trupke, E. A. Hinds, S. Eriksson, E. Curtis, Z. Moktadir, E. Kukharenka, and M. Kraft, Applied Physics Letters 87, 211106 (2005).
[7] A. Muller, E. B. Flagg, J. R. Lawall, and G. S. Solomon, Opt. Lett. 35, 2293 (2010).
[8] M. Uphoff, M. Brekenfeld, G. Rempe, and S. Ritter, New Journal of Physics 17, 013053 (2015).
[9] G. Biedermann, F. Benito, K. Fortier, D. Stick, T. Loyd, P. Schwindt, C. Nakakura, R. Jarecki Jr, and M. Blain, Applied Physics Letters 97, 181110 (2010).
[10] M. Aspelmeyer, T. J. Kippenberg, and F. Marquardt, Rev. Mod. Phys. 86, 1391 (2014).
[11] D. Kleckner, W. Marshall, M. J. A. de Dood, K. N. Dinyari, B.-J. Pors, W. T. M. Irvine, and D. Bouwmeester, Phys. Rev. Lett. 96, 173901 (2006).
[12] D. V. Karpov, S. Kurdiumov, and P. Horak, arXiv:2202.03359 (2022).
[13] B. T. Walker, B. J. Ash, A. A. P. Trichet, J. M. Smith, and R. A. Nyman, Opt. Express 29, 10800 (2021).
[14] F. M. Buters, M. J. Weaver, H. J. Eerkens, K. Heeck, S. de Man, and D. Bouwmeester, Phys. Rev. A 94, 063813 (2016).
[15] S. Gao, J. A. Blackmore, W. J. Hughes, T. H. Doherty, and J. F. Goodwin, Phys. Rev. Appl. 19, 014033 (2023).
[16] D. Kleckner, W. T. M. Irvine, S. S. R. Oemrawsingh, and D. Bouwmeester, Phys. Rev. A 81, 043814 (2010).
[17] W. J. Hughes, T. H. Doherty, J. A. Blackmore, P. Horak, and J. F. Goodwin, arXiv:2306.05894 (2023).
[18] J. L. Blows and G. Forbes, Opt. Express 2, 184 (1998).
[19] A. Yariv, Quantum Electronics (Wiley, New York, 1991).
[20] J. Benedikter, T. Hümmer, M. Mader, B. Schlederer, J. Reichel, T. W. Hänsch, and D. Hunger, New Journal of Physics 17, 053051 (2015).
[21] J. Benedikter, T. Moosmayer, M. Mader, T. Hümmer, and D. Hunger, New Journal of Physics 21, 103029 (2019).
[22] N. Podoliak, H. Takahashi, M. Keller, and P. Horak, Journal of Physics B: Atomic, Molecular and Optical Physics 50, 085503 (2017).
[23] D. V. Karpov and P. Horak, Physical Review A 105, 023515 (2022).
[24] D. V. Karpov and P. Horak, New Journal of Physics 24, 073028 (2022).
[25] L. C. Flatten, A. A. P. Trichet, and J. M. Smith, Laser & Photonics Reviews 10, 257 (2016).
[26] N. Barré, M. Romanelli, M. Lebental, and M. Brunel, European Journal of Physics 38, 034010 (2017).
[27] D. Stoler, Journal of the Optical Society of America 71, 334 (1981).
[28] G. Nienhuis and L. Allen, Phys. Rev. A 48, 656 (1993).
[29] S. J. M. Habraken and G. Nienhuis, Phys. Rev. A 75, 033819 (2007).
[30] M. Jaffe, L. Palm, C. Baum, L. Taneja, and J. Simon, Phys. Rev. A 104, 013524 (2021).
[31] M. P. van Exter, M. Wubs, E. Hissink, and C. Koks, Phys. Rev. A 106, 013501 (2022).
[32] S. Varró, New Journal of Physics 24, 053035 (2022).
[33] A. Kuhn, in Engineering the Atom-Photon Interaction (Springer, 2015), pp. 3–38.
[34] K. E. Cahill and R. J. Glauber, Phys. Rev. 177, 1857 (1969).
[35] H. Laabs and A. T. Friberg, IEEE Journal of Quantum Electronics 35, 198 (1999).
[36] J. Schwinger and B. Englert, Quantum Mechanics: Symbolism of Atomic Measurements (Springer, 2001).
§ DERIVATION OF OPERATORS IN HERMITE GAUSS BASIS
To derive the operator forms of x and ∂/∂ x, we start by comparing the mode amplitude of the basis states introduced in Eq. (<ref>)
u^(±)_n_x,n_y(x,y,z) = a(z) H_n_x(√(2)x/w(z))H_n_y (√(2)y/w(z))
exp[-(x^2+y^2)/w(z)^2] exp[∓ ik(x^2+y^2)/2R_u(z)]exp[± i(n_x+n_y+1)Ψ_G],
with the mode of the quantum harmonic oscillator of mass m and resonant frequency Ω
ψ^HO_n_x,n_y(x,y) = H_n_x(√(mΩ/ħ)x)H_n_y(√(mΩ/ħ)y)exp[-mΩ(x^2+y^2)/2ħ].
The quantum harmonic oscillator has operators
x^HO = √(ħ/2mΩ)(a_x^HO+(a_x^HO)^†),
∂/∂ x^HO = √(mΩ/2ħ)(a_x^HO-(a_x^HO)^†),
shown in terms of the harmonic annihilation operator a_x^HO <cit.>. In the case that the parameters of the harmonic oscillator and Gaussian mode are related by mΩ/2ħ=1/w^2, the respective wavefunctions are related by
u^±_n_x,n_y(x,y) = ψ(x,y)^HO_n_x,n_yexp[∓ ikx^2+y^2/2R]exp[± i(n_x+n_y+1)Ψ_G].
Therefore, the x operator in the cavity mode basis set can be found in terms of the x operator in the harmonic oscillator basis
x^(±)_n_x',n_y',n_x,n_y = ∫_S u^±*_n_x', n_y'(x,y) x u^±_n_x,n_y(x,y) dxdy,
x^(±)_n_x',n_y',n_x,n_y = exp[± iΨ_G(n_x+n_y-n_x'-n_y')]∫_S ψ^* HO_n_x',n_y' (x,y)x ψ^HO_n_x,n_y(x,y) dxdy,
x^(±)_n_x',n_y',n_x,n_y = x^HO_n_x',n_y',n_x,n_yexp[± iΨ_G(n_x+n_y-n_x'-n_y')].
The analogy between the wavefunctions then leads to
x^(±) = (U_G^(±))^† [w/2] (a_x+a^†_x)U_G^(±),
(U_G)^(±)_n_x',n_y',n_x,n_y = δ_n_x', n_xδ_n_y',n_yexp[± iΨ_G(n_x+n_y+1)],
where a_x is the annihilation operator in the x-direction for the mode functions u^(±)_n_x,n_y(x,y,z), which acts equivalently to the a_x^HO operator on the harmonic oscillator wavefunctions.
A similar approach can be used for the ∂/∂ x operator
∂/∂ x^(±)_n_x',n_y',n_x,n_y = ∫_S u^±*_n_x',n_y'(x,y) ∂/∂ x u^±_n_x,n_y(x,y) dxdy,
∂/∂ x^(±)_n_x',n_y',n_x,n_y = exp[iΨ_G(n_x+n_y-n_x'-n_y')] ×
∫_S ( ψ^*HO_n_x',n_y'(x,y) ∂/∂ xψ^HO_n_x,n_y(x,y)) +( ψ^*HO_n_x',n_y'(x,y) (∓ ikx/R)ψ^HO_n_x,n_y(x,y) ) dxdy,
∂/∂ x^(±) = (U_G^(±))^†(∂/∂ x^HO∓ik/Rx^HO)U_G^(±),
resulting in the expression
∂/∂ x=(U_G^(±))^†(1/w(a_x-a^†_x) ∓ iwk/2R(a_x+a^†_x))U_G^(±),
which can be converted algebraically to the more convenient form
∂/∂ x=(1/w_0(a_x-a^†_x)).
§ FINDING THE GAUSSIAN PROFILE MATRIX
The formula for the Gaussian profile surface matrix in the Hermite-Gauss basis (Eq. (<ref>)) is calculated following the method of appendix A of <cit.>. To evaluate a unit-depth one-dimensional Gaussian as a matrix, we start by expanding using the transverse coordinate operator of Eq. (<ref>)
exp[-x^2/w_e^2] = (U_G^(±))^†exp[χ(1/2(a)^2+1/2(a^†)^2+1/2(aa^†+a^†a))] U_G^(±),
= (U_G^(±))^†exp[χ(K_++K_-+2K_0)] U_G^(±),
χ = -w^2/2w_e^2,
where K_+, K_- and K_0 are (1/2)(a^†)^2, (1/2)(a)^2, and (1/4)(aa^†+a^†a) respectively, and the annihilation operator a represents a_x (a_y) for the x (y) directed Gaussian function. Now use that K_+, K_- and K_0 have the same commutation relations as -σ_+, σ_- and (1/2)σ_3, where
-σ_+ = [ 0 -1; 0 0 ], σ_- = [ 0 0; 1 0 ], 1/2σ_3 = 1/2[ 1 0; 0 -1 ].
Next, we equate the coefficients of the exponent and normal-ordered exponents of the 2-dimensional matrices, where the normal form has coefficients ζ, ζ ' and η
exp[ζ(-σ_+)] exp[-η(1/2σ_3)] exp[ζ ' (σ_-)] = exp[χ(-σ_+) + χ(σ_-) + 2χ(1/2σ_3)].
Expanding the two sides of this equation gives
ζ = ζ ' = χ/1-χ, η = 2ln(1-χ).
The 2-dimensional matrices are substituted back for creation and annihilation operators to obtain
exp[χ(K_++K_-+2K_0)] =
exp[χ/(1-χ)1/2(a^†)^2] exp(-2ln(1-χ)1/4(a a^† + a^†a ))exp(χ/(1-χ)1/2(a)^2).
The normal operator form can be evaluated simply in the Hermite Gauss basis to obtain the result quoted in Sec. <ref>:
exp(-x^2/w_e^2)^(±)_m',m =
(U_G^(±))^†(1-χ)^-(m'+m+1/2) (χ/2)^(m'-m)/2√(m'!m!)∑_k=0^[m/2](χ^2/4)^k/[((m'-m)/2+k)!k!(m-2k)!] U_G^(±),
χ = -w(z)^2/(2w_e^2).
Strategic Communication and Deliberation on Climate Change of different Actor Groups using Twitter
Julian Dehne, Valentin Gold
arXiv: http://arxiv.org/abs/2306.07144v1 (cs.SI), 12 June 2023
Strategic communication on Twitter is compared between different actor groups with regard to the topic of climate change. The main hypothesis is that different actor groups will be more or less central in the reply-trees depending on their strategic interests, which are based on their profession or organizational affiliation.
§ INTRODUCTION
Following the SARS-CoV-2 epidemic, a lot of attention has been drawn to the declining effectiveness of institutionalized communication in liberal democracies in the case of issues that have a complex scientific background. In part this can be attributed to a fractured public forum and reduced trust in the credibility of experts. But there is a need to measure this perceived trend, or it should be disregarded as speculation. In order to formalize a relationship that describes a trend for this institutionalised communication, the sender of the communication needs to be defined more narrowly, and the expected effect, too. For this purpose the concept of strategic communication is introduced. Strategic communication <cit.> needs to be conceived not only in terms of its penetration but also in terms of how stable the newly created information bubbles are. Another factor is the complexity of the scientific underpinnings of the message that needs to be heard. For this reason the focus will be on strategic scientific communication on politically polarizing topics (such as vaccination, migration, climate change, …) that are complex but need a functioning discursive space in order to allow for social change. In order to narrow down the actors and relevant language, climate change was picked as a case study.
After conceptualizing the starting point as strategic communication, the range of the effect needs to be mapped out and reduced to an observable subset of the public sphere. In order to access the quantifiable part of the public communication sphere, social media text data is used (for example from Twitter, Reddit). In the end, the most important question is what constitutes successful strategic communication in the context specified (social media, polarizing scientific complex topics, institutionalized actors) if this phenomenon is to be measured at all.
One important part of this question is how the success of strategic communication should be conceived. <cit.> argues that the strategic turn has reduced participation to a means and not a goal in itself. For example, a politician would use scientific facts rhetorically, in order to achieve some hidden agenda. Whether intended or not, it will be assumed that the actor's strategic interest lies in raising awareness of scientific facts, motivating social change according to these facts, and stimulating discourse in order to legitimate decisions based on them.
§ EPISTEMIC BUBBLES AND STRATEGIC SCIENTIFIC COMMUNICATION
Idealist political philosophy proposes that every opinion must have access to and be evaluated rationally by a power-free discourse <cit.>. Even pragmatic or rational choice approaches assume a minimum of extended rationality in the pursuit of the players' interests. However, both conceptions seem out of date considering the rise of echo chambers and epistemic bubbles <cit.>.
An epistemic bubble is a social epistemic structure in which other relevant voices have been left out, perhaps accidentally. An echo chamber is a social epistemic structure from which other relevant voices have been actively excluded and discredited <cit.>.
Although Nguyen makes the valid point that echo chambers should be treated as a different phenomenon, both can be treated as roadblocks in strategic communication. Epistemic bubbles also include filter bubbles <cit.> <cit.> <cit.>, <cit.>, where automated personalization leaves digital citizens stranded on an island of their own beliefs. More generally, epistemic bubbles are the limits even an approximately informed <cit.> citizen has when it wants to learn about public events. Conspiracy theories are particularly problematic from a governmental standpoint <cit.> <cit.>. These are echo chambers that repeat beliefs that are not true according to the institutional position. These bubbles can be treated as opposites to peer-reviewed expert communication.
The concept of a bubble has a psychological, a sociological, and a sociometric model attached. From the psychological perspective, there are natural limits to information processing in the human mind. These limits are adequately modeled by the cognitive load theory <cit.>. The more fractured the public space becomes, the harder it is to follow the discourse, even if the motivation is high. The sociological perspective is called 'groupstrapping' <cit.>. Boyd extends the communication model from Nguyen and adds group effects into the equation. Finally, the sociometric definition defines the bubbles as social network structures <cit.>. The communication flows are paths along the edges. The vertices are the manifestations of the information in communicative action. The edges are the people that communicate. Where the discourse perspective makes it possible to connect to rhetorical studies and audience use, the network model allows for concepts from computer science to spill over, which is necessary to study the effects of the algorithms involved in personalization and the spread of news.
In order to measure the effectiveness of strategic communication, the classical approach addresses the effect the particular expert communication had on the public opinion. For instance, if a politician spoke out pro vaccination
the effectiveness would be measured in terms of how the public opinion on vaccinations has changed or how many people got vaccinated. However, this kind of model creates a black box around the area where the strategic communication attempt
was filtered: the social media. Although social media platforms neither bear the sole responsibility for shaping the perception of the communicated information nor are the only social force that influences the epistemic bubbles, they are of interest as they represent a view on the communication process that can be accessed directly in terms of data acquisition without creating an observer bias.
This way, the effect of strategic communication can be operationalized as the size of the created epistemic bubble <cit.> and its characteristics.
The latter will be analyzed from two perspectives. First, it will be (re-)constructed as a conjunctive room of experience <cit.> to generate a socio-genetic typology of the bubbles. Second, a bubble will be characterized as a social network structure <cit.>. Here linguistics and natural language programming are applied.
However, the qualities of group communication structures in social media do not directly transfer to a measure of political strategic communication. Strategic communication can be viewed as successful in two dimensions. First, reaching a high percentage of citizens and second, stimulating meaningful and productive discourses that legitimate political decision making in democracies. Here, deliberation theory can be applied in order to quantify the effects in terms of improving discourses and transparent penetration of the public sphere <cit.> .
Another aspect is the question of the content of the message send by the strategic communication act. <cit.> discuss several social changes that influence the communication of facts that are based on scientific research:
* the possibility for scientists to communicate directly to the public
* the competition from unfounded information
* the influence of the social media platforms and their algorithms
* the competition for trust and simplicity
Regarding the topic of climate change, <cit.> provides a comprehensive view of the different angles from which the connection between climate change and Twitter has been investigated. In general, politicians' use of tweets is less investigated. <cit.> sees signs that arguing is not one of the normal uses of Twitter for U.S. Congress members (presenting information is the strongest). The general picture seems to be that politicians' use of Twitter, in qualitative as well as quantitative studies, is focussed on transparency, mini-press releases, outreach and sharing information <cit.>.
A clear quantitative measure for a certain group of strategic communicators is the number of actual replies in a conversation thread. Here, only 3.7% of the tweets of members of offices are replies <cit.>. This showcases that the strategic intentions and use of social media differ between groups of actors. This motivates the goal of this paper: mapping out the main actors of strategic scientific communication in relevant fields and analyzing the different ways they approach deliberation on social media.
Assuming that these results can be generalized to other countries (i.e. Germany) and to politicians that are concerned with the policy in question (climate change), a pattern of low engagement in discussions and a high ratio of broadcast communication can be assumed.
Hypothesis 1: Politicians have a higher focus on outreach than other groups.
Hypothesis 2: Scientists have a higher focus on information sharing than other groups.
Hypothesis 3: Activists have a higher focus on debating issues than other groups.
Hypothesis 4: Journalists' communication style includes debating issues but also information sharing and outreach.
Hypothesis 5: Governmental organisations behave similarly to politicians in their communication style.
Hypothesis 6: Politicians/governmental organisations, scientists, journalists and activists can be clustered according to their conversational styles.
The actor groups are defined by the sample drawn which included categorizing the different accounts. Further than that outreach, debating issues and information sharing need to be measured in order to test the hypotheses.
Outreach can be approximated using the out-degree of the nodes in the reply-tree and the root dominance[The Engagement API from Twitter is currently Business Only which prevents using views as another measurement]. Of course, the out-degree assumes that a high number of replies corresponds with the intention of achieving the latter. Posting original messages rather than entering an existing conversation does not prove the intent of outreach, but it may be seen as a strong indicator. Furthermore, the intention of broadcasting for the purpose of outreach corresponds negatively with engagement in debate (or with a low number of posts).
Debating issues can be measured much more directly: high engagement is assumed to correspond to high author vision[Defined as the aggregated likelihood of an author having seen previous posts in a conversation. This measure was developed and published by the authors and is available on arXiv.org: Julian Dehne and Valentin Gold (2023), Consistent, Central and Comprehensive Participation on Social Media], a high overall centrality of the posts in the author graph and a high quantity of posts.
Information sharing can be measured by the comparatively longer texts, the presence of links and a higher depth of the conversation with a lower outdegree of the informational post. Longer texts and the presence of links make sense intuitively. Having fewer replies but longer discussions differentiates the intention of sharing information that lead to new discussions versus broadcasting information for the sake of outreach.
§ STUDY DESIGN
§.§ Sampling
The sample of this paper consists of 182 Twitter users in total, separated into the groups politicians, activists, scientists, governmental organisations, non-governmental organisations and journalists.
There are 24 politicians in the sample, 9 of whom are tweeting in English and 15 in German. Most of the German users are part of the Green Party.
Furthermore, there are 12 English-speaking and 13 German-speaking users that are considered climate activists. They are usually connected to Fridays For Future or other climate activist groups such as Ende Gelände or Extinction Rebellion and claim to be climate activists in their Twitter bio.
This paper also looks at 34 scientists, 15 of whom are English speaking and 19 German. These scientists usually have a Ph.D. in Meteorology, Biology or other natural sciences.
The sample also includes 29 governmental organisations, 5 of which are part of the German government and 24 of which are tweeting in English. Most of the latter are part of international unions such as the UN or the EU, or part of English-speaking governments.
45 of the users in our sample are considered non-governmental organisations. In the context of the discussions of climate change, climate activist groups such as Fridays For Future, Extinction Rebellion, Ende Gelände and Greenpeace are also considered NGOs. In fact, Fridays For Future's local groups take up most of the NGO sample. 37 of the NGOs are tweeting in German, the rest in English. The sample for NGOs also includes informational Twitter accounts that regularly tweet about climate change, such as 'taz Klima', an account run by the German newspaper 'taz'.
The last group in the sample are journalists. This paper looks at 25 different journalists, 10 of whom are tweeting in English and 15 in German. They are writing for different reliable newspapers and most of them specialize in topics about climate change.
The following restrictions were in place during download:
* The root post, the post at the beginning of the downloaded conversation, was written by one of the climate authors.
* The conversation is longer than 5 posts.
* The root post has to contain a word usually used in the context of the discussion of climate change.
* The root author's profile is public
These restrictions are meant to rule out those conversations that are too short to analyse and to find conversations that revolve around topics of climate change. 14 authors (8 organisations, 3 journalists, 2 activists and one scientist) are excluded from the sample because none of their tweets fit the criteria. 8234 conversations containing around 1.8 million tweets are analysed in the following steps. Around 23,000 of the tweets were written by climate change authors.
§.§ Conversational Properties
Before explaining the approach taken, other more common alternatives need to be discarded: studying the conversational style with a qualitative approach leads to a detailed and succinct picture of conversational practices. For instance, <cit.> looks at three political leaders in comparison. However, for a comparison of groups of actors this creates too big a workload. Another typical way to analyze intention and patterns of social media engagement is natural language processing. Although there most certainly will be markers of outreach or raising awareness within the language used, the model would need training and would only be applicable to a specific platform (Twitter) and a general field pertaining to the topic in question. Conversely, the reply-tree is a platform-independent unit of analysis. It is also agnostic towards the topic or writing styles involved.
The reply-tree itself can indicate what kind of discussion is at hand. For instance, a discussion with a high depth (length of the longest reply-chain) more likely resembles an offline deliberation than a mushroom structure where there is one original post and many replies to this root post. Another metric is the root author dominance, which defines the probability that any post of the reply tree is authored by the root author. More often than not (and more so on Reddit than on Twitter) the root author dominates the discussion, which leads to a reduced deliberative quality.
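These reply-tree properties are straightforward to compute once a conversation is stored as a tree. The sketch below assumes a networkx DiGraph whose edges point from a post to its replies and whose nodes carry an 'author' attribute; this data model is an assumption made for illustration, not part of the Twitter API:

import networkx as nx

def tree_metrics(tree: nx.DiGraph, root):
    # depth: length of the longest reply chain starting at the root post
    depth = max(nx.shortest_path_length(tree, root).values())
    # branching factor: mean number of direct replies over posts that received replies
    internal = [n for n in tree.nodes if tree.out_degree(n) > 0]
    branching = sum(tree.out_degree(n) for n in internal) / len(internal)
    # root author dominance: share of posts written by the root author
    root_author = tree.nodes[root]["author"]
    dominance = sum(tree.nodes[n]["author"] == root_author for n in tree.nodes) / tree.number_of_nodes()
    return depth, branching, dominance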
Using a combination of the reply-tree structure and the knowledge of which author has written which post, one can predict whether or not an author has seen a lot of the replies. The simple model uses the two assumptions mentioned previously: an author has seen a post with a probability depending on its distance from the root post combined with the distance of the next answer by the same author in the reply tree.
ζ := P(SEEN|(V_j,V_i)) = ∑(1/2)^(| path(V_j,V_i) |-1)
The equation <ref> reads as: the probability zeta of having seen node j is the average sum of the decay function of the path length between all the nodes i written by the given author and the node j. For example if the author has only written one subsequent answer to a post and this answer has a path distance of two replies to the post j than the probability of having seen j for the author would be 1/2^(2-1)=0.5. If the path distance was 1 for a direct reply the exponent would be 0 and the probability 1. This measure is computed as an average for all existing path between node j and nodes i of the given author. Analogously, the root distance can be defined as
ϑ := P(SEEN|(V_j,V_i)) = ∑(1/4)^(| path(root,V_j) |-1)
⇒ P(SEEN) = ζ∪ϑ
This measure models the distance to the root post as more relevant than the position of the reply, owing to the dominant visual position of the original post on most platforms.
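The sketch below gives one reading of these two measures: ζ is averaged over the author's posts, ϑ uses the distance to the root, and the union in equation <ref> is evaluated as the probability of the union of two independent events. All three choices are interpretive assumptions of this sketch rather than details fixed above.

# Probability that the author of the posts in author_nodes has seen post j (a sketch).
import networkx as nx

def seen_probability(tree, author_nodes, j, root):
    """tree: undirected networkx Graph of the reply tree; author_nodes: posts written by the author."""
    others = [i for i in author_nodes if i != j]
    if not others:
        return 0.0
    zeta = sum(0.5 ** (nx.shortest_path_length(tree, i, j) - 1) for i in others) / len(others)
    theta = min(1.0, 0.25 ** (nx.shortest_path_length(tree, root, j) - 1))
    return zeta + theta - zeta * theta                # union of the two events, assumed independent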
By using the authors as the nodes and one author having replied to another as directed edges, one can derive a directed graph from the reply tree. Although this graph ignores the intensity of user interactions, it represents the centrality of users within the conversation adequately. In social network theory there are three standard measures of centrality: betweenness centrality, closeness centrality and Katz centrality. In the context of this paper, centrality together with the number of posts can be used to indicate engagement and participation in the conversation. The lack thereof, together with fewer posts, can indicate information-sharing behaviour, as providing information in many cases lets the thread run dry because it does not stimulate an emotional response.
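A sketch of the author-level interaction graph and the three centrality measures, using networkx, is shown below; the Katz attenuation factor is an arbitrary choice of this sketch and must stay below the reciprocal of the largest eigenvalue of the adjacency matrix.

# Author graph and centralities (a plausible implementation, not the authors' code).
import networkx as nx

def author_centralities(replies):
    """replies: iterable of (replying_author, replied_to_author) pairs."""
    g = nx.DiGraph()
    g.add_edges_from(replies)
    return {
        "betweenness": nx.betweenness_centrality(g),
        "closeness": nx.closeness_centrality(g),
        "katz": nx.katz_centrality(g, alpha=0.005),   # small alpha chosen to ensure convergence
    }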
§ RESULTS
Figure <ref> shows the mean of the branching factor, depth, centrality, baseline vision and root dominance for each author group. To make these parameters comparable, each variable was min-max normalized and the means of the normalized variables are reported.
There are a few indicators that hypothesis <ref> is true, i.e. that politicians have a higher focus on outreach than other groups. The number of posts by the root author in each conversation is relatively low. However, in relation to the number of all posts in a conversation (the 'root dominance'), the number of posts by root authors who are politicians is neither particularly high nor low. These calculations show ambiguous results. The total count of tweets in conversations that were started by politicians is also relatively high: conversations by politicians contain an average of around 200 tweets. This supports hypothesis <ref> because it suggests that politicians' tweets spark interest.
Although scientists are not the biggest group in the sample, almost one third of the downloaded tweets were written by scientists. This could mean that scientists are more active than other groups or tweet more frequently, which supports hypothesis <ref>, that scientists are focused on sharing information. Around 50% of the tweets by scientists contain links, an indicator that almost half of the downloaded tweets contain further information on the shared topic. However, around 60% of the tweets by governmental as well as non-governmental organisations contain links. As figure <ref> shows, the baseline vision of scientists is low in comparison to other groups. This indicates that scientists are not particularly interested in the conversation following their tweets and are therefore not interested in discussing topics. Thus, the calculations do not show clear results for hypothesis <ref>.
Around 63% of the downloaded tweets that were written by activists mention other Twitter users, which supports hypothesis <ref> that activists are focused on debating issues with others. Only around 35% of these tweets contain links, so one could conclude that activists are not focused on sharing information. The tweets by the activists in our sample have a much higher number of replies than those of the other groups: on average there are 800 tweets in the conversations by activists. This might be due to outliers such as conversations by Greta Thunberg or Luisa Neubauer, which usually have a very high engagement. These outliers were excluded from the sample for the visualisation in figure <ref>. Even without the outliers, activists have by far the highest reply count. As shown in figure <ref>, tweets by activists also have a high branching factor in comparison to other author groups. This shows that activists generally have a high engagement on their tweets. We conclude that activists are in fact more focused on debating issues than other groups and that the results support hypothesis <ref>.
Only around 40% of the tweets by journalists contain links, which might indicate that they are less focused on sharing information than presumed in hypothesis <ref>. The indicators that point to a debating conversational style are not found in the calculated variables for journalists: they are the group with the lowest baseline vision and the lowest centrality of all groups. Furthermore, figure <ref> shows that journalists are the group with the highest root dominance. This indicates that they are not focused on outreach but rather on debating issues, since a high root dominance means that the root author is involved in the conversation they started.
Tweets by governmental organisations have an average of 250 characters, whereas tweets by politicians have an average of 220 characters. 61% of the tweets by governmental organisations contain links, while 51% of the tweets by politicians do. Furthermore, only 41% of tweets by politicians mention other Twitter users, while 61% of the tweets by governmental organisations do. Contrary to what was expected in hypothesis <ref>, governmental organisations and politicians do not behave similarly in their communication style. Other indicators also support this statement: organisations have a much higher root dominance and author baseline vision than politicians do, and in general the engagement in conversations by organisations is by far the lowest; on average they have the lowest number of replies. The calculations show that politicians and governmental organisations on average have very different results for almost every calculated variable, except for the number of posts by the root author in each conversation, which is relatively low for both groups. The latter supports the hypothesis that both groups are more focused on outreach.
§ CONCLUDING REMARKS
In conclusion, the study reveals that politicians' communication style aligns with outreach, activists focus on debating issues, and journalists have a more debating-oriented style. However, the results challenge the hypotheses regarding scientists' information sharing focus and the assumed similarity between governmental organizations and politicians' communication style. These findings provide insights into the communication patterns of different author groups on Twitter.
The paper also highlights the use of the more comprehensive concept of actor involvement in social media, drawing on strategic communication research and methods from social network analysis. Further research is required to investigate how these measures could be validated by unsupervised methods. For instance, if author vision and centrality clustered actor groups reliably, these measures could be used to identify hitherto unknown actor groups and map out epistemic bubbles.
Another application of the methodology would be to analyze epistemic bubbles directly and compare the rhetoric of the leaders to that of the outliers. In this case, the sampling would have to focus on finding discussions that belong to a clearly defined bubble rather than using the actors as representatives of their guild.
|
http://arxiv.org/abs/2306.03261v1
|
20230605212600
|
On Lagrange multipliers of the KKT system in Hilbert spaces
|
[
"Zhiyu Tan"
] |
math.OC
|
[
"math.OC"
] |
Abstract: In this paper we develop a new decomposition framework to deal with Lagrange multipliers of the Karush-Kuhn-Tucker (KKT) system of constrained optimization problems and variational inequalities in Hilbert spaces. It is different from existing frameworks based on separation theorems. We introduce the essential Lagrange multiplier and establish the basic theory of this new multiplier. The essential Lagrange multiplier poses essentially different existence results in finite and infinite-dimensional spaces. It can also be used to give an essential characterization of the convergence of multipliers generated by the classical augmented Lagrangian method. Our analysis reveals that the essential Lagrange multiplier is at the core of both theories and applications of Lagrange multipliers.
Keywords: Hilbert spaces, Constrained optimization, KKT system, Weak form asymptotic KKT system, Essential Lagrange multiplier, Classical augmented Lagrangian method.
Mathematics Subject Classification: 46N10, 49J27, 90C25, 90C46, 90C48.
§ INTRODUCTION
The KKT (Karush-Kuhn-Tucker) system and the related Lagrange multipliers are of great significance to the theories and algorithms of constrained optimization problems (cf. <cit.>).
In this paper we attempt to move away from the classical approach based on separation theorems to develop a new framework to investigate Lagrange multipliers of the KKT system of constrained optimization problems, which can also cover some cases of variational inequalities related to constrained optimization problems. In our approach we first construct a surrogate model that should share the same KKT system with the optimization problem at a given local minimizer by making use of the linearization of the problem, and then discuss the KKT system of the surrogate model at the minimizer. The approach here is based on a key observation that the KKT system only involves the linearized information of the optimization problem at a minimizer and is somehow a reformulation of the first order necessary condition with respect to the linearizing cone at the minimizer.
For the surrogate model, we will prove the following existence theorem (cf. Section <ref>).
For a given minimizer u^* of the constrained optimization problem (<ref>), the surrogate model exists for all f∈ℱ(u^*) at the minimizer if and only if Guignard's condition (<ref>) holds, where ℱ(u^*) is the set of all Fréchet differentiable objective functions which have a local constrained minimizer at u^*.
In general the surrogate model is a special case of the following model problem
min θ(u) subject to Su ∈ K,
where 𝒰 and 𝒳 are real Hilbert spaces, ∅≠ K⊆𝒳 is closed and convex, and S is a bounded linear operator from 𝒰 to 𝒳. According to the Riesz representation theorem (cf. <cit.>), we set 𝒰' = 𝒰 and 𝒳' = 𝒳. We assume that the feasible set R(S)∩ K≠∅, where R(S) is the range of S and θ(u) is continuously Fréchet differentiable and strongly convex on 𝒰, i.e., there exists c_0>0 such that
⟨ u - v, D_uθ(u) - D_uθ(v)⟩_𝒰≥ c_0 u - v^2_𝒰,
where D_uθ(v) is the first order Fréchet derivative of θ(·) at v, ⟨·, ·⟩_𝒰 is the inner product of 𝒰 and ·_𝒰 is the induced norm. The inner product and the induced norm on 𝒳 are denoted by ⟨·, ·⟩ and · respectively.
It follows from the assumptions on the model problem (<ref>) and the classical convex optimization theory that there exists a unique global minimizer u^* of the model problem (<ref>). We will investigate the KKT system of the model problem at u^*, and the results can be applied to the surrogate model. To avoid separation theorems, we use an optimization procedure regularization approach to derive the KKT system at u^*, which will be realized by the classical augmented Lagrangian method (ALM, for short)(cf. <cit.>) in this paper. By carrying out the convergence analysis of the classical ALM without using any information of Lagrange multipliers of the model problem (<ref>) at u^*, we will prove the following theorem (cf. Appendix A).
A feasible point u^* is a global minimizer of the model problem (<ref>) if and only if there exists {λ^k}_k=1^+∞⊂𝒳 such that the following weak form asymptotic KKT system (W-AKKT, for short) holds
{ ⟨ D_uθ(u^*), v⟩_𝒰 + lim_k→ +∞⟨λ^k, Sv⟩ = 0 ∀ v∈𝒰;
- lim sup_k→+∞⟨λ^k, ζ - Su^*⟩≥ 0 ∀ ζ∈ K.
.
Motivated by the results in Theorem <ref>, we introduce the essential Lagrange multiplier.
An element λ^*∈R(S) is called an essential Lagrange multiplier of the model problem (<ref>) at u^* if it satisfies
{ ⟨ D_uθ(u^*), v⟩_𝒰 + ⟨λ^*, Sv⟩ = 0 ∀ v∈𝒰;
- ⟨λ^* ,ζ - Su^*⟩≥ 0 ∀ ζ∈K∩ R(S),
.
where R(S) and K∩ R(S) are the closure of R(S) and K∩ R(S), respectively.
We also recall the definition of the proper Lagrange multiplier and the classical KKT system as follows (cf. <cit.>).
An element λ̅∈𝒳 is called a proper Lagrange multiplier of the model problem (<ref>) at u^* if it satisfies the classical KKT system
{ ⟨ D_uθ(u^*), v⟩_𝒰 + ⟨λ̅, Sv⟩ = 0 ∀ v∈𝒰;
- ⟨λ̅ ,ζ - Su^*⟩≥ 0 ∀ ζ∈ K.
.
Note that the essential Lagrange multiplier of the model problem (<ref>) is actually the proper Lagrange multiplier of the optimization problem
min θ(u) subject to Su ∈K∩ R(S)⊂R(S),
which implies that the essential Lagrange multiplier is only related to the feasible set of the model problem (<ref>). This observation inspires us to investigate the proper Lagrange multiplier with the help of the essential Lagrange multiplier. More details on the essential Lagrange multiplier will be given in Section <ref>. These results indicate that the essential Lagrange multiplier is a fundamental concept in the theory of Lagrange multipliers of constrained optimization.
As an application of the essential Lagrange multiplier, we will consider the convergence of the multipliers generated by the classical ALM for the model problem (<ref>). Our results show some equivalence between the convergence of the multipliers and the existence of the essential Lagrange multiplier (see Theorem <ref>). This indicates that the essential Lagrange multiplier is of fundamental importance in the application of Lagrange multipliers.
The rest of the paper is organized as follows. In Section <ref> a general optimization problem and the related variational inequalities are given, and some results of the surrogate model are also presented there, especially the proof of Theorem <ref>. A thorough discussion of the essential Lagrange multiplier and the proper Lagrange multiplier of the model problem (<ref>) is included in Section <ref>. The results in this section also theoretically confirm the necessity of using asymptotic or approximate KKT systems (cf <cit.>) to give the optimality conditions of constrained optimization problems in infinite-dimensional spaces. Section <ref> is devoted to some further applications of the W-AKKT system. We give an elementary proof of the existence of the proper Lagrange multiplier under Robinson's condition that is widely used in the theories and applications of optimization problems in both finite and infinite-dimensional spaces (cf. <cit.>). An application of the theory to optimal control problems with pointwise constraints is given in Section <ref>. The paper ends up with some concluding remarks in Section <ref>. Details for the proof of Theorem <ref> are provided in Appendix <ref>.
In this paper we use the standard notations from functional analysis, convex analysis and partial differential equations, see for example in <cit.>.
§ A GENERAL OPTIMIZATION PROBLEM AND THE SURROGATE MODEL
Let us consider the following general optimization problem
min f(u) subject to G(u) ∈𝒦,
where f: 𝒰→ℝ, G: 𝒰→𝒳, 𝒦 is a closed and convex set in 𝒳, and 𝒰, 𝒳 are two real Hilbert spaces. Assume that G(u)∩𝒦≠∅ and u^* is a minimizer of the optimization problem (<ref>). We will investigate the KKT system of the optimization problem (<ref>) at u^*. At present we assume that f and G are Fréchet differentiable, and we will extend the results to some nonsmooth cases later. As aforementioned, we will denote by ℱ(u^*) the set of all Fréchet differentiable objective functions which have a local constrained minimizer at u^*.
§.§ Some preliminary results
Let M be the feasible set, i.e.,
M = {u∈𝒰: G(u)∈𝒦}.
We denote by T(M, u̅), T_w(M, u̅) and L(𝒦, u̅) the sequential tangent cone, the weak sequential tangent cone and the linearizing cone at u̅∈ M respectively, which are defined by
T(M, u̅) = {v∈𝒰: ∃{u_n}⊂ M, {t_n}⊂ℝ^+, u_n →u̅, t_n→ 0^+, 1/t_n(u_n - u̅)→ v},
T_w(M, u̅) = {v∈𝒰: ∃{u_n}⊂ M, {t_n}⊂ℝ^+, u_n →u̅, t_n→ 0^+, 1/t_n(u_n - u̅)⇀ v}
and
L(𝒦, u̅) = {tv∈𝒰: G(u̅) + G'(u̅)v∈𝒦, ∀ t > 0}.
Let C⊂𝒰. The polar cone of C is defined by
C^∘ = {v∈𝒰 : ⟨ v, w⟩_𝒰≤ 0 ∀ w∈ C}.
The following property of the sequential tangent cone, the weak sequential tangent cone and the linearizing cone is crucial to establish our theory.
If G is a bounded linear operator, for any u̅∈ M, it holds
L(𝒦, u̅) = T(M, u̅) = T_w(M, u̅).
Let v∈ T(M, u̅), i.e., there exist {u_n}⊂𝒰 and {t_n}⊂ℝ^+ such that
v = lim_n→ +∞1/t_n(u_n - u̅), t_n→ 0^+, u_n∈ M,
and set v_n = u_n - u̅.
Since G is linear and u_n∈ M, we have
G(u̅) + G'(u̅)v_n = G(u̅ + v_n) = G(u_n)∈𝒦,
which implies 1/t_nv_n ∈ L(𝒦, u̅) and v∈L(𝒦, u̅) by v = lim_n→ +∞1/t_nv_n. This leads to
T(M, u̅) ⊂L(𝒦, u̅).
For any 0≠ v∈ L(𝒦, u̅), there exist v_0∈𝒰 and t_0 > 0, such that v = t_0v_0 and
G(u̅ + v_0) = G(u̅) + G(v_0) = G(u̅) + G'(u̅)v_0 ∈𝒦.
Let u_n = u̅ +1/nv_0. Since 𝒦 is convex, we have
G(u_n) = G(u̅ +1/nv_0) = n-1/nG(u̅) + 1/nG(u̅ + v_0) ∈𝒦,
which implies u_n ∈ M for any n∈ℕ^+.
Taking t_n = 1/(nt_0)>0, it follows
v = lim_n→ +∞1/t_n(u_n - u̅),
which gives v∈ T(M, u̅) and
L(𝒦, u̅)⊂ T(M, u̅).
Note that T(M, u̅) is closed (cf. <cit.>). Therefore,
L(𝒦, u̅) = T(M, u̅).
Since G is a bounded linear operator, M is closed and convex. This further implies
T(M, u̅) = T_w(M, u̅)
by Proposition 6.1 of <cit.>.
This completes the proof.
§.§ A first order necessary condition
According to the classical optimization theory, at the minimizer u^*, the following first order necessary condition holds (cf. <cit.>, <cit.> or <cit.>)
⟨ D_uf(u^*), v⟩_𝒰≥ 0 ∀ v∈ T_w(M,u^*),
which is equivalent to
-D_uf(u^*) ∈ T_w^∘(M,u^*).
We can also consider the variational inequality: Find u^*∈ M such that
⟨ F(u), v⟩_𝒰≥ 0 ∀ v∈ T_w(M, u),
where F: 𝒰→𝒰' (= 𝒰) is a given mapping. It is obvious that at a solution point u^*, there holds
⟨ F(u^*), v⟩_𝒰≥ 0 ∀ v∈ T_w(M, u^*),
which is in the same form of (<ref>).
Therefore, we can deal with these two problems in the same framework, and we will only give the arguments for (<ref>).
§.§ The surrogate model
In this part we will give the definition of the surrogate model at a minimizer of the optimization problem (<ref>) and prove a fundamental theorem in our theory, i.e., Theorem <ref>.
Note that the linearization problem of (<ref>) at u^* is
min f(u^*) + ⟨ D_uf(u^*), u - u^*⟩_𝒰 subject to G(u^*) + G'(u^*)(u - u^*) ∈𝒦,
which is equivalent to
min f(u^*) + ⟨ D_uf(u^*), u - u^*⟩_𝒰 subject to G'(u^*)u ∈𝒦 - G(u^*) + G'(u^*)u^*.
Let us consider the following optimization problem
min f(u^*) + ⟨ D_uf(u^*), u - u^*⟩_𝒰 + c/2u - u^*_𝒰^2 subject to G'(u^*)u ∈ K,
where c>0 and K= 𝒦 - G(u^*) + G'(u^*)u^*. Note that the optimization problem (<ref>) is a special case of the model problem (<ref>).
The feasible set of this problem is
M̃ = {u∈𝒰: G'(u^*)u∈ K},
and we have some key observations
{
K - G'(u^*)u^* = 𝒦 - G(u^*);
G'(u^*)u^*∈ K ⟺ G(u^*)∈𝒦.
and
L(K,u^*)
= {tv∈𝒰: G'(u^*)u^* + G'(u^*)v∈ K, ∀ t > 0 }
= {tv∈𝒰: G'(u^*)u^* + G'(u^*)v∈𝒦 - G(u^*) + G'(u^*)u^*, ∀ t > 0}
= {tv∈𝒰: G(u^*) + G'(u^*)v∈𝒦, ∀ t > 0}
= L(𝒦, u^*).
The classical KKT system (<ref>) of the optimization problem (<ref>) holds at u^* with a proper Lagrange multiplier λ̅∈𝒳, i.e.,
{ ⟨ D_uf(u^*), v⟩_𝒰 + ⟨λ̅, G'(u^*)v⟩ = 0 ∀ v∈𝒰;
- ⟨λ̅, ζ - G'(u^*)u^*⟩≥ 0 ∀ ζ∈ K,
.
if and only if the classical KKT system of the optimization problem (<ref>) holds at u^* with the same proper Lagrange multiplier λ̅, i.e.,
{ ⟨ D_uf(u^*), v⟩_𝒰 + ⟨λ̅, G'(u^*)v⟩ = 0 ∀ v∈𝒰;
- ⟨λ̅, ζ - G(u^*)⟩≥ 0 ∀ ζ∈𝒦.
.
This follows from (<ref>) directly.
Note that if u^* satisfies (<ref>), u^* is the global minimizer of the optimization problem (<ref>).
Meanwhile, according to the convex optimization theory, u^* is the global minimizer of the optimization problem (<ref>) if and only if (cf. <cit.> or <cit.>)
⟨ D_uf(u^*), v⟩_𝒰≥ 0 ∀ v∈ T(M̃,u^*),
which is equivalent to
⟨ D_uf(u^*), v⟩_𝒰≥ 0 ∀ v∈L(𝒦,u^*)
by Lemma <ref> and (<ref>). That is
-D_uf(u^*) ∈L(𝒦, u^*)^∘ = L^∘(𝒦, u^*).
On the other hand, Lemma 4.2 of <cit.> states that
L^∘(𝒦, u^*) ⊂ T_w^∘(M,u^*).
Therefore, the first order necessary condition (<ref>) holds automatically if u^* is the global minimizer of the optimization problem (<ref>).
This inspires us to use the optimization problem (<ref>) to investigate the KKT system of the optimization problem (<ref>) at u^*.
For f∈ℱ(u^*), the optimization problem (<ref>) is called a surrogate model of the optimization problem (<ref>) at u^* if u^* is the global minimizer of the optimization problem (<ref>).
Theorem <ref> establishes the existence results of the surrogate model of the optimization problem (<ref>) at u^*. We recall here Guignard's condition (cf. <cit.>), which is
T_w^∘(M,u^*) = L^∘(𝒦, u^*).
§.§ Proof of Theorem <ref>
For any f∈ℱ(u^*), since u^* is a global minimizer of the optimization problem (<ref>), the condition (<ref>) should be held, i.e.,
-D_uf(u^*)∈ L^∘(𝒦,u^*),
which yields
Dℱ(u^*) ⊂ L^∘(𝒦,u^*).
Here Dℱ(u^*) = {-D_uf(u^*)∈𝒰: f∈ℱ(u^*)}.
Note that Theorem 3.2 of <cit.> gives
Dℱ(u^*) = T_w^∘(M,u^*).
Hence, T_w^∘(M,u^*) = Dℱ(u^*) ⊂ L^∘(𝒦, u^*).
Combining with (<ref>), we arrive at (<ref>).
For the other direction, (<ref>) holds, i.e.,
-D_uf(u^*)∈ T_w^∘(M,u^*)
for any f∈ℱ(u^*). If (<ref>) holds, we have (<ref>) holds, which implies u^* is a global minimizer of the optimization problem (<ref>). Hence, the optimization problem (<ref>) is a surrogate model of the optimization problem (<ref>) at u^*. This completes the proof.
§.§ Nonsmooth cases
In the previous arguments, we assume that f and G are Fréchet differentiable. It is worth noting that the theory in this paper can also be applied to some nonsmooth cases, for example the case where f and G are only semismooth. In this case, we can choose p_u^*∈∂ f(u^*) and S_u^*∈∂ G(u^*) to do the analysis. If there exist p_u^*∈∂ f(u^*) and S_u^*∈∂ G(u^*) such that
* ⟨ p_u^*, v⟩_𝒰≥ 0 ∀ v∈ T_w(M,u^*);
* T_w^∘(M,u^*) = L^∘(𝒦, u^*, S_u^*), where L(𝒦, u^*, S_u^*) is L(𝒦, u^*) with G'(u^*) = S_u^*,
then the surrogate model can be defined as
min f(u^*) + ⟨ p_u^*, u - u^*⟩_𝒰 + c/2u - u^*_𝒰^2 subject to S_u^*u ∈ K = 𝒦 - G(u^*) + S_u^*u^*
for any c>0.
§ THE ESSENTIAL LAGRANGE MULTIPLIER
In this section we will establish the basic theory of the essential Lagrange multiplier and the proper Lagrange multiplier of the model problem (<ref>). As an application of the essential Lagrange multiplier, we will use it to characterize the convergence of the multipliers generated by the classical ALM (see (<ref>)).
Without further explanation, we always assume that u^* is the global minimizer of the model problem (<ref>) in this section.
§.§ The essential Lagrange multiplier
We will establish the existence and uniqueness theory of the essential Lagrange multiplier (see Definition <ref>) here. Our results indicate the existence theory of the essential Lagrange multiplier is different in finite and infinite-dimensional cases. More precisely, the essential Lagrange multiplier always exists in the finite-dimensional case, while in the infinite-dimensional case, this is no longer true.
The essential Lagrange multiplier is unique.
This can be derived from Definition <ref> directly. If there exist two essential Lagrange multipliers λ^*_1∈R(S) and λ^*_2∈R(S) at u^*, it holds
⟨ D_uθ(u^*), v⟩_𝒰 + ⟨λ^*_1, Sv⟩ = 0 ∀ v∈𝒰
and
⟨ D_uθ(u^*), v⟩_𝒰 + ⟨λ^*_2, Sv⟩ = 0 ∀ v∈𝒰,
which follows
⟨λ^*_1 - λ_2^*, Sv⟩ = 0 ∀ v∈𝒰.
This gives λ^*_1 = λ_2^*, and it completes the proof.
The essential Lagrange multiplier exists at the global minimizer u^* of the model problem (<ref>) if and only if
-D_uθ(u^*) ∈ R(S^*),
where S^* is the adjoint operator of S.
If the essential Lagrange multiplier λ^* exists, according to its definition, we have
-D_uθ(u^*) = S^*λ^*,
which means -D_uθ(u^*) ∈ R(S^*).
If -D_uθ(u^*) ∈ R(S^*), there exists λ̅^*∈𝒳 such that
-D_uθ(u^*) = S^*λ̅^*,
which is equivalent to
⟨ D_uθ(u^*), v⟩_𝒰 + ⟨λ̅^*, Sv⟩ = 0 ∀ v∈𝒰.
Since u^* is the global minimizer, by Theorem <ref>, there exists {λ^k}⊂𝒳 such that the W-AKKT system (<ref>) holds. Therefore,
lim_k→ + ∞⟨λ^k, Sv⟩ = ⟨λ̅^*, Sv⟩ ∀ v∈𝒰,
by (<ref>), and then
- ⟨λ̅^*, Sv - Su^* ⟩ = -lim_k→ + ∞⟨λ^k, Sv - Su^*⟩≥ - _k→ + ∞⟨λ^k, Sv - Su^*⟩≥ 0 ∀ v∈𝒰,
by (<ref>). Finally, by the boundedness of λ̅^*, we arrive at
- ⟨λ̅^*, ζ - Su^* ⟩≥ 0 ∀ ζ∈K∩ R(S),
which, together with (<ref>), implies that the restriction of λ̅^* to R(S) is the essential Lagrange multiplier. This gives the existence of the essential Lagrange multiplier.
If R(S) is closed in 𝒳, then the essential Lagrange multiplier exists. Conversely, if the essential Lagrange multiplier always exists at u^* for any K and θ(·) satisfying the assumptions of the model problem (<ref>), then R(S) is closed in 𝒳.
According to Theorem <ref>, there exists {λ^k}_k=1^+∞⊂𝒳 such that
lim_k→ +∞⟨ S^*λ^k + D_uθ(u^*), v ⟩_𝒰 = 0 ∀ v∈𝒰,
i.e., {S^*λ^k}_k=1^+∞ weakly converges to -D_uθ(u^*) in 𝒰.
If R(S) is closed, by the closed range theorem (cf. <cit.>), R(S^*) is closed, which further is weakly closed. Therefore, -D_uθ(u^*)∈ R(S^*) and then by Theorem <ref>, we have the existence of the essential Lagrange multiplier.
Let u_0∈Ker(S)^⊥ be arbitrary, θ(u) = 1/2u_𝒰^2 and K = {Su_0}. In this case the global minimizer u^* = u_0 and D_uθ(u^*) = u_0∈Ker(S)^⊥. Note that Ker(S)^⊥=R(S^*). Since u_0 is arbitrary, by the assumption on the existence of the essential Lagrange multiplier and Theorem <ref>, we have Ker(S)^⊥⊂ R(S^*), which gives R(S^*)⊂ R(S^*). Therefore, R(S^*) = R(S^*), which is equivalent to R(S) = R(S) by the closed range theorem, i.e., R(S) is closed.
Theorem <ref> implies that the condition that R(S) is closed in 𝒳 is sufficient and almost necessary for the existence of the essential Lagrange multiplier.
Note that if R(S) is a finite-dimensional space, R(S) is closed. We have the following corollaries.
If R(S) is a finite-dimensional space, then the essential Lagrange multiplier exists.
If 𝒳 is a finite-dimensional space, then the essential Lagrange multiplier exists.
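In the finite-dimensional case the first condition in Definition <ref> already pins the essential Lagrange multiplier down: at the global minimizer it is the minimum-norm solution of S^*λ = -D_uθ(u^*), which automatically lies in R(S). The following numerical sketch illustrates this observation; the data are illustrative only and chosen to match the redundant-constraint example discussed later in the paper.

# Minimum-norm solution of S^* lam = -grad theta(u^*): in finite dimensions this is the
# essential Lagrange multiplier at the global minimizer (illustrative data only).
import numpy as np

S = np.array([[1.0], [2.0]])            # S: R -> R^2, i.e. the constraints x = 1 and 2x = 2
grad = np.array([1.0])                  # D_u theta(u^*) for theta(u) = u^2/2 at u^* = 1
lam_star, *_ = np.linalg.lstsq(S.T, -grad, rcond=None)
print(lam_star)                         # [-0.2, -0.4], i.e. -(1/5)(1, 2)^T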
The essential Lagrange multiplier can also be used to characterize the optimality of a feasible point.
If the essential Lagrange multiplier exits at a feasible point u^*, then u^* is the global minimizer of the model problem (<ref>).
According to the convex optimization theory, it suffices to show that (cf. <cit.>, <cit.> or <cit.>)
⟨ D_uθ(u^*), v - u^*⟩_𝒰≥ 0 ∀ Sv∈ K.
Let λ^* be the essential Lagrange multiplier. The definition of λ^* gives
⟨ D_uθ(u^*), v - u^*⟩_𝒰
= -⟨λ^*, S(v - u^*)⟩
= -⟨λ^*, Sv - Su^*⟩≥ 0 ∀ Sv∈ K,
which completes the proof.
Furthermore, Theorem <ref> and Theorem <ref> lead to the following corollary.
Assume that R(S) is closed in 𝒳, a feasible point u^* is the global minimizer of the model problem (<ref>) if and only if the essential Lagrange multiplier exists at u^*.
§.§ The proper Lagrange multiplier
According to the definition of the proper Lagrange multiplier (see Definition <ref>) and the definition of the essential Lagrange multiplier (see Definition <ref>), the existence of the proper Lagrange multiplier always implies the existence of the essential Lagrange multiplier and
λ^* = λ̅|_R(S),
where λ̅|_R(S) is the restriction of λ̅ to R(S). In other words, if the essential Lagrange multiplier does not exist, neither does the proper Lagrange multiplier. Therefore, we can consider the theory of the proper Lagrange multiplier under the assumption that the essential Lagrange multiplier exists, and which can be verified by the results in the previous subsection. We can also assume that λ^* ≠ 0. Otherwise λ̅ = 0 is a proper Lagrange multiplier. We will establish the existence and uniqueness theory of the proper Lagrange multiplier under Assumption <ref>.
If the essential Lagrange multiplier does not exist, neither does the classical KKT system. This happens in infinite-dimensional cases as shown in the previous subsection. Therefore, it is necessary to use the asymptotic or approximate KKT system to characterize the optimality in some infinite-dimensional cases, which has been used in the literature (cf. <cit.>) as a technique, but did not confirm its necessity theoretically.
The essential Lagrange multiplier λ^* exists at u^* and λ^* ≠ 0.
Let ζ^* = Su^* and 𝒩(ζ^*,K) be the normal cone to K at ζ^*, i.e.,
𝒩(ζ^*,K) = {λ∈𝒳: -⟨λ, ζ - ζ^*⟩≥ 0 ∀ ζ∈ K}.
Suppose that Assumption <ref> holds. The proper Lagrange multiplier exists at u^* if and only if there exist λ̃∈𝒩(ζ^*,K) and ζ̅_0∈R(S) such that
Ker(λ̃)∩R(S)= Ker(λ^*)∩R(S)
or equivalently
Span{λ̃} + Ker(S^*) = Span{λ^*} + Ker(S^*)
and
{ ⟨λ̃, ζ̅_0 ⟩ > 0,
⟨λ^*, ζ̅_0 ⟩ > 0.
.
A direct calculation gives
Ker(λ̃)∩R(S) = Ker(λ^*)∩R(S)
⟺ [Ker(λ̃)∩R(S)]^⊥ = [Ker(λ^*)∩R(S)]^⊥
⟺ cl{ [Ker(λ̃)]^⊥ + Ker(S^*)} = cl{[Ker(λ^*)]^⊥ + Ker(S^*)}
⟺ [Ker(λ̃)]^⊥ + Ker(S^*) = [Ker(λ^*)]^⊥ + Ker(S^*)
⟺ Span{λ̃} + Ker(S^*) = Span{λ^*} + Ker(S^*).
If the proper Lagrange multiplier λ̅ exists, we have λ̅∈𝒩(ζ^*,K) and λ^* = λ̅|_R(S), which follows
Ker(λ̅)∩R(S) = Ker(λ^*)∩R(S).
Since λ^*≠ 0 on R(S), there exists ζ_0∈R(S) such that
⟨λ^*, ζ_0⟩≠ 0.
We assume that
⟨λ^*, ζ_0⟩ > 0.
Otherwise, we can choose -ζ_0. Taking ζ̅_0 = ζ_0, the condition (<ref>) holds for λ̅.
Hence, we can take λ̃ = λ̅.
Now we prove the other direction.
Let λ̃ be an element in 𝒩(ζ^*,K) such that (<ref>) and (<ref>) hold.
Let
λ̅ = t_0λ̃,
where t_0 = ⟨λ^*, ζ̅_0⟩/⟨λ̃, ζ̅_0⟩ and ζ̅_0 satisfies (<ref>).
We will prove that λ̅ is a proper Lagrange multiplier. Note that the condition (<ref>) implies t_0>0, which further gives λ̅∈𝒩(ζ^*,K).
Now, by the definition of the proper Lagrange multiplier, we only need to show that
λ^* = λ̅|_R(S).
By (<ref>) and (<ref>), we have R(S) = Ker(λ^*)∩R(S) + Span{ζ̅_0} = Ker(λ̃)∩R(S) + Span{ζ̅_0}. Therefore, for any ζ∈R(S), there exist s∈ℝ and ζ_0∈Ker(λ̃)∩R(S) such that ζ = ζ_0 + sζ̅_0. It follows
⟨λ̅, ζ⟩ = t_0 ⟨λ̃, ζ⟩ = t_0 ⟨λ̃, ζ_0 + sζ̅_0⟩ = t_0s⟨λ̃, ζ̅_0⟩ = s⟨λ^*, ζ̅_0⟩ = ⟨λ^*, ζ_0 + sζ̅_0⟩ = ⟨λ^*, ζ⟩,
which implies λ^* = λ̅|_R(S).
Suppose that Assumption <ref> holds.
There exists a unique proper Lagrange multiplier at u^* if and only if (1) there exists λ̃∈𝒩(ζ^*,K) which satisfies (<ref>) and (<ref>), and (2) for any λ̂∈𝒩(ζ^*,K) which satisfies (<ref>) and (<ref>), it holds Ker(λ̂) = Ker(λ̃).
Since all the proper Lagrange multipliers belong to 𝒩(ζ^*,K) and their restrictions to R(S) (≠{0}) are the same, according to the proof of Theorem <ref>, we have λ̅ = ⟨λ^*, ζ̅_0⟩/⟨λ̃, ζ̅_0⟩λ̃ is the unique proper Lagrange multiplier.
Let λ̅ be the unique proper Lagrange multiplier. According to the condition (<ref>) of Theorem <ref>, λ̅∈𝒩(ζ^*,K), and (<ref>) and (<ref>) hold for λ̅.
Suppose that there exists λ̃∈𝒩(ζ^*,K) which satisfies (<ref>) and (<ref>), it holds Ker(λ̃) ≠Ker(λ̅). By Theorem <ref>, there exists a proper Lagrange multiplier λ̅_0 with Ker(λ̅_0)≠Ker(λ̅). This is a contradiction to the uniqueness of the Lagrange multiplier.
If 𝒳 = R(S) or equivalently Ker(S^*) = {0}, the proper Lagrange multiplier is the essential Lagrange multiplier, which implies that the proper Lagrange multiplier (if exists) is unique.
In the finite-dimensional case, the condition Ker(S^*) = {0} is the LICQ condition. The connections of the LICQ condition and the uniqueness of the proper Lagrange multiplier have been investigated in <cit.>. It has also been proved that the proper Lagrange multiplier is unique if and only if it satisfies SMFC in <cit.> for an optimization problem with both equality and inequality constraints under the assumption that the proper Lagrange multiplier exists. The uniqueness results for the case of general cone constraints can be found in <cit.>.
In the following part we will use the results in quotient spaces to simplify the presentation of the results in Theorem <ref> and Theorem <ref>.
Let Y = Ker(λ^*)∩R(S) and [𝒳] = 𝒳/Y be the quotient space with respect to Y. Since Y = Ker(λ^*)∩R(S) is a closed subspace of 𝒳, [𝒳] is a Banach space under the conventional norm
[ζ]_q = inf{ζ - ζ': ζ' ∈Ker(λ^*)∩R(S)}.
Let K be a set in 𝒳 and [K] = {[ζ]∈ [𝒳] : ζ∈ K }.
For any ζ∈𝒳, there exists a unique ζ_0∈ Y^⊥ such that
[ζ] = [ζ_0], [ζ]_q = ζ_0 ζ_0≤ζ.
This follows directly from the orthogonal decomposition 𝒳 = Y⊕ Y^⊥.
Let
𝒳_Y = { f∈𝒳 : Y⊂Ker(f) }.
It can be checked that 𝒳_Y is a closed subspace of 𝒳.
The space 𝒳_Y is isometrically isomorphic to the dual space [𝒳]' of ([𝒳], ·_q).
For any f∈𝒳_Y, we define F by
F([ζ]) = ⟨ f, ζ⟩ ∀ [ζ]∈ [𝒳].
If [ζ_1] = [ζ_2], we have ζ_1 - ζ_2 ∈ Y⊂Ker(f), which gives F([ζ_1]) = F([ζ_2]) and F is well defined. According to Lemma <ref>, F∈ [𝒳]' and
F_[𝒳]' = f.
On the other hand, for any F∈ [𝒳]', we can define f by
⟨ f, ζ⟩ = F([ζ]) ∀ ζ∈𝒳.
It is obvious that Y⊂Ker(f). Again by Lemma <ref>, we have f∈𝒳_Y and (<ref>) holds.
Define T: 𝒳_Y→ [𝒳]' by
T: f→ F,
where F is defined by (<ref>). By (<ref>), (<ref>) and (<ref>), T is an isometrical isomorphism. This completes the proof.
For any f∈𝒳_Y, let F be defined as in (<ref>). It holds [Ker(f)] = Ker(F).
For any [ζ]∈ [Ker(f)], we can assume ζ∈Ker(f). It gives
F([ζ]) = ⟨ f, ζ⟩ = 0,
i.e., [ζ]∈Ker(F), which yields [Ker(f)]⊂Ker(F).
On the other hand, for any [ζ]∈Ker(F), we have F([ζ]) = 0, which gives ⟨ f, ζ⟩ = F([ζ]) = 0.
This means ζ∈Ker(f), which further implies [ζ]∈ [Ker(f)] and Ker(F)⊂ [Ker(f)]. This completes the proof.
Let K be a convex set and ζ^*∈ K. Then f∈𝒳_Y∩𝒩(ζ^*, K) if and only if T(f)∈𝒩([ζ^*], [K]),
where T is defined by (<ref>) and
𝒩([ζ^*], [K]) = {F̃∈ [𝒳]': -F̃([ζ] - [ζ^*])≥ 0 ∀ [ζ]∈ [K]}.
This follows from the fact
-T(f)([ζ] - [ζ^*]) = - T(f)([ζ - ζ^*]) = -⟨ f, ζ - ζ^*⟩ ∀ ζ∈ K.
Suppose that Assumption <ref> holds. The proper Lagrange multiplier exists at u^* if and only if there exist Λ̅∈𝒩([ζ^*],[K]) and ζ̅_0∈R(S) such that
{ Λ̅([ζ̅_0]) > 0,
⟨λ^*, ζ̅_0 ⟩ > 0.
.
If the proper Lagrange multiplier exists, by Theorem <ref>, there exists λ̃∈𝒩(ζ^*, K) satisfying (<ref>) and (<ref>).
This gives λ̃∈𝒳_Y
and we can define Λ̅∈ [𝒳]' by λ̃ and (<ref>). Since λ̃∈𝒩(ζ^*, K), by Lemma <ref>, we have Λ̅∈𝒩([ζ^*], [K]). According to the consistency condition (<ref>) and the definition of Λ̅, it holds
Λ̅([ζ̅_0]) = ⟨λ̃, ζ̅_0 ⟩> 0.
For the other direction, assume that such Λ̅∈𝒩([ζ^*], [K]) exists. We can define λ̃ by Λ̅ and (<ref>). By Lemma <ref>, (<ref>) and (<ref>), we have
λ̃∈𝒳_Y, λ̃∈𝒩(ζ^*, K) ⟨λ̃, ζ̅_0⟩ >0.
Now by Theorem <ref>, we only need to prove that the compatibility condition (<ref>) holds.
Since λ̃∈𝒳_Y, we have
Ker(λ^*)∩R(S) = Y ⊂Ker(λ̃)∩R(S).
By (<ref>), [R(S)] is not a subspace of Ker(Λ̅), which implies
Ker(Λ̅)∩ [R(S)] = [0],
by dim([R(S)]) ≤ 1. Note that Lemma <ref> gives
[Ker(λ̃)] = Ker(Λ̅).
Then we have
[Ker(λ̃)]∩ [R(S)] = Ker(Λ̅)∩ [R(S)] = [0].
Since
[Ker(λ̃)∩R(S)]⊂ [Ker(λ̃)]∩ [R(S)],
it follows
[Ker(λ̃)∩R(S)] = [0].
This gives
Ker(λ̃)∩R(S)⊂Ker(λ^*)∩R(S).
Together with (<ref>),
we have
Ker(λ̃)∩R(S) = Ker(λ^*)∩R(S).
This completes the proof.
For the uniqueness of the proper Lagrange multiplier, it follows from the proof of Theorem <ref> directly.
Suppose that Assumption <ref> holds. The proper Lagrange multiplier is unique if and only if there exists a unique (up to multiplication by a positive constant) Λ̅∈𝒩([ζ^*],[K]) satisfying (<ref>) for some ζ̅_0∈R(S).
Since the canonical map from 𝒳 to [𝒳] is continuous, the open mapping theorem indicates that if K has an interior point, so does [K]. The converse is not true. This is useful in exploring the existence of the proper Lagrange multiplier.
The following examples are helpful in understanding the previous theoretical results.
We consider the optimization problem
min 1/2[(x_1 -α)^2 + x_2^2] subject to Sx ∈ K,
where α∈ℝ, x∈ℝ^2, S = [ 1 0; 0 0 ] and K⊂ℝ^2 is a closed convex set.
* Let K = K_1, where
K_1 = {(ζ_1,ζ_2)^T∈ℝ^2: ζ_1^2 + (ζ_2 -1)^2 ≤ 1}.
In this case K∩ R(S) = (0,0)^T and the feasible set is
M = {(0,x_2)^T∈ℝ^2: x_2∈ℝ}.
The global minimizer of this problem is x^* = (0,0)^T. The gradient of the objective function at this point is (-α,0)^T and the essential Lagrange multiplier is λ^* = α. Note that the solution of
[ -α; 0 ] + S^*λ = 0
is λ = (α, λ_2)^T, for any λ_2∈ℝ. Since the proper Lagrange multiplier must satisfy the above equation, we assume that λ̅ = (α, λ̅_2)^T for some λ̅_2∈ℝ. Now we consider the condition
- ⟨λ̅, ζ - ζ^*⟩≥ 0 ∀ ζ∈ K,
where ζ^* = Sx^* = (0,0)^T.
It is equivalent to
-αζ_1 - λ̅_2 ζ_2 ≥ 0 ∀ ζ = (ζ_1, ζ_2)^T∈ K.
If α = 0, we can choose λ̅_2 = 0.
If α≠ 0, there is no λ̅_2∈ℝ to satisfy the inequality. This means that unless α=0, the proper Lagrange multiplier does not exist for this problem at the global minimizer. Note that if α≠ 0, we have
𝒩(ζ^*,K):={(0, λ_2)^T∈ℝ^2: λ_2≤ 0}⊂Ker(S^*) :={(0, λ_2)^T∈ℝ^2: λ_2∈ℝ}.
Both the compatibility condition (<ref>) and the consistency condition (<ref>) are not satisfied.
* Let K = K_2, where
K_2 = {(ζ_1,ζ_2)^T∈ℝ^2: ζ_1^2 + (ζ_2 -1)^2 ≤ 1}∖{(ζ_1,ζ_2)^T∈ℝ^2: ζ_1 - ζ_2≤ 0}.
The feasible set M, the global minimizer and the essential Lagrange multiplier are the same as those in the case K = K_1. As before, for the proper Lagrange multiplier λ̅ = (α, λ̅_2)^T we have
-αζ_1 - λ̅_2 ζ_2 ≥ 0 ∀ ζ = (ζ_1, ζ_2)^T∈ K.
On the other hand, the normal cone 𝒩(ζ^*, K) is given by
𝒩(ζ^*, K) = {(λ_1, λ_2)^T∈ℝ^2: λ_2≤ 0, λ_1 + λ_2 ≥ 0}.
* If α<0, (α, λ̅_2)^T∉𝒩(ζ^*, K) for any λ̅_2∈ℝ. Therefore, the proper Lagrange multiplier does not exist. Note that the condition (<ref>) can not be satisfied, while the condition (<ref>) is always true for any 0≠λ∈𝒩(ζ^*, K).
* If α>0, λ̅ is a proper Lagrange multiplier for any λ̅_2∈ [-α, 0]. In this case, both the compatibility condition (<ref>) and the consistency condition (<ref>) can be fulfilled.
Figure 1 gives an illustration of the results above.
[Figure 1: sketches of K_1 and K_2 relative to the line R(S). Left (K = K_1): no proper multiplier λ̅ exists for α ≠ 0. Middle (K = K_2): no λ̅ for α < 0. Right (K = K_2): λ̅ = (α, λ̅_2)^T for α ≥ 0.]
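The failure in the first case can also be checked numerically: scanning candidate multipliers λ̅ = (α, λ_2)^T over a grid of λ_2 values and testing the normal-cone inequality on sampled boundary points of K_1 finds no admissible λ_2. This is only an illustration of the analytical argument above (finite grid, finite sample), not a proof.

# Grid search for a proper multiplier (alpha, lam2) for K = K_1 with alpha = 1 (illustration only).
import numpy as np

alpha = 1.0
t = np.linspace(0.0, 2.0 * np.pi, 400)
boundary = np.stack([np.cos(t), 1.0 + np.sin(t)], axis=1)        # boundary of K_1

def admissible(lam2, pts, tol=1e-9):
    """Check -alpha*zeta_1 - lam2*zeta_2 >= 0 on the sampled points."""
    return np.all(-alpha * pts[:, 0] - lam2 * pts[:, 1] >= -tol)

print(any(admissible(lam2, boundary) for lam2 in np.linspace(-50.0, 50.0, 2001)))   # False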
We consider the optimization problem
min 1/2[(x_1 -α)^2 + x_2^2] subject to Sx ∈ K,
where α> 0, x∈ℝ^2, S = [ 1 0; 0 0 ] and K⊂ℝ^2 is a closed convex set. Let 0≤ 2r< α.
* Let K = K_3, where
K_3 = {(ζ_1,ζ_2)^T∈ℝ^2: (ζ_1 - r)^2 + ζ_2^2 ≤ r^2}.
The feasible set of this example is
M = {(x_1, x_2)^T∈ℝ^2: 0≤ x_1 ≤ 2r}
and the global minimizer of this problem is x^* = (2r, 0)^T. The gradient of the objective function at this point is (2r - α, 0)^T. The essential Lagrange multiplier is λ^* = α - 2r > 0 and ζ^* = Sx^* = (2r, 0)^T.
The proper Lagrange multiplier should be in the form λ̅ = (α - 2r, λ_2)^T for some λ_2∈ℝ. By the condition
- ⟨λ̅, ζ - ζ^*⟩≥ 0 ∀ ζ∈ K,
or equivalently
-(α - 2r)(ζ_1 -2r) - λ_2 ζ_2 ≥ 0 ∀ ζ = (ζ_1, ζ_2)^T∈ K,
we have
λ_2 = 0.
Hence the proper Lagrange multiplier for this example is λ̅ = (α - 2r, 0)^T and it is unique. Note that Robinson's condition (<ref>) holds in this case.
* Let K = K_4, where
K_4 = {(ζ_1,ζ_2)^T∈ℝ^2: (ζ_1 - r)^2 + ζ_2^2 ≤ r^2, ζ_2≥0}.
The global minimizer and the essential Lagrange multiplier are the same as in the previous case. In this case, the proper Lagrange multiplier λ̅ = (α - 2r, λ̅_2)^T for any λ̅_2≤ 0, which is not unique. Note that Robinson's condition (<ref>) is not true in this case.
Figure 2 gives an illustration of the results above.
[Figure 2: sketches of K_3 and K_4 relative to the line R(S). Left (K = K_3): the unique proper multiplier is λ̅ = (α - 2r, 0)^T. Right (K = K_4): λ̅ = (α - 2r, λ̅_2)^T with λ̅_2 ≤ 0, so the proper multiplier is not unique.]
We consider the problem (cf. <cit.>)
min_u∈ H_0^1(Ω)∫_Ω∇ u·∇ u dx subject to ‖u‖_L^2(Ω)^2 =1.
Let
G(u) = ‖u‖_L^2(Ω)^2 and 𝒦 = {1}⊂ℝ.
For this problem, we have
T_w(M, u) = T(M, u) = L(𝒦, u)
for any u. Note that G'(u) is bounded from H_0^1(Ω) to ℝ and R(G'(u)) = ℝ. According to our theory, for a global minimizer u^*∈ H_0^1(Ω), the essential (or proper) Lagrange multiplier λ^*∈ℝ exists and the following KKT system holds
∫_Ω∇ u^*·∇ v dx - λ^*∫_Ω u^*v dx = 0 ∀ v∈ H_0^1(Ω), ‖u^*‖_L^2(Ω) =1.
This is actually the weak form of the following eigenvalue problem
{
-Δ u^* = λ^* u^* Ω;
u^* = 0 ∂Ω;
u^*_L^2(Ω) = 1.
.
§.§ The convergence of multipliers of the classical ALM
The classical ALM for the model problem is given in (<ref>). We will give an essential characterization of the convergence of the multipliers generated by the algorithm.
Let {λ^k}_k=1^+∞ be the multipliers generated in the classical ALM (<ref>).
* If the essential Lagrange multiplier λ^* exists at u^*, then
lim_k→ + ∞⟨λ^k, Sv⟩ = ⟨λ^*, Sv⟩ ∀ v∈𝒰.
* If the restriction of {λ^k}_k=1^+∞ to R(S) weakly converges in R(S) to some element λ^*∈R(S), then λ^* is the essential Lagrange multiplier at u^*.
Note that {λ^k}_k=1^+∞ satisfies (<ref>). The first part follows from (<ref>) and Definition <ref> directly.
For the second part, if the restriction of {λ^k}_k=1^+∞ to R(S) weakly converges in R(S) to some element λ^*∈R(S), i.e.,
lim_k→ +∞⟨λ^k, Sv⟩ = ⟨λ^*, Sv⟩ ∀ v∈𝒰,
then we have
⟨ D_uθ(u^*), v⟩_𝒰 + ⟨λ^*, Sv⟩ = ⟨ D_uθ(u^*), v⟩_𝒰 + lim_k→ +∞⟨λ^k, Sv⟩ = 0 ∀ v∈𝒰
and
-⟨λ^*, Sv - Su^*⟩ = -lim_k→ +∞⟨λ^k, Sv - Su^*⟩≥ -lim sup_k→ +∞⟨λ^k, Sv - Su^*⟩≥ 0 ∀ Sv∈ K,
by (<ref>). Therefore, λ^* is the essential Lagrange multiplier by the boundedness of λ^* and we finish the proof.
The following corollary follows from Theorem <ref> and Theorem <ref> directly.
If R(S) is closed in 𝒳, then the restriction of {λ^k}_k=1^+∞ to R(S) always weakly converges in R(S) to the essential Lagrange multiplier λ^* at u^*.
Let us consider the optimization problem
min 1/2x^2 subject to {
x = 1;
2x = 2.
.
The global minimizer of this problem is x^* = 1 and the essential Lagrange multiplier at x^* is
λ^* = -1/5[ 1; 2 ].
All the proper Lagrange multipliers of this example define the set
Λ = {λ̅ = (λ_1, λ_2)^T∈ℝ^2: λ_1 + 2λ_2 + 1 = 0}.
The iterators of the classical ALM satisfy
{
x^k+1 - 1 = 1/1+5β(x^k - 1);
λ^k+1 + 1/5[ 1; 2 ] = J[λ^k + 1/5[ 1; 2 ]];
λ^k+1_1 + 2λ^k+1_2 +1 = 1/1+5β(λ^k_1 + 2λ^k_2 + 1),
.
where k = 1, 2, …,
J = 1/1+5β[ 1+4β -2β; -2β 1 + β ]
and β > 0.
Note that J is symmetric and positive definite with ‖J‖ = 1, which implies ρ(J) = 1. We have the following convergence results.
* {x^k}_k=1^+∞ converges to x^* = 1.
* {λ^k}_k=1^+∞ is unbounded.
* {λ^k}_k=1^+∞ converges to the essential Lagrange multiplier in R(S), i.e.,
λ^k_1 + 2λ^k_2 → -1 = -1/5(1×1 + 2×2) k→ + ∞.
* The distance between λ^k (k=1, 2, …) and Λ converges to 0.
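These convergence properties can be checked numerically in a few lines. The sketch below runs the classical ALM on this example; the penalty parameter β and the starting multiplier are arbitrary choices of the sketch, not values taken from the paper.

# Classical ALM for min x^2/2 subject to x = 1 and 2x = 2 (a numerical sketch).
import numpy as np

beta, lam = 1.0, np.array([3.0, -7.0])                 # arbitrary penalty and starting multiplier
S, b = np.array([1.0, 2.0]), np.array([1.0, 2.0])      # Sx = (x, 2x)^T, right-hand side (1, 2)^T

for _ in range(50):
    # x-step: minimize 0.5*x^2 + lam.(S x - b) + 0.5*beta*||S x - b||^2, solved in closed form
    x = (beta * (S @ b) - lam @ S) / (1.0 + beta * (S @ S))
    lam = lam + beta * (S * x - b)                      # multiplier update

print(x, lam @ S)                                      # x -> 1 and lam_1 + 2*lam_2 -> -1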
§ REVISITING THE WEAK FORM ASYMPTOTIC KKT SYSTEM
In the previous arguments we notice that in the infinite-dimensional case, the essential Lagrange multiplier may not exist. Here we derive a sufficient condition to guarantee the existence of the essential Lagrange multiplier by the W-AKKT system (<ref>). We will also use the W-AKKT system (<ref>) to give a proof of the existence of the proper Lagrange multiplier under Robinson's condition (cf. <cit.>, <cit.> or (<ref>)). A sufficient condition, which includes Robinson's condition as a special case, to guarantee the existence of the proper Lagrange multiplier is also given. It is worth noting that all these results also give the convergence results of the multipliers generated by the classical ALM for the model problem.
According to the W-AKKT system (<ref>), we have
⟨ D_uθ(u^*), v⟩_𝒰 + lim_k→ +∞⟨λ^k, Sv⟩ = 0 ∀ v∈𝒰.
A linear functional λ̃ on R(S)⊂𝒳 can be defined as
λ̃(ζ) = lim_k→ +∞⟨λ^k, ζ⟩ ∀ ζ∈R(S).
The second condition leads to
lim_k→+∞⟨λ^k ,ζ⟩≤lim_k→+∞⟨λ^k , ζ^*⟩ ⟺ λ̃(ζ) ≤λ̃(ζ^*) ∀ ζ∈ K∩ R(S).
If λ̃ can be extended to a bounded linear functional in R(S)', then we have the essential Lagrange multiplier. Therefore, the existence of the essential Lagrange multiplier is equivalent to the boundedness of the linear functional λ̃ in R(S)'.
If the topology of 𝒳 is too weak to guarantee the boundedness of λ̃, we can set the problem on a small subspace of 𝒳, where a stronger topology can be equipped on the subspace. More precisely, we can set the problem on a subspace 𝒳̃ where R(S)⊆𝒳̃⊆𝒳. In this case, we can define a strong topology on 𝒳̃, which is stronger than that of 𝒳, and the stronger topology will make it easier to guarantee the boundedness of λ̃. Motivated by these arguments, we give the following results.
Let ·_* be a norm on R(S) such that (R(S), ·_*) is a Banach space which is continuously embedded into (𝒳, ·). There exists a unique essential Lagrange multiplier λ̃^*∈ (R(S), ·_*)' which is the dual space of (R(S), ·_*) such that the following KKT system holds, i.e.,
{ ⟨ D_uθ(u^*), v⟩_𝒰 + ⟨λ̃^*, Sv⟩_* = 0 ∀ v∈𝒰;
- ⟨λ̃^* ,ζ - Su^*⟩_*≥ 0 ∀ ζ∈K∩ R(S)^·_*,
.
where ⟨·,·⟩_* is the duality pair of (R(S), ·_*)' and (R(S), ·_*).
Since (R(S), ·_*) is continuously embedded into (𝒳, ·) and {λ^k}⊆𝒳, {λ^k}⊆ (R(S), ·_*)'. By the first condition in the W-AKKT system (<ref>), we know
sup{|⟨λ^k, ζ⟩_*|: k=1,…, +∞}< +∞ ∀ ζ∈ R(S).
According to the uniform boundedness principle (cf. <cit.>), we know that {λ^k}_k=1^+∞ is bounded in (R(S), ·_*)'. Note that the unit ball in (R(S), ·_*)' is weak-* compact. Again, by the first condition in the W-AKKT system (<ref>), we know that there exists λ̃^*∈ (R(S), ·_*)' such that (<ref>) holds. The uniqueness follows from (<ref>) directly.
The W-AKKT system (<ref>) can also be used to derive the existence of the proper Lagrange multiplier under Robinson's condition (cf. <cit.> and <cit.>), which is
𝒳 = S(𝒰) - 𝒦_K,ζ^*,
where
𝒦_K,ζ^* = {t(ζ - ζ^*): ∀ ζ∈ K ∀ t≥ 0}.
The following results cover the results under Robinson's condition and the proof below is based on the generalized open mapping theorem (cf. <cit.>, <cit.> or <cit.>).
Let (𝒳̃,·_*) be a Banach space which is continuously embedded into (𝒳,·). Assume that the following condition holds for (𝒳̃,·_*) at (u^*, ζ^*) = (u^*, Su^*):
𝒳̃ = S(𝒰) - 𝒦_K,ζ^*.
Then there exists a proper Lagrange multiplier λ̃∈ (𝒳̃,·_*)' such that
{ ⟨ D_uθ(u^*), v⟩_𝒰 + ⟨λ̃, Sv⟩_* = 0 ∀ v∈𝒰;
- ⟨λ̃ ,ζ - Su^*⟩_*≥ 0 ∀ ζ∈ K,
.
where ⟨·,·⟩_* is the duality pair of (X̃, ·_*)' and (X̃, ·_*).
Since (𝒳̃,·_*) is continuously embedded into (𝒳,·), we have {λ^k}⊆ (𝒳̃,·_*)'. By (<ref>) and the generalized open mapping theorem (cf. <cit.>, <cit.> or <cit.>), there exists r>0 such that
B_r,𝒳̃⊆ S(B_1,𝒰) - 𝒦_K,ζ^*∩ B_1,𝒳̃,
where B_ρ,V is the ball in V with center at the original and radius ρ>0, V = 𝒳̃ or 𝒰, ρ = r or 1 and B_ρ,V is its closure in V.
Let {x^k}⊆𝒳̃ be a sequence of unit vectors such that
⟨λ^k, x^k⟩_* ≥1/2λ^k_(X̃, ·_*)'.
Then
-rx^k = Sv^k - t_k(ζ^k -ζ^*),
where {v^k}⊆ B_1,𝒰, {t_k(ζ^k-ζ^*)}⊆𝒦_K,ζ^*∩ B_1,𝒳̃ and t_k≥ 0, ∀ k = 1,…, ∞.
Note that
⟨ D_uθ(u^*), v^k⟩_𝒰 + lim_k→ +∞⟨λ^k, Sv^k⟩_* = 0 ∀ k=1,…,∞
and
-_k→ +∞⟨λ^k, t_k(ζ^k - ζ^*)⟩_* ≥ 0 ∀ k=1,…,∞.
Hence, it holds at least for a subsequence that for any ϵ>0 there exists N_ϵ>0 such that for any k>N_ϵ,
- ⟨λ^k, Sv^k⟩_* + ⟨λ^k, t_k(ζ^k - ζ^*)⟩_* ≤ϵ + |⟨ D_uθ(u^*), v^k⟩_𝒰| ≤ϵ + D_uθ(u^*)
_𝒰< +∞ .
Therefore, for a fixed ϵ>0 and any k>N_ϵ, we have
r/2λ^k_(X̃, ·_*)' ≤⟨λ^k, rx^k⟩_*
= -⟨λ^k, Sv^k - t_k(ζ^k-ζ^*)⟩_*
< + ∞.
It follows that {λ^k}_k=1^+∞ is bounded in (𝒳̃,·_*)'. There exists a subsequence of {λ^k}_k=1^+∞ which weak-* converges to some element λ̃ in (𝒳̃,·_*)'. According to the W-AKKT (<ref>), (<ref>) is valid and we complete the proof.
Now we give a proof of the existence of the proper Lagrange multiplier under the condition (<ref>) based on the W-AKKT system (<ref>) and the uniform boundedness principle that has an elementary proof in <cit.>.
The W-AKKT system (<ref>) is equivalent to
⟨ D_uθ(u^*), v⟩_𝒰 + lim sup_k→ +∞⟨λ^k, Sv + t(ζ - Su^*)⟩≤ 0 ∀ v∈𝒰, ∀ ζ∈ K, t≥ 0.
It is obvious that the W-AKKT system (<ref>) implies (<ref>).
Now we prove that if (<ref>) holds, so does the W-AKKT system (<ref>).
Taking t=0, we have
⟨ D_uθ(u^*), v⟩_𝒰 + lim sup_k→ +∞⟨λ^k, Sv ⟩≤ 0 ∀ v∈𝒰.
If we change v to -v, we have
⟨ D_uθ(u^*), -v⟩_𝒰 - lim inf_k→ +∞⟨λ^k, Sv ⟩ = ⟨ D_uθ(u^*), -v⟩_𝒰 + lim sup_k→ +∞⟨λ^k, S(-v) ⟩≤ 0.
These two inequalities give
lim sup_k→ +∞⟨λ^k, Sv⟩ - lim inf_k→ +∞⟨λ^k, Sv ⟩≤ 0.
On the other hand, by the definitions of lim sup_k→ +∞ and lim inf_k→ +∞, we have
lim sup_k→ +∞⟨λ^k, Sv⟩≥ lim inf_k→ +∞⟨λ^k, Sv ⟩.
Therefore,
lim sup_k→ +∞⟨λ^k, Sv⟩ = lim inf_k→ +∞⟨λ^k, Sv ⟩
and
⟨ D_uθ(u^*), v⟩_𝒰 + lim_k→ +∞⟨λ^k, Sv ⟩ = 0 ∀ v∈𝒰.
By taking v = 0 and t=1, we have
-lim sup_k→ +∞⟨λ^k, ζ - Su^*⟩≥ 0 ∀ ζ∈ K.
This completes the proof.
Let ζ^* = Su^* and
𝒞(S, ζ^*, K) = {Sv + t(ζ - ζ^*)∈𝒳: ∀ v∈𝒰, ∀ ζ∈ K t≥ 0}.
Note that 𝒞(S, ζ^*, K) is a convex cone in 𝒳.
Assume that there exists a norm ·_* such that (𝒞(S, ζ^*, K), ·_*) is a Banach space and it is continuously embedded into (𝒳, ·). Then there exists λ̃∈ (𝒞(S, ζ^*, K), ·_*)' such that
⟨ D_uθ(u^*), v⟩_𝒰 + ⟨λ̃, Sv + t(ζ - ζ^*)⟩_* ≤ 0 ∀ v∈𝒰, ∀ ζ∈ K t≥ 0,
or equivalently,
{ ⟨ D_uθ(u^*), v⟩_𝒰 + ⟨λ̃, Sv⟩_* = 0 ∀ v∈𝒰;
-⟨λ̃, ζ - Su^*⟩_* ≥ 0 ∀ ζ∈ K,
.
where ⟨·, ·⟩_* is the duality pair of (𝒞(S, ζ^*, K), ·_*)' and (𝒞(S, ζ^*, K), ·_*).
Let {λ^k}_k=1^+∞ be the same as that in the W-AKKT system (<ref>). Since (𝒞(S, ζ^*, K), ·_*) is a Banach space and it is continuously embedded into (𝒳, ·), {λ^k}⊂ (𝒞(S, ζ^*, K), ·_*)'.
Since 𝒞(S, ζ^*, K) is a linear space, for any v∈𝒰, ζ∈ K and t≥ 0, we have
-[Sv + t(ζ - ζ^*)]∈𝒞(S, ζ^*, K),
i.e., there exist w∈𝒰, ζ̅∈ K and s≥ 0 such that
-[Sv + t(ζ - ζ^*)] = Sw + s(ζ - ζ^*).
Therefore, by (<ref>), we have
⟨ D_uθ(u^*), v⟩_𝒰 + lim sup_k→ +∞⟨λ^k, Sv + t(ζ - ζ^*)⟩≤ 0
and
⟨ D_uθ(u^*), w⟩_𝒰 - lim sup_k→ +∞⟨λ^k, Sv + t(ζ - ζ^*)⟩
≤⟨ D_uθ(u^*), w⟩_𝒰 - lim inf_k→ +∞⟨λ^k, Sv + t(ζ - ζ^*)⟩
= ⟨ D_uθ(u^*), w⟩_𝒰 + lim sup_k→ +∞⟨λ^k, -[Sv + t(ζ - ζ^*)]⟩
= ⟨ D_uθ(u^*), w⟩_𝒰 + lim sup_k→ +∞⟨λ^k, Sw + s(ζ̅ - ζ^*)⟩
≤ 0,
which follows
⟨ D_uθ(u^*), w⟩_𝒰≤ lim sup_k→ +∞⟨λ^k, Sv + t(ζ - ζ^*)⟩≤ -⟨ D_uθ(u^*), v⟩_𝒰.
Furthermore, we have
|lim sup_k→ +∞⟨λ^k, Sv + t(ζ - ζ^*)⟩| ≤D_uθ(u^*)_𝒰(v_𝒰 + w_𝒰) < + ∞ ∀ v∈𝒰, ∀ζ∈ K.
It follows that there exists a subsequence of {λ^k}_k=1^+∞ (which will still be denoted by {λ^k}_k=1^+∞), such that
|⟨λ^k, η⟩_*| = |⟨λ^k, η⟩|< +∞ ∀ η∈𝒞(S, ζ^*, K), ∀ k = 1, 2, ….
According to the uniform boundedness principle, {λ^k}_k=1^+∞ is bounded in (𝒞(S, ζ^*, K), ·_*)'. Then there exists a subsequence of {λ^k}_k=1^+∞ (which will still be denoted by {λ^k}_k=1^+∞) converges to λ̃∈ (𝒞(S, ζ^*, K), ·_*)' in the weak-* topology of (𝒞(S, ζ^*, K), ·_*)'. Therefore,
lim_k→ +∞⟨λ^k, η⟩ = ⟨λ̃, η⟩_* ∀ η∈𝒞(S, ζ^*, K).
By (<ref>), we have (<ref>) holds. By taking t = 0 and v = 0, we can get the equivalence of (<ref>) and (<ref>). This completes the proof.
Note that 𝒳 = 𝒞(S, ζ^*, K), which means Robinson’s condition (<ref>) holds, is a special case of the results in Theorem <ref>.
If there exists λ_0 such that
⟨λ_0, Sv + t(ζ - ζ^*)⟩≤ lim sup_k→ +∞⟨λ^k, Sv + t(ζ - ζ^*)⟩ ∀ v∈𝒰, ∀ ζ∈ K t≥ 0,
then we have
⟨ D_uθ(u^*), v⟩_𝒰 + ⟨λ_0, Sv + t(ζ - Su^*)⟩≤ 0, ∀ v∈𝒰, ∀ ζ∈ K t≥ 0.
This implies λ_0 is a proper Lagrange multiplier.
Therefore, every weak accumulation point (if exists) of {λ^k}_k=1^+∞ is a proper Lagrange multiplier, and furthermore, the weak convergence of the multipliers generated by the classical ALM in 𝒳 implies the existence of a proper Lagrange multiplier. However, the existence of proper Lagrange Lagrange multipliers can not guarantee the convergence of the multipliers generated by the classical ALM in 𝒳, see e.g., Example <ref> in this paper.
This will be also useful for developing new constrained qualification conditions to guarantee the existence of the proper Lagrange multiplier.
§ OPTIMAL CONTROL PROBLEMS WITH POINTWISE CONSTRAINTS
In this section we apply our theory to optimal control problems with pointwise state or control constraints. We will discuss the existence of Lagrange multipliers for optimal control problems with pointwise state or control constraints.
§.§ Optimal control problems with pointwise state constraints
We consider the following optimal control problem with pointwise state constraints
min_u∈ L^2(Ω), y∈ Y_ad J(y,u)=1/2‖y-y_d‖_L^2(Ω)^2 + α/2‖u‖_L^2(Ω)^2
subject to
{ -Δ y = u Ω,
y =0 ∂Ω,
.
where u∈ L^2(Ω) is the control variable, y_d∈ L^2(Ω) is the
desired state or observation, α>0 is the regularization parameter and Y_ad is the set of the state constraints. For simplicity, we assume that Ω⊂ℝ^2 is a convex domain in the following arguments. The extension to general cases is straightforward.
We have two different state constraint sets Y_ad,1 and Y_ad,2, where
Y_ad,1 = { y∈ C(Ω̅): a(x)≤ y(x)≤ b(x) Ω},
Y_ad,2 = { y∈ L^2(Ω): a(x)≤ y(x)≤ b(x) Ω}
and a(x), b(x)∈ L^∞(Ω).
We refer the case Y_ad = Y_ad,1 as the continuous setting and the case Y_ad = Y_ad,2 as the L^2 setting. In both cases we can prove the existence and uniqueness of the solution to the optimal control problem.
Let (u̅, y̅)∈ L^2(Ω) × Y_ad,2 be the solution of
min_u∈ L^2(Ω), y∈ Y_ad,2 J(y,u).
By the regularity results of elliptic partial differential equations (cf. <cit.>), we have
y̅∈ C(Ω̅),
which implies
y̅∈ Y_ad,1.
Since Y_ad,1⊂ Y_ad,2, we have
J(y̅,u̅) = min_u∈ L^2(Ω), y∈ Y_ad,2 J(y,u) ≤min_u∈ L^2(Ω), y∈ Y_ad,1 J(y,u)≤ J(y̅,u̅).
It gives
min_u∈ L^2(Ω), y∈ Y_ad,2 J(y,u) = min_u∈ L^2(Ω), y∈ Y_ad,1 J(y,u),
which yields that (u̅, y̅) is also the unique solution of the problem in the continuous setting.
In general, one can not expect the existence of a proper Lagrange multiplier in L^2(Ω), see e.g., Example 7.1 in <cit.>.
In the continuous setting, we can obtain a multiplier in the measure space if Slater's condition is satisfied (cf. <cit.>), which implies Robinson's condition (cf. <cit.>) in this case.
In the L^2 setting, we can not get the existence results of the multipliers from the classical optimization theory. Note that there are not interior points in Y_ad,2 and Slater's condition can not be imposed.
§.§ A general problem in the form of the model problem
Let us consider the following general optimal control problem with state and/or control constraints
min_(u,y)∈𝒰_ad×𝒴_ad J(y,u)=1/2‖y-y_d‖_ℋ^2 + α/2‖u‖_𝒰^2
subject to
y = ℒu,
where ℋ, 𝒰 are two real Hilbert spaces, ℒ is a bounded operator from 𝒰 to 𝒲 (⊂ℋ) and 𝒰_ad, 𝒴_ad are two closed and convex sets in 𝒰 and ℋ, respectively.
Let
θ(u) = J(ℒu,u)
and set
ℒu = z, u = w, Su = [ ℒu; u ] ζ = [ z; w ].
We also denote
K = 𝒴_ad×𝒰_ad.
With the help of these notations, we can reformulate the general optimal control problem in the desired form
min_u∈𝒰θ(u) subject to Su ∈ K,
where K⊂𝒳 = 𝒲×𝒰. Note that in this case θ(u) is strongly convex on 𝒰 and c_0 can be chosen as α.
If we take 𝒲 =ℋ = 𝒰 = L^2(Ω), ℒ = Δ^-1: L^2(Ω) ↦ H_0^1(Ω) ↪ L^2(Ω), 𝒴_ad = Y_ad,2 and 𝒰_ad = 𝒰, this is the problem with pointwise state constraints in the L^2 setting. In this case, S^* = (Δ^-1, I_d) where I_d is the identity operator on L^2(Ω). Since R(Δ^-1) is dense in L^2(Ω), by Theorem <ref>, in general we can not obtain an essential Lagrange multiplier (which is also a proper Lagrange multiplier) in L^2(Ω) for the state constraints in this case. This theoretically clarifies the existence results of Lagrange multipliers in L^2(Ω) for optimal control problems with pointwise state constraints.
Our theory also indicates that in the case with only pointwise control constraints, i.e., 𝒲 = ℋ = 𝒰 = L^2(Ω), ℒ = Δ^-1, 𝒴_ad = 𝒲 and 𝒰_ad = {u∈ L^2(Ω): u_a(x)≤ u(x) ≤ u_b(x) x∈Ω}, a proper Lagrange multiplier in L^2(Ω) always exists for the control constraints, since the identity operator related to the control variable is closed. The same results can be obtained by a direct construction (cf. <cit.>) or the results under Robinson's condition by the classical optimization theory (cf. <cit.>). Our theory can guarantee this naturally.
Let
W = { y∈ H_0^1(Ω): Δ y ∈ L^2(Ω)}.
W is a Hilbert space under the following norm (cf. <cit.>)
‖φ‖_W^2 = ‖φ‖_H^1(Ω)^2 + ‖Δφ‖_L^2(Ω)^2.
Note that W↪ C(Ω̅) and
(Δ)^-1 : L^2(Ω)→ W
is onto, i.e., R((Δ)^-1) = W.
If we take
Y_ad,3 = { y∈ W: a(x)≤ y(x)≤ b(x) Ω},
then we have the same solution (u̅,y̅) of the optimal control problem with pointwise state constraints as in the continuous and L^2 settings. In this case, by Theorem <ref> or Theorem <ref>, we have the existence of the essential Lagrange multiplier λ^* in W'.
§ CONCLUDING REMARKS
In this paper, we systematically developed a new decomposition framework to investigate Lagrange multipliers of the KKT system of constrained optimization problems and variational inequalities in Hilbert spaces. Our new framework is totally different from existing frameworks based on separation theorems. We derived the weak form asymptotic KKT system and introduced the essential Lagrange multiplier. The basic theory of the essential Lagrange multiplier has been established in this paper. The existence theory of the essential Lagrange multiplier shows essential differences in finite and infinite-dimensional cases. Based on it, we also gave necessary and sufficient conditions for the existence and uniqueness of the proper Lagrange multiplier. The results theoretically confirm the necessity of using the asymptotic or approximate KKT system in the infinite-dimensional case as well. We proved the convergence of the classical augmented Lagrangian method without using the information of Lagrange multipliers of the problem, and essentially characterized the convergence properties of the multipliers generated by the classical augmented Lagrangian method. Some sufficient conditions to guarantee the existence of the essential Lagrange multiplier and the proper Lagrange multiplier were also derived.
§ ACKNOWLEDGEMENT
§ THE PROOF OF THEOREM <REF>
We first prove that if a feasible point u^* satisfies the W-AKKT system (<ref>), then u^* is the global minimizer of the model problem (<ref>). According to the convex optimization theory (cf. <cit.>, <cit.> or <cit.>), it suffices to show that
⟨ D_uθ(u^*), v - u^*⟩_𝒰≥ 0 ∀ Sv∈ K.
Since u^* satisfies the W-AKKT system (<ref>), there exists {λ^k}⊂𝒳 such that
⟨ D_uθ(u^*), v - u^*⟩_𝒰 + lim_k→+∞⟨ S^*λ^k, v - u^*⟩ = 0 ∀ v∈𝒰
and
-lim sup_k→+∞⟨λ^k, ζ - Su^*⟩≥ 0 ∀ ζ∈ K.
Therefore,
-lim sup_k→+∞⟨λ^k, Sv - Su^*⟩≥ 0 ∀ Sv ∈ K,
i.e.,
-lim sup_k→+∞⟨ S^*λ^k, v - u^*⟩_𝒰≥ 0 ∀ Sv ∈ K.
It follows
⟨ D_uθ(u^*), v - u^*⟩_𝒰 = - lim_k→+∞⟨ S^*λ^k, v - u^*⟩_𝒰
≥ -lim sup_k→+∞⟨ S^*λ^k, v - u^*⟩_𝒰≥ 0 ∀ Sv ∈ K,
which implies that u^* is the global minimizer of the model problem (<ref>).
For the other direction, we will use the classical ALM for the model problem as an optimization procedure regularization to prove it. The algorithm will be given and the convergence analysis will be carried out without using any assumptions on Lagrange multipliers of the model problem. The convergence analysis borrows some ideas of that in <cit.> and <cit.> for ADMM.
A comprehensive discussion of the classical ALM in Banach spaces based on a different approach can be found in <cit.>, and a further application of these results to develop new constraint qualification conditions in Banach spaces can be found in <cit.>.
We first rewrite the model problem (<ref>) into the following equivalent form
min_u∈𝒰, ζ∈𝒳θ(u) + I_K(ζ) subject to Su = ζ,
where I_K(·) is the indicator function of K, which is defined by
I_K(ζ) = 0 if ζ∈ K, and I_K(ζ) = +∞ if ζ∉K.
Note that the global minimizer of the problem (<ref>) is given by (u^*, ζ^*), where u^* is the global minimizer of the model problem (<ref>) and ζ^* = Su^*.
§.§ The classical augmented Lagrangian method
The augmented Lagrangian functional L_β(u,ζ;λ): (𝒰×𝒳)×𝒳→ℝ∪{+∞} of the model problem (<ref>) based on (<ref>) is given by
L_β(u,ζ;λ) = θ(u) + I_K(ζ) + ⟨λ, Su - ζ⟩ + β/2Su - ζ^2,
where β>0.
The classical augmented Lagrangian method for the model problem (<ref>) based on this augmented Lagrangian functional is given by
(u^k+1, ζ^k+1) = arg min_(u,ζ)∈𝒰×𝒳 L_β(u, ζ;λ^k),
λ^k+1 = λ^k + β(Su^k+1 - ζ^k+1),
where λ^1∈𝒳 is given.
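For illustration only, the iteration above can be sketched in a finite-dimensional toy setting, assuming a quadratic objective θ(u) = (1/2)‖Lu - y_d‖^2 + (α/2)‖u‖^2, S = L, and K a box of pointwise constraints on the "state" Lu; the matrix L, the data y_d, and the parameters α, β, a, b below are arbitrary illustrative choices, and the joint (u, ζ)-subproblem is solved only approximately by a few alternating passes (an implementation choice, not part of the abstract algorithm):

import numpy as np

rng = np.random.default_rng(0)
n = 40
L = rng.standard_normal((n, n)) / np.sqrt(n)    # stand-in for the bounded operator
y_d = rng.standard_normal(n)                    # desired state
alpha, beta = 1e-2, 10.0                        # Tikhonov weight and ALM penalty
a, b = -0.3, 0.3                                # box K for the state z = L u

proj_K = lambda z: np.clip(z, a, b)             # projection onto the box K
A = (1.0 + beta) * L.T @ L + alpha * np.eye(n)  # normal-equation matrix of the u-step

u, zeta, lam = np.zeros(n), np.zeros(n), np.zeros(n)
for k in range(200):
    for _ in range(20):                         # inner passes for the joint subproblem
        u = np.linalg.solve(A, L.T @ (y_d - lam + beta * zeta))
        zeta = proj_K(L @ u + lam / beta)
    r = L @ u - zeta                            # residual S u^{k+1} - zeta^{k+1}
    lam = lam + beta * r                        # multiplier update
    if np.linalg.norm(r) < 1e-10:
        break

print("||S u - zeta|| =", np.linalg.norm(r))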
§.§ Convergence analysis
In this part we will give an elementary proof of the convergence of the algorithm. The proof is composed of several steps.
We first give a characterization of the iterates {(u^k, ζ^k, λ^k)}_k=1^+∞ by the first-order optimality system of the subproblem in the first step.
Since the subproblem in the first step is a convex problem, solving the subproblem is equivalent to solving
D_uθ(u^k+1) + S^*[λ^k + β(Su^k+1 - ζ^k+1)] = 0;
I_K(ζ) - I_K(ζ^k+1) - ⟨λ^k + β(Su^k+1 - ζ^k+1), ζ - ζ^k+1⟩≥ 0 ∀ ζ∈𝒳.
Hence, the iterates of the classical ALM satisfy
D_uθ(u^k+1) + S^*λ^k+1 = 0;
I_K(ζ) - I_K(ζ^k+1) - ⟨λ^k+1, ζ - ζ^k+1⟩≥ 0 ∀ ζ∈𝒳;
β r^k+1 = λ^k+1 - λ^k,
where r^k+1 = Su^k+1 - ζ^k+1. According to (<ref>), it holds
λ^k∈∂ I_K(ζ^k) ∀ k=2, 3, ….
Without loss of generality, we also assume that (<ref>) holds for k=1.
§.§.§ Convergence of {(u^k, ζ^k)}_k=1^+∞
We first prove that {r^k}_k=1^+∞ is Fejér monotone.
It follows from (<ref>) that
β S^*r^k+1
= S^*( λ^k+1 - λ^k)
= S^* λ^k+1 - S^* λ^k
= D_uθ(u^k) - D_uθ(u^k+1)
and
β⟨ r^k+1, ζ^k+1 - ζ^k⟩ = ⟨λ^k+1 - λ^k, ζ^k+1 - ζ^k ⟩
= I_K(ζ^k) - I_K(ζ^k+1) - ⟨λ^k+1 ,ζ^k - ζ^k+1⟩ (≥ 0)
+ I_K(ζ^k+1) - I_K(ζ^k) - ⟨λ^k ,ζ^k+1 - ζ^k⟩ (≥ 0)
≥ 0,
which yield
β⟨ r^k+1 - r^k, r^k+1⟩ = β⟨ (Su^k+1 - ζ^k+1) - (Su^k - ζ^k), r^k+1⟩
= β⟨ S(u^k+1 - u^k), r^k+1⟩ - β⟨ζ^k+1 - ζ^k, r^k+1⟩
= ⟨ u^k+1 - u^k, β S^*r^k+1⟩_𝒰 - β⟨ r^k+1, ζ^k+1 - ζ^k⟩
≤ -⟨ u^k+1 - u^k, D_uθ(u^k+1) - D_uθ(u^k) ⟩_𝒰
≤ - c_0u^k+1 - u^k^2_𝒰,
where we used (<ref>) in the last inequality. Furthermore, by applying the identity
β⟨ r^k+1 - r^k, r^k+1⟩ = β/2(r^k+1^2 -r^k^2 + r^k+1 - r^k^2),
we have
β/2(r^k+1^2 -r^k^2 + r^k+1 - r^k^2) ≤ - c_0u^k+1 - u^k^2_𝒰
or equivalently
r^k+1^2 ≤r^k^2 - r^k+1 - r^k^2 - 2c_0β^-1u^k+1 - u^k^2_𝒰≤r^k^2,
which shows that {r^k}_k=1^+∞ is Fejér monotone.
Secondly, we prove that
∑_k= 1^+∞r^k^2< +∞.
We denote by η = (u,ζ)∈𝒰×𝒳 and recall the Bregman distance induced by the convex functional θ(u) + I_K(ζ) at (D_uθ(u), λ) with λ∈∂ I_K(ζ), which is given by
𝒟_η(η̂,η; λ): = θ(û) - θ(u) - ⟨ D_uθ(u), û - u⟩_𝒰 + I_K(ζ̂) - I_K(ζ) - ⟨λ, ζ̂ - ζ⟩≥ 0,
for any η̂∈𝒰×𝒳.
By the assumption (<ref>) of θ(u), we have
𝒟_η(η̂,η; λ)
≥θ(û) - θ(u) - ⟨ D_uθ(u), û - u⟩_𝒰≥c_0/2û - u_𝒰^2.
Let η̂ = (û, ζ̂) satisfy ζ̂ = Sû∈ K. We will always assume this in the following analysis. It follows from (<ref>), (<ref>) and (<ref>) that
𝒟_η^k(η̂,η^k; λ^k) - 𝒟_η^n(η̂,η^n; λ^n) + 𝒟_η^n(η^k,η^n; λ^n)
= ⟨ D_uθ(u^n) - D_uθ(u^k), û - u^k⟩_𝒰 + ⟨λ^n - λ^k, ζ̂ - ζ^k⟩
= ⟨λ^n - λ^k, Su^k - Sû⟩+ ⟨λ^n - λ^k, Sû - ζ^k⟩
= ⟨λ^n - λ^k, Su^k - ζ^k⟩
= -∑_i = n^k-1⟨λ^i+1 - λ^i, Su^k - ζ^k⟩
= -β∑_i = n^k-1⟨ r^i+1, r^k⟩,
for any k>n.
This gives
𝒟_η^k(η^k+1,η^k; λ^k) + βr^k+1^2 = 𝒟_η^k(η̂,η^k; λ^k) - 𝒟_η^k+1(η̂,η^k+1; λ^k+1) ∀ k = 1, 2, ….
Summing over k from n to m (> n) on both sides, we have
∑_k=n^m( 𝒟_η^k(η^k+1,η^k; λ^k) + βr^k+1^2) = 𝒟_η^n(η̂,η^n; λ^n) - 𝒟_η^m(η̂,η^m; λ^m).
Then by taking n=1, using (<ref>) and letting m→ +∞, we have
∑_k=1^+∞( 𝒟_η^k(η^k+1,η^k; λ^k) + βr^k+1^2) ≤𝒟_η^1(η̂,η^1; λ^1)<+∞,
which yields
∑_k=1^+∞r^k^2<∞
and that
{𝒟_η^k(η̂,η^k; λ^k)}_k=1^+∞
is convergent, by (<ref>).
Moreover, we have
Su^k - ζ^k = r^k → 0 as k→ + ∞.
Now, we prove that {u^k}_k=1^+∞ and {ζ^k}_k=1^+∞ are Cauchy sequences.
For any k>n, it follows from (<ref>) that
|β∑_i = n^k-1⟨ r^i+1, r^k⟩|
≤β/2∑_i = n^k-1(r^i+1^2 + r^k^2)
≤β∑_i = n^kr^i^2.
For any k>n, by (<ref>), (<ref>), (<ref>), (<ref>) and (<ref>), we have
c_0/2u^k - u^n_𝒰^2
≤𝒟_η^n(η^k,η^n; λ^n)
= -β∑_i = n^k-1⟨ r^i+1, r^k⟩ - [𝒟_η^k(η̂,η^k; λ^k) - 𝒟_η^n(η̂,η^n; λ^n)]
≤β∑_i = n^kr^i^2 + |𝒟_η^k(η̂,η^k; λ^k) - 𝒟_η^n(η̂,η^n; λ^n)| → 0 as n, k→ +∞.
Therefore, {u^k}_k=1^+∞ is a Cauchy sequence and {ζ^k}_k=1^+∞ is a Cauchy sequence by (<ref>).
Let u̅∈𝒰 and ζ̅∈𝒳 be the limits of {u^k}_k=1^+∞ and {ζ^k}_k=1^+∞ respectively, i.e.,
lim_k→ + ∞u^k = u̅ and lim_k→ + ∞ζ^k = ζ̅.
Since {ζ^k}⊂ K and K is closed, we have ζ̅∈ K. It follows from (<ref>) that
Su̅ = ζ̅.
§.§.§ Global convergence of the algorithm
Now, we prove that (u̅, ζ̅) is the global minimizer of the problem (<ref>). Since
θ(u) + I_K(ζ)
is lower semicontinuous, we have
θ(u̅) + I_K(ζ̅) ≤ lim inf_k→ +∞[θ(u^k) + I_K(ζ^k)].
Note that λ^k∈∂ I_K(ζ^k). The convexity of the objective functional gives
θ(u̅) + I_K(ζ̅) - [θ(u^k) + I_K(ζ^k)] - [ ⟨λ^k, ζ̅ - ζ^k⟩ + ⟨ D_uθ(u^k), u̅ - u^k ⟩_𝒰]≥ 0,
i.e.,
θ(u^k) + I_K(ζ^k) ≤θ(u̅) + I_K(ζ̅) - [ ⟨λ^k, ζ̅ - ζ^k⟩ + ⟨ D_uθ(u^k), u̅ - u^k ⟩_𝒰].
For any fixed n and k>n, it follows from (<ref>) and (<ref>) with η̂ = (u̅, ζ̅) that
|⟨λ^k, ζ̅ - ζ^k⟩ + ⟨ D_uθ(u^k), u̅ - u^k ⟩_𝒰|
= |⟨λ^k - λ^n, ζ̅ - ζ^k⟩ + ⟨ D_uθ(u^k) - D_uθ(u^n), u̅ - u^k ⟩_𝒰
+ ⟨λ^n, ζ̅ - ζ^k⟩ + ⟨ D_uθ(u^n), u̅ - u^k ⟩_𝒰|
= | β∑_i = n^k-1⟨ r^i+1, r^k⟩ + [⟨λ^n, ζ̅ - ζ^k⟩ + ⟨ D_uθ(u^n), u̅ - u^k ⟩_𝒰]|
≤β∑_i = n^kr^i^2 + |⟨λ^n, ζ̅ - ζ^k⟩ + ⟨ D_uθ(u^n), u̅ - u^k ⟩_𝒰|.
Hence, by (<ref>), (<ref>), (<ref>) and (<ref>), we get
lim sup_k→ +∞[θ(u^k) + I_K(ζ^k)]
≤θ(u̅) + I_K(ζ̅) + lim sup_k→ +∞{β∑_i = n^kr^i^2 + |⟨λ^n, ζ̅ - ζ^k⟩ + ⟨ D_uθ(u^n), u̅ - u^k ⟩_𝒰| }
= θ(u̅) + I_K(ζ̅) + β∑_i = n^+∞r^i^2 + |⟨λ^n, ζ̅ - ζ̅⟩ + ⟨ D_uθ(u^n), u̅ - u̅⟩_𝒰|
= θ(u̅) + I_K(ζ̅) + β∑_i = n^+∞r^i^2.
If we further take n→ + ∞ and use (<ref>), we obtain
lim sup_k→ +∞[θ(u^k) + I_K(ζ^k)] ≤θ(u̅) + I_K(ζ̅).
This together with (<ref>) yields
θ(u̅) + I_K(ζ̅) = lim_k→ +∞[θ(u^k) + I_K(ζ^k)].
For any (û,ζ̂) satisfying Sû = ζ̂, we will prove that
θ(u̅) + I_K(ζ̅)≤θ(û) + I_K(ζ̂).
The proof is similar to that of (<ref>). Since
θ(u^k) + I_K(ζ^k) ≤θ(û) + I_K(ζ̂) - [ ⟨λ^k, ζ̂ - ζ^k⟩ + ⟨ D_uθ(u^k), û - u^k ⟩_𝒰]
and
|⟨λ^k, ζ̂ - ζ^k⟩ + ⟨ D_uθ(u^k), û - u^k ⟩_𝒰|
≤ |β∑_i = n^k-1⟨ r^i+1, r^k⟩| + |⟨λ^n, ζ̂ - ζ^k⟩ + ⟨ D_uθ(u^n), û - u^k ⟩_𝒰|
≤β∑_i = n^kr^i^2 + |⟨λ^n, r^k⟩|
where n is fixed and k>n, we have
θ(u^k) + I_K(ζ^k)
≤θ(û) + I_K(ζ̂) - [ ⟨λ^k, ζ̂ - ζ^k⟩ + ⟨ D_uθ(u^k), û - u^k ⟩_𝒰]
≤θ(û) + I_K(ζ̂) + β∑_i = n^kr^i^2 + |⟨λ^n, r^k⟩|.
By (<ref>) and (<ref>), we get
θ(u̅) + I_K(ζ̅) = lim_k→+∞θ(u^k) + I_K(ζ^k) ≤θ(û) + I_K(ζ̂) + β∑_i = n^+∞r^i^2.
By taking n→ + ∞ and using (<ref>) again, it leads to
θ(u̅) + I_K(ζ̅) ≤θ(û) + I_K(ζ̂),
which implies (u̅,ζ̅) is the global minimizer of the model problem.
§.§ Deriving the weak form asymptotic KKT system
Now we derive the weak form asymptotic KKT system (<ref>). This can be achieved by exploring the limiting case of (<ref>).
We have proved that {u^k}_k=1^+∞ strongly converges to the global minimizer u^* of the model problem (<ref>). Since θ(u) is continuously differentiable, it follows that
lim_k→ +∞ D_uθ(u^k) = D_uθ(u^*)
and
⟨ D_uθ(u^*), v⟩_𝒰 + lim_k→ +∞⟨λ^k, Sv⟩ = 0 ∀ v∈𝒰,
by (<ref>).
In order to get (<ref>), it suffices to show that
-lim sup_k→ + ∞⟨λ^k, ζ - Su^*⟩≥ 0 ∀ ζ∈ K.
Denote by ζ^* = Su^* ∈ K. According to (<ref>), for any ζ∈ K, we have
-⟨λ^k+1 ,ζ - ζ^k+1⟩≥ 0
and
- ⟨λ^k+1 ,ζ - ζ^*⟩
= - ⟨λ^k+1 ,ζ - ζ^k+1⟩ - ⟨λ^k+1 ,ζ^k+1 - ζ^*⟩
≥⟨λ^k+1, ζ^* - ζ^k+1⟩
= ⟨λ^k+1, ζ^* - ζ^k+1⟩ + ⟨ D_uθ(u^k+1), u^* - u^k+1⟩_𝒰 - ⟨ D_uθ(u^k+1), u^* - u^k+1⟩_𝒰.
For any fixed n and k + 1>n, applying (<ref>) to (u^*, ζ^*), we have
|⟨λ^k+1, ζ^* - ζ^k+1⟩ + ⟨ D_uθ(u^k+1), u^* - u^k+1⟩_𝒰|
≤β∑_i = n^k+1r^i^2 + |⟨λ^n, ζ^* - ζ^k+1⟩ + ⟨ D_uθ(u^n), u^* - u^k+1⟩_𝒰|.
Following the same arguments as before (see (<ref>)-(<ref>)), we get
|⟨λ^k+1, ζ^* - ζ^k+1⟩ + ⟨ D_uθ(u^k+1), u^* - u^k+1⟩_𝒰| → 0 as k→ +∞.
On the other hand, the strong convergence of {u^k}_k=1^+∞ and the boundedness of D_uθ(u) around u^* give
lim_k→ +∞⟨ D_uθ(u^k+1), u^* - u^k+1⟩_𝒰 = 0.
Together with (<ref>), (<ref>) and (<ref>), we arrive at
-lim sup_k→ + ∞⟨λ^k, ζ - ζ^*⟩ = lim inf_k→ + ∞[-⟨λ^k, ζ - ζ^*⟩] ≥ 0 ∀ ζ∈ K.
Therefore, by (<ref>), (<ref>) and ζ^* = Su^*, we have the weak form asymptotic KKT system (<ref>) at u^*.
This completes the proof of Theorem <ref>.
|
http://arxiv.org/abs/2306.04050v2
|
20230606224200
|
LLMZip: Lossless Text Compression using Large Language Models
|
[
"Chandra Shekhara Kaushik Valmeekam",
"Krishna Narayanan",
"Dileep Kalathil",
"Jean-Francois Chamberland",
"Srinivas Shakkottai"
] |
cs.IT
|
[
"cs.IT",
"cs.CL",
"cs.LG",
"math.IT"
] |
LLMZip: Lossless Text Compression using Large Language Models
Chandra Shekhara Kaushik Valmeekam,
Krishna Narayanan,
Dileep Kalathil,
Jean-Francois Chamberland,
Srinivas Shakkottai
Department of Electrical and Computer Engineering
Texas A&M University
Email:{vcskaushik9,krn,dileep.kalathil,chmbrlnd,sshakkot}@tamu.edu
July 31, 2023
================================================================================================================================================================================================================================================================================
We provide new estimates of an asymptotic upper bound on the entropy of English using the large language model LLaMA-7B as a predictor for the next token given a window of past tokens.
This estimate is significantly smaller than currently available estimates in <cit.>, <cit.>.
A natural byproduct is an algorithm for lossless compression of English text which combines the prediction from the large language model with a lossless compression scheme.
Preliminary results from limited experiments suggest that our scheme outperforms state-of-the-art text compression schemes such as BSC, ZPAQ, and paq8h.
§ INTRODUCTION
There are close connections between learning, prediction, and compression. The success of ChatGPT has captured the fascination of general public and brought the connection between learning and prediction to the fore. The main advance brought about by large language models such as LLaMA and GPT-4 is that they excel at predicting the next word (token) in a paragraph based on knowing the past several words (tokens).
The connection between prediction and compression was explored as early as 1951 by Shannon in order to estimate the entropy of the English language <cit.>. The idea that a good predictor for the ith value in a time series based on the past values can be effectively converted to a good compression algorithm has played a prominent role in information theory.
Many algorithms for speech, image, and video compression exploit this idea either explicitly or implicitly.
Within the context of lossless compression of English text, the idea of combining a language model with arithmetic coding has emerged
as a very effective paradigm <cit.>.
The performance of such a compression scheme depends substantially on the efficacy of the predictor and
every time there is a major advance in the prediction capability, it behooves us to study its effect on the compression performance.
Indeed, in 2018, the authors of <cit.> used recurrent neural networks (RNN) as the predictor and reported improved results for certain kinds of sources.
Their scheme still did not outperform state-of-the-art algorithms such as BSC and ZPAQ for text compression.
It is therefore natural at this time to study whether we can obtain better compression results and sharper estimates of the entropy of the English language using recent large language models such as LLaMA-7B <cit.>. This is the main goal of this paper.
We show that when the LLaMA-7B large language model is used as the predictor,
the asymptotic upper bound on the entropy is 0.709 bits/character
when estimated using a 1MB section of the text8 dataset.
This is smaller than earlier estimates provided in <cit.> and <cit.>.
The estimate of the upper bound increases to 0.85 bits/character for a 100 KB section of the text from <cit.>, which is still lower than the estimates in <cit.>.
When LLaMA-7B is combined with an Arithmetic coder for compression, we obtain a compression ratio of 0.7101 bits/character on a 1MB section of the text8 dataset and a compression ratio of 0.8426 bits/character on a 100KB section of a text from <cit.>, which are significantly better than the compression ratio obtained using BSC, ZPAQ and pq8h on the full 100MB of the text8 dataset.
§ INTUITIVE EXPLANATION OF THE MAIN IDEA
We will use the following example to describe the main idea, which is nearly identical to that proposed by Shannon in <cit.> for estimating the entropy of English. The main difference is in the use of tokens which represent groups of letters of variable length and in the use of a large language model instead of a human to predict the next token.
Consider a part of the sentence that reads as
My first attempt at writing a book
Our goal is to convert this sentence into a sequence of bits with the least possible length such that the original sequence can be reconstructed from the sequence of bits.
This sentence can first be split into a sequence of words (tokens)
'My', 'first', 'attempt', 'at', 'writing', 'a', 'book'
A language model with memory M (for example, say M=4) predicts the next word in the sentence based on observing the past M words. Specifically, it produces a rank-ordered list of choices for the next word and their probabilities. As shown in Figure <ref>, at epoch 5, the model accepts the first 4 words as input and predicts that the next word in the sentence could be words such as 'reading', 'writing', 'driving', 'cooking' etc. The main idea is to compute the rank of the actual word in our sentence ('writing') in this list and call it R_5.
We will assume that the ranks start at 0 i.e., the most likely word has rank 0, the second most likely word has rank 1, and so on.
In this example, the rank for 'writing' is R_5=1.
Then, we move forward by one word in the sentence, and at epoch 6, we try to predict the 6th word based on words 2 through 5 as shown in Figure <ref>. In this example, given words 2 through 5, the most likely 6th word would indeed be the same word in the sentence that we wish to encode, 'a', and hence, the rank R_6 would be 0.
If the language model is good, the word that we wish to encode would often appear at the top of the list and hence, the rank would be 0. Thus, if we look at the sequence of ranks, it is likely to have many 0s with decreasing probabilities for the rank being 1,2,…. In this example, it is foreseeable that the ranks will be
1,0,0,…
A sequence with many `0's is typically compressible since it has structured patterns. Thus, the key idea is to compress the ranks using a standard lossless compression algorithm such as zip, arithmetic coding, or Huffman coding which converts the ranks to bits. This is shown in Fig. <ref>.
When we wish to reconstruct the sequence, we first decompress and unzip the bits to get the ranks, use the same language model one epoch at a time to produce a rank ordered list of possibilities for the next word, and pick the word in the list at rank R_i during the ith epoch.
We use that as input for determining the next word and so on.
Note that this requires that the same LLM is used at both the encoder and the decoder.
The idea of encoding ranks was discussed to build intuition, but better compression can be achieved by directly using the probabilities produced by the LLM along with arithmetic coding as discussed in Section <ref>.
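As a sketch only, the encode/decode loop described above might look as follows; lm_next_token_probs is a hypothetical stand-in for the language model (not a real LLaMA API), the first M tokens are assumed to be transmitted uncompressed, and zlib plays the role of the generic lossless coder:

import zlib
import numpy as np

M = 4  # model memory (window of past tokens)

def encode(tokens, lm_next_token_probs):
    ranks = []
    for i in range(M, len(tokens)):
        q = np.asarray(lm_next_token_probs(tokens[i - M:i]))   # PMF over the dictionary
        order = np.argsort(-q)                                 # token ids sorted by probability
        ranks.append(int(np.where(order == tokens[i])[0][0]))  # rank 0 = most likely token
    # many ranks are 0, so a generic lossless coder compresses them well
    # (uint16 assumes the dictionary has fewer than 65536 tokens)
    return zlib.compress(np.array(ranks, dtype=np.uint16).tobytes())

def decode(first_tokens, payload, n_tokens, lm_next_token_probs):
    ranks = np.frombuffer(zlib.decompress(payload), dtype=np.uint16)
    tokens = list(first_tokens)                                # the first M tokens sent as-is
    for r in ranks[: n_tokens - M]:
        q = np.asarray(lm_next_token_probs(tokens[-M:]))
        order = np.argsort(-q)
        tokens.append(int(order[r]))                           # pick the token at rank r
    return tokens

Decoding requires running the same model with the same context at every epoch, which is why the identical LLM must be available at both ends.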
§ COMPRESSION USING LLMS
Let 𝐬 denote a sentence from the English language composed of N_c letters, where each letter is assumed to be from the alphabet 𝒮.
We assume that we have a dictionary 𝒳=[1,D] of D tokens.
We first parse 𝐬 into a sequence of N_T tokens denoted by 𝐱 = x_1, x_2, …, x_i-1, x_i, x_i+1, … x_N_T, where x_i ∈𝒳.
There is a one-to-one mapping between 𝐬 and 𝐱 and hence, compressing 𝐬 is the same as compressing 𝐱.
x_i's can be thought of as realizations of the random variable denoted by the upper case letter X_i.
A language model with memory M is a predictor that operates as follows. At epoch i, it accepts tokens x_i-M,x_i-M+1,…,x_i-1 and produces a probability mass function for the next token in the sequence conditioned on the past M tokens given by q_i(x_i):= (X_i = x_i | x_i-1,x_i-2, …, x_i-M), ∀ x_i ∈𝒳. The PMF vector _i:=[q_i(1), q_i(2), …, q_i(D)]^𝖳 is sorted in descending order and let the sorted PMF vector be denoted by _i. Let γ_i:𝒳→𝒳 be a permutation on the integers from 1 to D such that
q̃_i(γ_i(j)) = q_i(j), ∀ j ∈𝒳.
That is, γ_i(j) is the rank of the token j at epoch i.
We define the rank of the input sequence at epoch i as the rank of the token x_i at epoch i, r_i:= γ_i(x_i).
The sequence {r_i}_i=1^N_T is compressed by a lossless compression algorithm (such as zlib) to produce N_b bits which are the final bit representation of the source.
A schematic of this scheme is shown in Fig. <ref>.
In general, the lossless compression algorithm may use the sequence of PMF vectors _i's in addition to the sequence of ranks.
The main metric of interest is the compression ratio ρ defined as
ρ:=N_b/N_cbits/character.
§.§ Entropy bounds
Let 𝐒∈𝒮^∞ be a random process that represents language input.
The nth character in the sequence is denoted by S_n, whereas the string of characters from the beginning to the nth character is expressed as 𝐒_n.
The tokenizer parses the input string and maps it to a sequence of tokens 𝐗 = X_1, X_2, … using a variable-length mapping.
In this sequence, X_i is the ith token.
The number of characters employed to generate X_i depends on the realization of the random process and, as such, we introduce random variable B_i to identify the number of characters contained in the ith token.
Motivated by practical considerations, we only admit tokenizers for which B_i ≥ 1 and B_i is uniformly bounded, with B_i < B < ∞; these are characteristics of commonly used tokenizers.
An immediate consequence of this framework is that, as the number of tokens grows unbounded N_T →∞, the number of characters must also approach infinity N_c →∞.
Formally, consider the tokenizer function T: 𝒮^ℕ→𝒳^ℕ operating on infinite symbol sequences; that is, T (𝐬) = 𝐱 where 𝐬 is an infinite sequence in 𝒮^∞.
For natural number, i ∈ℕ, define m_i : 𝒮^ℕ→ℕ to be the (time) index during which the tokenizer working sequentially on an input sequence 𝐬 outputs its ith token.
Specifically, suppose 𝐬 is given, then
m_i (𝐬) = min_n{length( T (𝐬_n) ) ≥ i } .
We note that, by construction, lim_n →∞length( T (𝐬_n) ) = ∞ and, as such, m_i(·) is well-defined.
It may be pertinent to stress that the tokenizer function applied to truncated sequences is not necessarily injective because multiple finite input series can map to the same output.
This phenomenon is a consequence of the fact that, at any point in time, a tokenizer working sequentially may be waiting for an additional symbol before it can unambiguously select the next output token, i.e., there may be instances where T(𝐬_n) = T(𝐬_n+1).
However, if we restrict the input series to input indices when a new token is produced, then the restricted mapping becomes injective.
That is, suppose T(𝐬) = 𝐱, then the only (finite) series of input symbols in the restricted set for which T (𝐲_n) = 𝐱_i is 𝐬_m_i (𝐬).
Given a fixed sequence 𝐬, we can express the number of characters contained in a token as
b_i = m_i (𝐬) - m_i-1 (𝐬)
with initial condition m_-1 = 0.
Consequently, the number of characters embedded in the first N_T tokens for a random input becomes N_c = ∑_i=1^N_T B_i.
Having established these properties, we turn to the relation between H(𝐒) and H(𝐗).
We make the assumption that
{S_k}_k=1^∞,
{B_i}_i=1^∞, and {X_i}_i=1^∞ are stationary and ergodic processes.
We know from the Shannon-McMillan-Breiman Theorem
<cit.> that
- 1/nlog_2 p_𝐒_n(S_1, …, S_n)
= - 1/nlog_2 p_𝐒_n(𝐒_n)
→ H(𝐒) almost surely .
Let Ω_𝐒 be the collection of ω∈Ω for which this limit holds.
In an analogous manner, the Shannon-McMillan-Breiman theorem implies
- 1/ilog_2 p_𝐗_i(X_1, …, X_i)
= - 1/ilog_2 p_𝐗_i(𝐗_i)
→ H(𝐗) almost surely .
Define Ω_𝐗 as the collection of ω∈Ω for which this limit holds.
Finally, by construction, we have
lim_i →∞m_i (𝐒)/i = 𝔼[ B ] almost surely .
Set Ω_B to be the set of ω∈Ω for which this limit holds.
For any ω∈Ω_𝐒∩Ω_𝐗∩Ω_B, we deduce that
H(𝐒)
= lim_k →∞ - 1/klog_2 p_𝐒_k(𝐒_k (ω))
= lim_i →∞ - 1/l_ilog_2 p_𝐒_l_i(𝐒_l_i (ω))
= lim_i →∞ - 1/l_ilog_2 Pr( 𝐗_i = T (𝐒_l_i (ω)) )
= - 1/𝔼 [B]lim_i →∞1/ilog_2 Pr( 𝐗_i = 𝐱_i )
= H(𝐗)/𝔼 [B] .
The first equality follows from (<ref>).
The second equality is a consequence of the fact that { l_i = m_i (𝐒(ω)) | i ∈ℕ} is an infinite subset of the natural numbers.
Since a subsequence of a convergent sequence must converge to the same limit, we immediately gather that this alternate form approaches H(𝐒).
The third equality is a consequence of the equivalence between the following two events,
{ω∈Ω | 𝐗_i (ω) = 𝐱_i}
= {ω∈Ω | T (𝐒_m_i (𝐒(ω)) ) = 𝐱_i} .
This is characteristic of the tokenization process, and it is a consequence of the correspondence described above.
The last step holds because we are considering an ω∈Ω_B.
The sets Ω_𝐒, Ω_𝐗, and Ω_B each have probability one; this implies that their intersection also has probability one.
Thus, we must conclude that
H(𝐒) = H(𝐗)/𝔼 [B] almost surely .
As a corollary to this result, any upper bound on H(𝐗) produces an upper bound on H(𝐒).
This is the property we wish to exploit.
Then,
from the results of <cit.>, we can see that
Pr{ H(𝐗) ≤lim_N_T →∞ -1/N_T∑_i=1^N_Tlog_2 q_i(X_i) } = 1,
where q_i(·) is the output PMF from the language model.
Therefore, an asymptotic upper bound on the entropy rate H(𝐒) is given by
H(𝐒) ≤lim_N_T →∞ -1/N_T∑_i=1^N_Tlog_2 q_i(X_i)/𝔼[B].
We refer to the expression on the right-hand side of (<ref>) as the asymptotic upper bound on H(𝐒) and denote it by H_ub.
The numerator in (<ref>) represents the average number of bits required to represent the tokens 𝐗_N_T and the denominator in (<ref>) is the average number of characters per token.
Hence, the unit for H(𝐒) is bits/character.
In <cit.>, Cover and King provide 1.3 bits/character as an estimate of the asymptotic upper bound on H(𝐒).
They also provide an extensive list of references and discussion of the literature on estimating the entropy of English prior to 1976.
Very recently, in <cit.>, the performance of several language models has been evaluated on the text8 dataset using a metric called bits per character (bpc).
We believe bpc is the same quantity as the asymptotic upper bound in this paper.
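In practice the bound is estimated from a finite batch of tokens; a minimal sketch of that computation is given below, assuming arrays of the model's base-2 log-probabilities log_2 q_i(x_i) of the observed tokens and of the character counts b_i (the numbers in the example are toy values, not model outputs):

import numpy as np

def entropy_upper_bound(token_log2probs, token_lengths):
    bits_per_token = -np.mean(token_log2probs)   # (1/N_T) * sum_i -log2 q_i(x_i)
    chars_per_token = np.mean(token_lengths)     # empirical estimate of E[B]
    return bits_per_token / chars_per_token      # bits per character

# toy values only
print(entropy_upper_bound(np.log2([0.5, 0.9, 0.25, 0.7]), [3, 5, 2, 4]))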
§.§ Encoding schemes
We consider three schemes for the lossless compression block in Fig. <ref>.
§.§.§ Compressing the ranks using zlib
The first scheme uses the zlib compression algorithm to encode the sequence of ranks. We refer to this scheme as LLaMA+zlib and denote the compression ratio of this scheme by ρ_LLaMA+zlib.
§.§.§ Token-by-Token Compression
The second scheme uses a token-by-token lossless compression scheme which uses a time-varying codebook to encode the token x_i at epoch i by using a prefix-free code assuming q_i to be the true distribution of the tokens.
A natural choice for a prefix-free code is a Huffman code.
Instead, for simplicity, we
use a prefix-free code where the codeword for the token x_i is of length l_i = ⌈log_2 1/q_i(x_i)⌉.
A prefix-free code with this length for x_i is guaranteed to exist since this choice of lengths satisfies the Kraft inequality <cit.>.
The compression ratio for this scheme, denoted by
ρ_LLaMA+TbyT, is given by
ρ_LLaMA+TbyT = ∑_i=1^N_T⌈log_2 1/q_i(x_i)⌉/∑_i=1^N_T b_i.
§.§.§ Arithmetic Coding
The above two schemes are intuitive but their performance can be improved.
A very effective way to combine the output of the LLM with a lossless compression scheme is by using arithmetic coding <cit.>.
Arithmetic coding is well suited to accept time-varying probabilities and we use q_i(x_i) as the probability of token x_i at epoch i in the arithmetic coding scheme.
We refer to the compression ratio of this scheme as
ρ_LLM+AC.
It is known that arithmetic coding is nearly optimal as a compression scheme <cit.>.
Hence, the compression ratio for this scheme is expected to be
ρ_LLM+AC≈∑_i=1^N_Tlog_2 1/q_i(x_i)/∑_i=1^N_T b_i.
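A minimal sketch of how these two ratios would be computed from the per-token quantities (the probabilities q_i(x_i) of the tokens that actually occur and the character counts b_i, both assumed to be available from a run of the model; the values below are toy numbers):

import numpy as np

def rho_token_by_token(q, b):
    # prefix-free code with codeword length ceil(log2 1/q_i(x_i)) for token i
    return np.sum(np.ceil(-np.log2(q))) / np.sum(b)

def rho_arithmetic_estimate(q, b):
    # near-optimal arithmetic coding: about -log2 q_i(x_i) bits for token i
    return np.sum(-np.log2(q)) / np.sum(b)

q_toy, b_toy = [0.5, 0.9, 0.25, 0.7], [3, 5, 2, 4]
print(rho_token_by_token(q_toy, b_toy), rho_arithmetic_estimate(q_toy, b_toy))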
Clearly, ρ_LLaMA+zlib, ρ_LLaMA+TbyT, and ρ_LLM+AC also provide upper bounds on H(𝐒).
H_ub, ρ_LLaMA+zlib, ρ_LLaMA+TbyT, and ρ_LLM+AC
are estimated using a finite number of tokens and the statistical properties of such an estimate should be kept in mind when interpreting the results, especially since the tokens are from a very large alphabet and the language model has a large memory.
§ RESULTS
We used LLaMA-7B <cit.> as the large language model and SentencePiece tokenizer <cit.>. The tokenizer produces a dictionary of size 32000.
Since the language model is trained on this tokenizer, it is imperative that this tokenizer be used in conjunction with the LLM.
It should be noted that the tokenizer and the model are trained on a large corpus of text which includes uppercase letters, special characters etc.
This is in contrast to many studies on estimating the entropy of English, where the input alphabet is restricted to lowercase letters such as in <cit.>.
This makes it difficult to perform an entirely fair comparison between these models.
By using a pretrained LLM on an input consisting only of lowercase letters, we may be unfair to the LLM.
Nevertheless, we used the text8 dataset available from
<http://mattmahoney.net/dc/text8.zip> to benchmark the performance of LLaMA-7B with compression against other state of the art results for text compression.
In <cit.>, it is mentioned that the ZPAQ algorithm obtains the best compression ratio for the text8 dataset with a compression ratio of 1.4 bits/character.
In <cit.>, the paq8h algorithm is shown to provide a compression ratio of 1.2 bits/character.
To the best of our knowledge, this appears to be the best performance reported.
Therefore, we used these two algorithms as baselines.
We did not independently run the ZPAQ or paq8h algorithms and we are quoting results from the existing literature.
The performance of LLaMA-7B is shown in Table <ref> for 10 different batches each with 100,000 tokens. The average performance over these 1M tokens is also shown in the last row in the Table.
It can be seen that using LLaMA-7B with Arithmetic Coding compression results in a compression ratio of 0.7101 bits/character.
This is substantially better than the state-of-the-art results mentioned in <cit.> or <cit.> and is very close to our computed upper bound.
The performance with the LLaMA+zlib algorithm and LLaMA+TbyT compression are also better than that of the known state-of-the-art results.
Table <ref> also shows the upper bound in (<ref>).
It should be noted that the upper bound on the entropy is lower than that computed by Shannon in <cit.>, Cover and King in <cit.> and more recent estimates based on neural networks in <cit.>.
The dependence of the compression performance on the memory of the LLM (M) is shown in Table <ref>.
As expected, the compression performance improves with increasing M. We also observed that the inference time scaled approximately linearly with the input memory length, i.e., batches with a memory of 511 tokens ran about 16 times slower than batches with a memory of 31 tokens.
It is well known that the estimate of compression ratio can show substantial variance depending on the input text and hence, the results should be interpreted with caution. The empirical mean and standard deviation of the entropy bounds and compression ratios computed using 10 batches of 100,000 tokens are shown in Table <ref>.
We were also not able to run LLaMA-7B on the entire 100MB of the text8 dataset. So, the comparison of LLaMA-7B with that of the state-of-the-art corresponds to estimates obtained from different input sizes.
It appears that the LLaMA-7B model was trained on a corpus that included articles from Wikipedia.
Since the text8 dataset is derived from Wikipedia, it is likely that our results for the text8 dataset are optimistic.
Therefore, we also tested the performance of LLaMA-7B on a recently released (May 25, 2023) book <cit.> under Project Gutenberg. We extracted text that corresponds to 100,000 tokens. We applied the same text pre-processing as used in the text8 dataset to clean the text from the book. The resulting text data contained only lowercase letters and space as in the text8 dataset.
Table <ref> shows the compression performance of the LLM on the book.
It can be seen that the compression ratios and the entropy upper bound are slightly higher compared to the performance on the text8 dataset; nevertheless, the asymptotic upper bound on the entropy is lower than that of currently known models given in <cit.>.
Similarly, the compression ratio of LLaMA-7B-based compressors are better than those of known state-of-the-art results for the text8 dataset.
The compression ratio for LLaMA with arithmetic coding is only 0.8426 bits/character and is very close to the estimated upper bound on H().
To provide some insight into the comparative performance of LLaMA based compressors vis-a-vis standard text compressors, we also ran the zlib algorithm directly on the input text.
The resulting compression ratio was 2.8 bits/character (shown in the last column).
It is clear that the performance of LLaMA based compressors is substantially better than this.
The zlib algorithm may not be optimized for compressing small text samples and hence, the compression ratios for the zlib algorithm and for LLaMA+zlib will likely improve on longer texts.
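For reference, a direct-zlib baseline of this kind can be reproduced along the following lines; the input string and file name below are placeholders, not the actual 100,000-token excerpt, so the printed number is only indicative:

import zlib

def zlib_bits_per_character(text: str) -> float:
    compressed = zlib.compress(text.encode("utf-8"), level=9)
    return 8 * len(compressed) / len(text)

# placeholder input; in the actual experiment this would be the preprocessed
# lowercase excerpt, e.g. open("book_excerpt.txt").read()
print(zlib_bits_per_character("anarchism originated as a term of abuse "
                              "first used against early working class radicals"))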
§ ACKNOWLEDGEMENT
We would like to thank Andreas Kirsch for an email discussion about arithmetic coding that motivated us to add our results on arithmetic coding in a timely manner.
§ SCRATCH
Define probability distribution p_𝐗_n by
p_𝐗_n (𝐱_n) = ( 𝐗_n = 𝐱_n ) ,
and extend this notation in a natural way to subsequences and conditional events.
Then, for any fixed sequence 𝐱 and any natural number ℓ≤ n, we have
p_𝐗_n(𝐱_n)
= p_𝐗_1:ℓ, 𝐗_ℓ+1:n(𝐱_1:ℓ, 𝐱_ℓ+1:n)
= p_𝐗_1:ℓ(𝐱_1:ℓ)
p_𝐗_ℓ+1:n | 𝐗_1:ℓ(𝐱_ℓ+1:n | 𝐱_1:ℓ)
≤ p_𝐗_ℓ(𝐱_ℓ) .
Let integer k be fixed.
Consider time instants l = m_k (𝐱) and l = m_k+1 (𝐱) - 1.
Then, we can write
p_𝐗_l(𝐱_l)
= ( 𝐗_l = 𝐱_l ) = ( 𝐒_k = 𝐬_k )
≥( 𝐗_l = 𝐱_l) = p_𝐗_l(𝐱_l) .
It follows, that, for any such sequence 𝐱, we have
- log_2 p_𝐱_l(𝐱_l)
= - log_2 ( 𝐒_k = 𝐬_k )
≤ - log_2 p_𝐱_l(𝐱_l) .
Then, we note that
log_2 p ( 𝐗_1^n )
= log_2 p(X_1, …, X_n)
= log_2 p(X_n | X_n-1, …, X_1) p(X_n-1 | X_n-1, …, X_1) ⋯ p(X_1)
= log_2 p(X_n | X_n-1, …, X_1) + log_2 p(X_n-1 | X_n-2, …, X_1) + ⋯ + log_2 p(X_1)
= log_2 p ( X_n | 𝐗_1^n-1) + log_2 p ( X_n-1 | 𝐗_1^n-2) + ⋯ + log_2 p(X_1)
With this notation, we can write
H ( 𝐗_1^n ) = - 𝔼[ log_2 p ( 𝐗_1^n ) ]
= - 𝔼[ log_2 p ( X_n | 𝐗_1^n-1) ]
- 𝔼[ log_2 p ( X_n-1 | 𝐗_1^n-2) ]
- ⋯ - 𝔼[ log_2 p(X_1) ]
= - 𝔼_𝐗_1^n-1[ 𝔼_X_n | 𝐗_1^n-1[ log_2 p ( X_n | 𝐗_1^n-1) ] ]
- 𝔼_𝐗_1^n-2[ 𝔼_X_n-1 | 𝐗_1^n-2[ log_2 p ( X_n-1 | 𝐗_1^n-2) ] ]
- ⋯ - 𝔼[ log_2 p(X_1) ]
≤ - 𝔼_𝐗_1^n-1[ 𝔼_X_n | 𝐗_1^n-1[ log_2 p̂( X_n | 𝐗_n-512^n-1) ] ]
- 𝔼_𝐗_1^n-2[ 𝔼_X_n-1 | 𝐗_1^n-2[ log_2 p̂( X_n-1 | 𝐗_n-513^n-2) ] ]
- ⋯ - 𝔼[ log_2 p(X_1) ]
H(𝐗) = lim_n →∞1/n H ( 𝐗_1^n )
= lim_n →∞1/n∑_k=1^n H ( X_k | 𝐗_1^k-1)
= lim_n →∞1/n∑_k=1^n H ( p_x_k | x_1:k-1)
≤lim_n →∞1/n∑_k=1^n ( H ( p_x_k | x_1:k-1)
+ D_KL( p_x_k | x_1:k-1, p̂_x_k | x_k-512:k-1) )
= - lim_n →∞1/n∑_k=1^n log_2 p̂_x_k | x_k-512:k-1( X_k | 𝐗_k-512^k-1)
( 1 - 1/nlog_27 S_n ) log_2 27 → H (𝐗)
1/n( n - log_27 S_n ) log_2 27 → H (𝐗)
log_2 27 - 1/nlog_2 S_n → H (𝐗)
H() = lim_N_c →∞H(S_1,S_2,…, S_N_c)/N_c
= lim_N_c →∞ H(S_N_c | S_N_c-1, …, S_1) .
Since there is a one-to-one mapping between and , p(X_1,X_2,…,X_N_T) = p(S_1,S_2,…, S_N_c) and, hence,
H() = lim_N_c →∞H(S_1,S_2,…, S_N_c)/N_c
= lim_N_T →∞N_T/N_cH(X_1,X_2,…, X_N_T)/N_T
= ( lim_N_c →∞N_T/N_c)
( lim_N_T →∞H(X_N_T, X_N_T-1, …, X_1)/N_T)
= ( lim_N_T →∞N_T/∑_i=1^N_T B_i)
( lim_N_T →∞ H(X_N_T | X_N_T-1, …, X_1) )
(a)=H() /𝔼 [B]
= lim_N_T →∞H(X_N_T | X_N_T-1, …, X_1)/∑_iB_i/N_T
= lim_N_T →∞H(X_N_T | X_N_T-1, …, X_1)/lim_N_T →∞∑_iB_i/N_T
= H()/lim_N_T →∞∑_iB_i/N_T.
Equality (a) follows from the fact that f(x) = 1/x is uniformly continuous on [1, ∞) and 1 ≤∑_i=1^N_T B_i/N_T < ∞.
|
http://arxiv.org/abs/2306.08573v1
|
20230614152209
|
Dressing effects in laser-assisted ($e,2e$) process in fast electron-hydrogen atom collisions in an asymmetric coplanar scattering geometry
|
[
"Gabriela Buică"
] |
physics.atom-ph
|
[
"physics.atom-ph",
"physics.plasm-ph"
] |
[email protected]
Institute of Space Science, P.O. Box MG-36, Ro 77125,
Bucharest-Măgurele, Romania
We present the theoretical treatment of laser-assisted (e, 2e) ionizing collisions
in hydrogen for fast electrons, in the framework of the first-order Born approximation at
moderate laser intensities and photon energies beyond the soft-photon approximation.
The interaction of the laser field with the incident, scattered, and ejected electrons is
treated nonperturbatively by using Gordon-Volkov wave functions, while the atomic
dressing is treated by using first-order perturbation theory.
Within this semi-perturbative formalism we obtain a new closed formula for the nonlinear
triple differential cross section (TDCS), which is valid for linear as well as circular
polarizations. New simple analytical expressions of the TDCS are derived in the weak-field
domain and the low-photon-energy limit.
It was found that for non-resonant (e,2e) reactions the analytical formulas
obtained for the atomic matrix element in the low-photon energy limit give a good
agreement, qualitative and quantitative, with the numerical semi-perturbative model
calculations.
We study the influence of the photon energy as well as of the kinetic energy of the ejected
electron on the TDCS, in the asymmetric coplanar geometry, and show that the dressing of
the atomic target strongly influences the (e, 2e) ionization process.
34.80.Qb, 34.50.Rk, 03.65.Nk, 34.80.Dp
Dressing effects in laser-assisted (e,2e) process in fast electron-hydrogen
atom collisions in an asymmetric coplanar scattering geometry
Gabriela Buică
July 31, 2023
===========================================================================================================================================
§ INTRODUCTION
It is well known that the study of the atomic ionization process by collisions with
electrons, the so-called (e, 2e) reaction, reveals information about the
electronic structure of the atomic target and residual ion <cit.>, and is of
interest in collision theory or in other fields such as
plasma physics or astrophysics, which need reliable scattering cross section data
<cit.>.
Camilloni and coworkers <cit.> were the first to use (e,2e) reaction
as a tool for measuring the momentum distribution of the ejected electrons,
in a coplanar symmetric scattering geometry where the outgoing electrons have equal
energies and polar angles, at high incident and outgoing electron energies.
Since then, an increasing number of (e, 2e) experiments have been performed over the
years for different target atoms and for various kinematical configurations, and the
electron momentum spectroscopy (EMS) has been developed to provide
information on the electronic structure of atoms and molecules <cit.>.
The symmetric (e,2e) reaction is the basis of EMS, also known as binary (e,2e)
spectroscopy, and is kinematically characterized by a large momentum transfer of the
projectile electron and a small momentum of the residual ion.
Another useful scattering configuration is the coplanar asymmetric geometry with
fast incident electrons (keV) and ejected electrons of low and moderate energies, where
most of the (e, 2e) reactions occur <cit.>.
In the past few decades the electron-impact ionization of an atom in the presence of a
laser field has become increasingly interesting and it is often referred to as the
laser-assisted (e,2e) collision <cit.>.
Recently, Höhr and coworkers <cit.> performed the first
kinematically-complete experiment for laser-assisted ionization in electron–helium
collisions at high incident electron energy (1 keV) and showed significant differences
of
the triple differential cross section (TDCS) in comparison to the field-free
cross-sections.
Very recently, Hiroi and coworkers <cit.> reported the observation of the
laser-assisted electron-impact ionization of Ar in an ultrashort intense laser field, and
showed that the signal intensity of the laser-assisted process for one-photon absorption
obtained by integrating the signals over the detection angle ranges is about twice as
large as that estimated by previous theoretical calculations in
which the atomic dressing by the laser field is neglected <cit.>.
A large number of papers have been published so far and several theoretical
approaches have been proposed, involving ejected electrons of low energies that are
studied under the combined influence of the laser field and the Coulomb field of the
residual ion.
The early theoretical works on laser-assisted (e, 2e) scattering have neglected
the dressing of the atomic target by the laser field or have used the closure
approximation for laser-atom interaction.
First, Jain and Tzoar introduced the Coulomb-Volkov wave functions <cit.>, which
take into account the influence of the Coulomb field of the nucleus on the final
electron state.
Since then, the effect of the Coulomb interaction in the laser-assisted (e, 2e)
collisions on hydrogen atom was studied in several papers by employing different types of
final state wave functions like the Coulomb-Volkov or Coulomb corrected Gordon-Volkov
wave functions. Banerji and Mittleman <cit.> calculated TDCS for ionization
of hydrogen by electron impact, at low photon energies, in
which the slow ejected electron was described by a modified Coulomb wave function, and
the laser-electrons interactions were included in the low-frequency approximation.
Cavaliere and coworkers <cit.> studied laser-assisted (e, 2e)
collisions in hydrogen at low photon energies, high incident electron energies, and
ejected electrons with moderate as well as small energies, in the first-order
Born approximation, with the incident and scattered electrons described by
the Gordon-Volkov wave functions <cit.>, while the ejected electron is
represented by a modified Coulomb wave function.
Later on, the dressing of the atomic target by the laser field has been included in the
first-order time-dependent perturbation theory (TDPT), and therefore the influence of the
laser parameters such as intensity, polarization, and photon energy has attracted a lot
of interest from the theoretical point of view.
Joachain and coworkers <cit.> extended the semi-perturbative theory
of Byron and Joachain <cit.>, and showed the strong influence of a laser field on
the dynamics of laser-assisted (e,2e) collisions in hydrogen, for fast incident and
scattered electrons and slow ejected electrons, in the Ehrhardt
asymmetric coplanar geometry <cit.>.
For (e,2e) collisions in hydrogen with slow ejected electrons, Martin and coworkers
<cit.> analyzed the influence of the laser parameters: photon energy,
laser intensity, and polarization direction on the angular distribution of the
ejected electrons.
The influence of laser polarization has also been discussed by Taïeb and coworkers
<cit.>, who developed dressed atomic wave functions on a basis of Sturmian
functions, which allowed them to accurately take into account the contribution of the
continuum spectrum to the dressing of the atomic states <cit.>.
Very recently, Makhoute and coworkers <cit.> presented their numerical results
obtained for (e, 2e) collision in atomic hydrogen in the symmetric and asymmetric
coplanar scattering geometries, at large photon energies. For the direct scattering
channel the calculation of the specific radial amplitudes was performed by expanding the
atomic wave functions in a Sturmian basis, whereas the closure approximation was employed
for the exchange channel.
As mentioned before most of these previous theoretical works were focused on scattering
geometries involving slow ejected electrons, and only recently it was shown for ejected
electrons of high energies that the laser field strongly modifies the (e,2e)
collisions.
New theoretical studies for laser-assisted EMS at high impact energy and large
momentum transfer were published and it was found that the atomic dressing, calculated
in the closure- and low-frequency approximations, substantially influences the
laser-assisted TDCSs at low <cit.> and large photon energies
<cit.>.
The purpose of the present paper is to study the laser-assisted (e, 2e) reactions in
hydrogen, in which the target atom is ionized in collision with an electron beam
in the presence of a laser field, for fast incident and outgoing electrons, in an
asymmetric coplanar scattering geometry, beyond the soft-photon approximation.
We present a new method to derive the relevant atomic transition amplitude which takes
into account the dressing of the target by the laser field.
The laser field alone cannot significantly ionize the hydrogen atom since the photon
energy is considered below the ionization threshold and the laser intensity is not high
enough to allow ionization through a multiphoton process.
We assume fast scattered electrons of sufficiently high velocity, such that we neglect
their interaction with the Coulomb field of the remaining ion.
Similar to the approach used in the Keldysh-Faisal-Reiss approximation
<cit.>, the
influence of the remaining ion on the final state of the fast ejected electron is
neglected, since the residual Coulomb field is weak compared to the laser field strength.
We follow the approach of Ref. <cit.> in which the semi-perturbative theory
<cit.> was generalized to laser-assisted fast (e, 2e) collisions in atomic hydrogen.
In order to simplify the calculations we introduce several assumptions:
(a) It is reasonable to employ a first-order Born treatment of the projectile-atom
interaction, since we consider fast nonrelativistic collisions such that the velocities
of the projectile and outgoing electrons are much larger than the atomic unit
<cit.>.
(b) The non-relativistic Gordon-Volkov solutions are used for the incident and outgoing
electrons to describe their interaction with the laser field.
(c) The laser field intensity is considered moderate, but much weaker than the atomic
unit (3.51 × 10^16 W/cm^2), in order to avoid direct one- and multiphoton
ionization.
In contrast to other theoretical works we take into account the atomic dressing effects
in the first-order TDPT in the laser field, going beyond the soft-photon approximation.
The photon energy is considered below the ionization threshold of the hydrogen atom,
and one-photon resonance transitions are allowed between the ground and excited states.
(d) Since the scattered and ejected electrons have high energies of comparable order
of magnitude, our semi-perturbative formalism takes into account the exchange effects
in the first-order Born approximation.
The manuscript is organized as follows.
In Sec. <ref> we present the theoretical method used in laser-assisted ionization of
atomic hydrogen by electron impact, and derive new analytical formulas for the
ionization transition amplitudes and TDCSs by electron impact.
In the low-photon energy limit we provide simple analytic formulas of TDCSs, in a closed
form, for the laser-assisted (e,2e) ionization process which include the atomic
dressing effects.
Numerical results are presented in Sec. <ref>, where the TDCSs for laser-assisted
electron impact ionization of hydrogen are analyzed as a function of the scattering angle
of the ejected electron and as a function of the photon energy.
We have studied the modifications of the angular distributions of the ejected electrons
due to the external laser field at different ejected electron energies and photon
energies.
Finally, summary and conclusions are given in Sec. <ref>.
Atomic units (a.u.) are employed throughout this manuscript, unless otherwise specified.
§ SEMI-PERTURBATIVE THEORY
The laser-assisted scattering of electrons by hydrogen atoms in a laser field in which
the atomic target is ionized, the so-called laser-assisted (e, 2e) reaction, can be
symbolically represented as:
e^-(E_i,𝐤_i) + H(1s) +N_i γ (ω, ε)
→
e^-(E_f,𝐤_f) + e^-(E_e,𝐤_e) + H^+ + N_f γ (ω,
ε),
where E_i and E_f, and 𝐤_i (θ_i,φ_i) and
𝐤_f (θ_f,φ_f) represent the kinetic energy and the momentum
vector of the incident and scattered projectile electrons, respectively, while
E_e and 𝐤_e (θ_e,φ_e) are the kinetic energy and the
momentum vector of the ejected electron, as plotted in Fig. <ref>.
Here γ (ω, ε) denotes a photon with the energy ω and
the unit polarization vector ε, and N = N_i-N_f is the net
number of exchanged photons between the projectile-atom scattering system and the laser
field.
The laser field is treated classically, and within the dipole approximation is described
as a monochromatic electric field,
E (t) =
(i/2) E_0 e^-i ω t ε + c.c.,
where E_0 represents the amplitude of the electric field.
The magnetic vector potential, A(t), is simply calculated from
E (t) = -∂_t A(t), as
A(t) = ( E_0/ω)
[ cosω t cos (ξ/2) 𝐞_j
+ sinω t sin (ξ/2) 𝐞_l],
where
ε = cos(ξ/2) 𝐞_j + i sin(ξ/2) 𝐞_l
is the polarization vector of the laser beam, with 𝐞_j and 𝐞_l
two different unit vectors along different orthogonal directions.
ξ represents the degree of ellipticity of the laser field and varies in the
range -π/2 ≤ξ≤π/2.
The value ξ=0 corresponds to a linearly polarized (LP) laser field, while
ξ=π/2 corresponds to a left-hand circularly polarized (CP) laser field.
§.§ Laser-dressed electronic and atomic wave functions
As mentioned before, we consider that the external laser field has a dominant influence
and neglect the Coulomb interaction between fast outgoing electrons and residual ion in
the scattered and ejected electron wave functions <cit.>.
At sufficiently high projectile kinetic energies, it is well known that the first-order
Born approximation in the scattering potential can be used to describe the electron impact
ionization process <cit.>.
We assume fast incident and outgoing electrons with kinetic energies much larger than the
energy of a bound electron in the first Bohr orbit <cit.>,
since for the field-free (e,2e) reaction in e-H collisions it is well known that the
plane wave approximations agree well with experiment at kinetic energies above 200 eV
<cit.>.
Thus, in the non-relativistic regime, as long as both E_f ≫ 1 a.u. and E_e ≫
1 a.u., we describe the fast scattered and ejected electrons by Gordon-Volkov wave
functions <cit.>.
We should mention that the use of a Coulomb-Volkov wave function provides a more accurate
treatment at small impact kinetic energies, where the effect of the proton's potential on
the incoming and outgoing electrons is important <cit.>.
In order to avoid the direct one- and multiphoton ionization processes, we consider that
the electric field amplitude is weak with respect to the atomic unit of electric
field strengths, E_0≪ 5.1 × 10^9 V/cm, i.e. the strength of the
laser field is much lower than the Coulomb field strength experienced by an electron in
the first Bohr orbit.
Therefore, we describe in a nonperturbative way the initial and final states of the
projectile electron, as well the final state of the ejected electron interacting with a
laser field by non-relativistic Gordon-Volkov wave functions <cit.>,
expressed in the velocity gauge as
χ_𝐤^V (𝐫,t)=
(2π )^-3/2exp[ i𝐤·𝐫
-i 𝐤·α(t) -i E_k t - i/2∫^t dt' A^2(t')
]
,
where 𝐫 and 𝐤 represent the position and momentum vectors, and
E_k = k^2/2 is the kinetic energy of the electron.
α(t) = ∫^t dt' A(t') describes the classical oscillation motion of
a free electron in the electric field defined by Eq. (<ref>), and by using Eq.
(<ref>) we obtain
α(t) = α_0[ 𝐞_jsinω t cos (ξ/2)
+ 𝐞_lcosω t sin (ξ/2)
],
where α_0 = √(I)/ ω^2 is the amplitude of oscillation, and I= E_0^2 denotes the laser intensity.
Obviously, as noticed from Eq. (<ref>), at moderate field strengths the largest
effect of the laser field on the free-electron state is determined by a dimensionless
parameter k α_0, that depends on the electron and photon energies, and laser
intensity. For example, a laser intensity of 1 TW/cm^2, a photon energy of
3.1 eV, and an electron kinetic energy of 200 eV result in a value of k α_0
≃ 1.58, while the ponderomotive energy acquired by an electron in the electric
field, U_p= I/ 4 ω^2, is about 0.015 eV and can therefore be safely
neglected compared to the photon and unbound-electron energies employed in the present
paper.
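These two numbers are easy to reproduce; a minimal check, assuming only the standard atomic-unit conversions (1 a.u. of intensity ≈ 3.51 × 10^16 W/cm^2 and 1 hartree ≈ 27.211 eV), is:

import math

I_Wcm2, photon_eV, Ekin_eV = 1e12, 3.1, 200.0
E0     = math.sqrt(I_Wcm2 / 3.51e16)      # field amplitude in a.u.
omega  = photon_eV / 27.211               # photon energy in a.u.
k      = math.sqrt(2 * Ekin_eV / 27.211)  # electron momentum in a.u.
alpha0 = E0 / omega**2                    # quiver amplitude sqrt(I)/omega^2
U_p    = E0**2 / (4 * omega**2)           # ponderomotive energy in a.u.

print("k * alpha0 =", round(k * alpha0, 2))    # ~ 1.58
print("U_p =", round(U_p * 27.211, 3), "eV")   # ~ 0.015 eV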
The interaction of the hydrogen atom, initially in its ground state, with a laser
field at moderate field strengths is considered within the first-order TDPT.
An approximate solution for the wave function of an electron bound to a Coulomb potential
in the presence of an electric field, also known as the dressed wave function, is
written as
Ψ_1s( 𝐫_1, t) =
[
ψ_1s^(0) (𝐫_1,t) + ψ_1s^(1)(𝐫_1,t)
]
exp[-i E_1t -i/2∫^t dt' A^2(t')]
,
where 𝐫_1 is the position vector of the bound electron, ψ_1s^(0)
is the unperturbed wave function of the hydrogen atom ground state, and ψ_1s^(1)
represents the first-order perturbative correction to the atomic wave function due to the
external laser field.
We employ the following expression of the first-order correction in the velocity gauge,
ψ_1s^(1), as described by Florescu and Marian in Ref. <cit.>,
ψ_1s^(1)(𝐫_1,t) =-
α_0 ω/2[
ε·𝐰_ 100(E^+_1;𝐫_1 ) e^-iω t +
ε^* ·𝐰_ 100(E^-_1;𝐫_1 ) e^iω t]
,
with the linear-response vector, w_100, defined by
w_100(E_1^± ;𝐫_1)
= -G_C(E_1^±) P ψ_1s(𝐫_1),
where P denotes the momentum operator of the bound electron, and G_C is the
Coulomb Green's function.
For the hydrogen atom in its ground state the linear-response vector was expressed in
Ref. <cit.> as
w_100(E_1^± ;𝐫_1)
= i (4 π)^-1/2 B_101 (E_1^±;r_1) r̂_1
,
where r̂_1=r_1/r_1, and the energies E_1^+ and
E_1^- take the following values
E_1^+ = E_1 + ω +i 0, E_1^- = E_1 - ω,
with E_1 =-13.6 eV representing the energy of the ground state.
The radial function B_101 in Eq. (<ref>) was evaluated <cit.> using
the Schwinger's integral representation of the Coulomb Green's function in momentum space
including both bound and continuum eigenstates, and can be expressed in terms of Humbert
function, Φ_1, as
B_101 (τ;r_1)=2 τ/2-τ( 2/1+τ)^2+τr_1 e^-r_1/τΦ_1(2-τ,-1-τ,3-τ, ξ_1, η_1),
where the parameter τ takes two values τ^±= 1/√(-2 E_1^±), and
the variables of the Humbert function are ξ_1 = (1-τ)/2 and η_1
=(1-τ) r_1/τ.
§.§ The nonlinear scattering matrix
We employ a semi-perturbative approach of the scattering process which is similar to that
developed by Byron and Joachain <cit.> for free-free transitions, in which the
second-order Born correction is negligible compared to the laser-dressing effects.
The evaluation of the scattering amplitude is very challenging due to the
complex three-body interaction: projectile electron, bound electron, and laser field.
However, since we assume that both scattered and ejected electrons have large kinetic
energies the calculation simplifies, and we can derive a closed form expression for the
TDCS.
Thus, the initial state of the scattering system is calculated as the product of the
initial states of the fast incident electron and atomic target dressed by the laser field,
χ_𝐤_i^V( 𝐫_0, t) and Ψ_1s(𝐫_1,t), while
the final state is calculated as the product of the final states of the fast scattered
and ejected electrons, which are approximated as Gordon-Volkov wave functions.
Our treatment differs from that of Taïeb and coworkers <cit.> in the fact
that we dress the fast projectile and ejected electrons to all order in the laser field
and we dress the atomic target by using an atomic wave function corrected to the first
order in the laser field <cit.>.
As mentioned before, we focus our study at moderate laser intensities (I ≤
1 TW/cm^2) and fast projectile electrons (E_i, E_f≥ 1 keV) such that
the interaction between the projectile electron and hydrogen atom is well treated within
the first-order Born approximation in the static scattering potential
V_d(r_0, r_1)=-1/r_0+ 1/|r_1-r_0| for the direct channel, and
V_ex(r_0, r_1)=-1/r_1+ 1/|r_1-r_0| for the exchange channel.
In order to describe the scattering process (<ref>) we employ the direct and
exchange scattering matrix elements <cit.>, which are calculated at high
kinetic energies of the projectile and ejected electrons as
S_fi,d^B1 = -i ∫_-∞^+∞ dt ⟨χ_𝐤_f^V( 𝐫_0,t)
χ_𝐤_e^V(𝐫_1,t) |V_d(r_0,r_1)|
χ_𝐤_i^V( 𝐫_0, t) Ψ_1s(𝐫_1,t)
⟩,
S_fi,ex^B1 =
-i ∫_-∞^+∞ dt ⟨χ_𝐤_f^V( 𝐫_1,t)
χ_𝐤_e^V(𝐫_0,t) |V_ex(r_0,r_1)|
χ_𝐤_i^V( 𝐫_0, t) Ψ_1s(𝐫_1,t)
⟩
,
where χ_𝐤_i(f)^V and χ_𝐤_e^V, given by Eq. (<ref>),
represent the Gordon-Volkov wave functions of the projectile and emitted electrons
embedded in the laser field, whereas Ψ_1s, given by Eq. (<ref>),
represents the wave function of the bound electron interacting with the laser field.
By using the Jacobi-Anger identity <cit.>,
e^-i x sinω t≡∑_N=-∞^+∞ J_N(x) e^-i N ω t,
we expand the oscillating part of the Gordon-Volkov wave functions occurring in the
scattering matrix elements, Eqs. (<ref>) and (<ref>), in terms of the
ordinary Bessel functions of the first kind, J_N, as
exp[-i 𝐪·α(t)] =
∑_N=-∞^+∞ J_N( R_q) e^-i N ω t +i N ϕ_q,
where the argument of the Bessel function is defined by
R_q= α_0|ε·𝐪 |,
and ϕ_q represents the dynamical phase which is calculated as
e^ i ϕ_q = ε·𝐪/|ε·𝐪|,
where 𝐪 = 𝐤_𝐢 - 𝐤_𝐟 - 𝐤_𝐞 denotes the recoil
momentum vector of the ionized target, H^+.
Clearly, for a CP laser field a change of helicity, i.e. ε→ε^*, leads to a change of the sign of the dynamical phase, ϕ_q→
-ϕ_q in the TDCS, while for a LP laser field e^ i ϕ_q = ± 1,
and ϕ_q=n π with n an integer.
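As a quick numerical sanity check of the expansion used here, the Jacobi-Anger identity quoted above can be verified directly with SciPy; the truncation at |N| ≤ 40 and the values of x and ω below are arbitrary choices:

import numpy as np
from scipy.special import jv

x, omega = 3.7, 1.0                      # arbitrary test values
t = np.linspace(0.0, 2.0 * np.pi, 200)
lhs = np.exp(-1j * x * np.sin(omega * t))
rhs = sum(jv(N, x) * np.exp(-1j * N * omega * t) for N in range(-40, 41))
print(np.max(np.abs(lhs - rhs)))         # ~ 1e-15, i.e. the two sides agree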
For the direct channel, by replacing Eqs. (<ref>), (<ref>), and (<ref>) into
Eq. (<ref>), we obtain the scattering matrix for electron-hydrogen collisions in a
laser field, after performing the integration with respect to time,
S_fi,d^B1 =- 2π i ∑_N=N_min^+∞δ( E_f +E_e - E_i - E_1 - N ω) T_N,d ,
where the Dirac function, δ, assures the energy conservation which implies that
the kinetic energy of the scattered electron is determined by the relation
E_f = E_i + E_1 - E_e + N ω.
Here the kinetic energy of the residual ion, E_q =q^2/2m_p, has been
neglected in comparison to any of the electron kinetic energies, E_j (j=i, f, and
e), since the mass of the residual ion (proton) is much larger than the electron mass.
The energy spectrum of the scattered electron consists of an elastic line, N=0,
and a number of sidebands corresponding to the positive and negative values of N.
Obviously, for a given value of the ejected-electron energy, E_e, the net number of
exchanged photons is bounded from below and cannot be smaller than the minimal value
N_min, the smallest integer satisfying N_min ≥ ( E_e -E_i - E_1 )/ω.
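As a rough numerical illustration, using the kinematics adopted later in the numerical examples (E_i = 2 keV, E_e = 200 eV, ω = 1.55 eV) together with E_1 = -13.6 eV, the bound reads

N_min = ⌈ ( E_e - E_i - E_1 )/ω⌉ = ⌈ (200 - 2000 + 13.6)/1.55 ⌉ = ⌈ -1152.5 ⌉ = -1152 ,

so that at most about 1.15 × 10^3 net photons can be emitted to the field in this configuration.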
The total nonlinear transition amplitude, T_N,d, for the laser-assisted (e,2e)
ionization process in the direct channel can be split as a sum of two terms
T_N,d = T^(0)_N,d + T^(1)_N,d ,
where T^(0)_N,d and T^(1)_N,d represent the electronic and atomic transition
amplitudes, respectively. The first term on the right-hand side of the total transition
amplitude, Eq. (<ref>), T^(0)_N,d, is the transition amplitude due to the
projectile-electron contribution, in which the atomic dressing terms are neglected,
T^(0)_N,d = 1/2^5/2π^7/2 e^ i N ϕ_q/Δ^2
J_N( R_q) ∫ d𝐫_1
e^-i 𝐤_e ·𝐫_1
( e^i Δ·𝐫_1 - 1)
ψ_1s (𝐫_1)
,
where the integration over the projectile coordinate, r_0, was performed using
the Bethe integral, and Δ= 𝐤_𝐢 - 𝐤_𝐟 is the vector of
momentum transfer from the incident to the scattered electron.
After performing the radial integration with respect to r_1 in Eq.
(<ref>), the electronic transition amplitude can be simply expressed as
T^(0)_N,d =- 1/(2π)^2
J_N( R_q) f_ion^B_1(Δ,q,k_e) e^ i N ϕ_q,
where
f_ion^B_1(Δ,q,k_e ) = -2^5/2/πΔ^2[1/(q^2+1)^2 - 1/(k_e^2+1)^2],
is the direct scattering amplitude in the first-order plane-wave Born approximation for
ionization of hydrogen atom by electron impact in the absence of the laser field
<cit.>.
In the electronic transition amplitude, Eq. (<ref>),
the interaction between the laser field and the projectile and ejected electrons is
contained in the argument of the Bessel function
R_q=(√(I)/ω^2)|ε·𝐪 |,
and phase ϕ_q, being decoupled from the kinematic term.
This feature is a characteristic of employing Gordon-Volkov wave functions for fast
electrons and moderate laser intensities <cit.>.
The field-free electronic scattering amplitude f_ion^B_1
contains a factor, -2/ Δ^2, which is related to the first-order Born amplitude
corresponding to scattering by the Coulomb potential -1/r_0, while the
two terms in the squared brackets of Eq. (<ref>) are related to the
momentum transfer to the residual ion, q, and the momentum of the ejected electron,
k_e, respectively.
For the field-free (e, 2e) collisions in the plane-wave Born approximation, the first
term in the right-side-hand of Eq. (<ref>) gives rise to the so-called binary
encounter peak <cit.>, which occurs at very low residual ion momentum q ≃
0.
The second term on the right-hand side of Eq. (<ref>), T^(1)_N, represents
the first-order atomic transition amplitude and corresponds to processes in which the
hydrogen atom absorbs or emits one photon and is subsequently ionized by the projectile
electron impact.
T^(1)_N occurs due to modification of the atomic state by the laser field, the
so-called atomic dressing, which is described by the first-order radiative
correction, ψ^(1)_1s(𝐫_1,t), in Eq. (<ref>).
After some straightforward algebra, integrating over the projectile coordinate,
r_0, the direct first-order atomic transition amplitude can be written as
T^(1)_N,d = - α_0ω/2[ J_N-1( R_q) M_at^(1) ( ω ) e^i (N-1)ϕ_q+
J_N+1( R_q) M_at^(1) ( -ω ) e^ i(N+1) ϕ_q] ,
where M_at^(1)(ω) denotes the specific first-order atomic transition
matrix element related to one-photon absorption,
M_at^(1)(ω) =
1/2^5/2π^7/2Δ^2∫ d 𝐫_1
e^-i 𝐤_e ·𝐫_1 (
e^ i Δ·𝐫_1 - 1)
ε·𝐰_ 100(E^+_1; 𝐫_1),
whereas the transition matrix element M_at^(1) ( -ω ) is related to
one-photon emission
M_at^(1)(-ω) =
1/2^5/2π^7/2Δ^2∫ d 𝐫_1
e^-i 𝐤_e ·𝐫_1 (
e^ i Δ·𝐫_1 - 1) ε^* ·𝐰_ 100(E^-_1; 𝐫_1)
,
where the energies E_1^± are given in Eq. (<ref>).
Obviously, in Eq. (<ref>) only one photon is exchanged (emitted or absorbed)
between the laser field and the bound electron, while the remaining N + 1 or N -
1 photons are exchanged between the laser field and the projectile electron.
By performing the radial integral over 𝐫_1 in Eq. (<ref>) we derive the
first-order atomic matrix element for one-photon absorption as,
M_at^(1)(ω) =
- 1/2^3/2π^3Δ^2[
(ε·𝐪̂) J_101(ω, q ) -
(ε·𝐤̂_e ) J_101(ω, -k_e )
] ,
while the replacements ω→ -ω and
ε→ε^* are made in Eq. (<ref>) to obtain the first-order
atomic transition matrix element, M_at^(1) ( -ω), for one-photon
emission.
The expression of the atomic radial integral J_101, is given by
J_101(±ω, p ) =
∫_0^∞ dr_1 r_1^2 j_1(p r_1) B_101 ( E_1^±; r_1 )
,
with J_101( ω, -p ) = - J_101( ω, p ), where p=q or k_e.
After performing some algebra in Eq. (<ref>), by using the expansion of the
spherical Bessel function, j_1, an analytical form of the radial integral is
obtained in terms of two Appell's hypergeometric function, F_1, as
J_101(ω,p) = 2^6τ/p (2-τ)(1+τ)^4
Re[ a^3 F_1(b,1,3,b+1,x,y) - ia^2/2p F_1(b,2,2,b+1,x,y) ] ,
in which a=τ/(1+i p τ ), b=2-τ, and the variables of the Appell's
hypergeometric function are
x =τ-1/τ+1,
y =(1-τ)(1-i p τ)/(1+τ)(1+i p τ) ,
where the parameter τ depends on the photon energy, ω, and it takes two
values, τ^- =1/ √(-2 E_1^+ ) and τ^+ =1/ √(-2 E_1^- ),
corresponding to the two energies E_1^+ and E_1^- defined in Eq. (<ref>).
The first-order atomic matrix element, Eq. (<ref>), explicitly contains the scalar
products ε·𝐪̂ and ε·𝐤̂_e and therefore
depends on the scattering geometry; it is written in a closed form that allows us to
analyze the dependence on the laser field polarization.
The last term in the right-hand side of the electronic scattering amplitude and atomic
matrix element, Eqs. (<ref>) and (<ref>), occurs due to the non-orthogonality
of the Gordon-Volkov wave function of the ejected electron and the initial ground-state
wave function of the hydrogen atom.
The structure of Eq. (<ref>) is also similar to other processes, with the vectors
𝐪 and 𝐤_e replaced by vectors which are specific to each
particular process, such as elastic laser-assisted scattering of electrons by
hydrogen atoms <cit.>, bremsstrahlung cross sections in the
electron-hydrogen atom collisions <cit.>, or laser-assisted electron-impact
excitation of hydrogen atoms <cit.>.
§.§ The nonlinear scattering matrix for exchange scattering
Our formalism does not neglect the exchange effects between the scattered and ejected
electrons in both the electronic and atomic terms, since fast incident and outgoing
electrons are involved in the calculation and, as in the EMS experiments, their
kinetic energies can be of comparable magnitude.
In the first-order Born approximation in the exchange potential, V_ex, we obtain the
exchange scattering matrix for the laser-assisted (e,2e) reactions, after performing
the integration with respect to time in Eq. (<ref>),
S_fi,ex^B1 =- 2π i ∑_N=N_min^+∞δ( E_f +E_e - E_i - E_1 - N ω) T_N,ex ,
where T_N,ex =T^(0)_N,ex +T^(1)_N,ex.
The electronic transition amplitude for the exchange scattering, T^(0)_N,ex, in
which the atomic dressing contribution is neglected, can be expressed as
T^(0)_N,ex =- 1/(2π)^2
J_N( R_q) g_ion,ex^B_1(Δ_e,q) e^iN ϕ_q
,
where
g_ion,ex^B_1(Δ_e,q) = - 2^5/2/πΔ_e^2(q^2+1)^2 ,
denotes the electronic exchange amplitude in the absence of the laser field, which is in
agreement with the Born-Ochkur approximation <cit.>, and Δ_e
represents the amplitude of the momentum transfer vector from the incident to the ejected
electron, Δ_e = 𝐤_i- 𝐤_e.
Similarly to the direct scattering, the first-order atomic transition amplitude for the
exchange scattering can be expressed as
T^(1)_N,ex = - α_0ω/2[ J_N-1( R_q) M_at,ex^(1) ( ω ) e^i (N-1) ϕ_q+
J_N+1( R_q) M_at,ex^(1) ( -ω ) e^ i (N+1) ϕ_q] ,
where
M_at,ex^(1)(±ω) =
- ε·𝐪̂/2^3/2π^3Δ_e^2 J_101(±ω, q )
,
and J_101(±ω, q ) is calculated from Eq. (<ref>).
Obviously, the exchange effects for both electronic and atomic contributions to the
transition amplitude vary like Δ_e^-2, and cannot be neglected in comparison to
the contribution of the direct scattering channel if k_e and k_f are of
comparable order of magnitude.
In contrast, for very fast incident and scattered electrons, with k_i and k_f
much larger than the atomic unit, and slow ejected electrons, k_e ≪ k_f, the
electronic and atomic exchange terms can be neglected compared to the corresponding direct
terms.
§.§ The low-photon energy approximation
In the low-photon energy limit where the photon energy is small compared to the
ionization energy of the hydrogen atom (typically in the infrared region), it is worth
presenting some useful simple approximation formulas for the atomic transition amplitude.
In most theoretical works the analytical calculations cannot be carried out exactly;
however, as long as the photon energy remains small, only a few intermediate bound states
are expected to contribute to the atomic transition amplitude, which allows the
complicated analytical formulas to be approximated.
This is the key idea of the closure approximation method <cit.>, which consists in
replacing the energy difference E_n-E_1 by an average excitation energy
ω̅≃ 4/9 a.u. for the hydrogen atom when approximating the sum over the
intermediate states in the atomic transition amplitudes.
Here we present a different approach based on the low-frequency approximation (LFA),
given by the lowest-order term of the expansion of the atomic matrix element M_at^(1) ( ω ) in powers of the laser photon energy.
After some algebra we derive an approximate formula for the atomic radial integral,
J_101, in the low-photon energy limit ω≪ |E_1| in Eq.
(<ref>), in the first order in ω,
J_101(ω,p) ≃ - 16 p/(p^2 +1)^3( 1 - ω/2 p^2 -9/p^2 +1),
where p= q or k_e, and, therefore the atomic transition amplitude for the direct
process, Eq. (<ref>), in the low-photon energy limit reads as
T^(1)_N,d ≃ α_0ω2^3/2e^i Nϕ_q/π^3Δ^2{ J_N-1( R_q) e^-i ϕ_q[ ε·𝐪/(q^2 +1)^3 +
ε·𝐤_e/(k_e^2 +1)^3].
+ J_N+1( R_q) e^i ϕ_q[ ε^* ·𝐪/(q^2 +1)^3 +
ε^* ·𝐤_e/(k_e^2 +1)^3]
+ ω/2 J_N-1( R_q) e^-i ϕ_q[
ε·𝐪 (q^2-9)/(q^2 +1)^4 +
ε·𝐤_e (k_e^2-9)/(k_e^2 +1)^4]
.
- ω/2 J_N+1( R_q) e^i ϕ_q[ ε^* ·𝐪 (q^2-9)/(q^2 +1)^4 +
ε^* ·𝐤_e ( k_e^2-9)/(k_e^2 +1)^4]
}.
For a LP laser field the following formula holds,
J_N( R_q) = J_N(α_0 ε·𝐪) e^-i
Nϕ_q, and we obtain from Eqs. (<ref>) and (<ref>) the
direct electronic transition amplitude,
T^(0), LP_N,d =
2^1/2/π^3Δ^2
J_N(α_0 ε·𝐪)
[1/(q^2+1)^2 - 1/(k_e^2+1)^2],
and the direct atomic transition amplitude, Eq. (<ref>), simplifies to
T^(1), LP_N,d ≃ 2^5/2/π^3ω/Δ^2{
N J_N(α_0 ε·𝐪)
[ 1/(q^2 +1)^3 +
ε·𝐤_e/ε·𝐪1/(k_e^2 +1)^3]
.
+ . α_0ω/2
J_N^' (α_0 ε·𝐪) [
ε·𝐪 (q^2-9)/(q^2 +1)^4 +
ε·𝐤_e (k_e^2-9)/(k_e^2 +1)^4]
},
where we have used the recurrence relation J_N-1 (x) +J_N+1 (x)= J_N(x)(2N /x),
and J_N^' is the first derivative of the Bessel function which satisfies the
relation J_N^' (x) = [J_N-1 (x) -J_N+1 (x)]/2, with x= α_0 (
ε·𝐪), <cit.>.
If we consider the lowest order in the photon energy ω in Eq. (<ref>)
T^(1), LP_N,d≃2^5/2/π^3 Nω/Δ^2
J_N(α_0 ε·𝐪)
[ 1/(q^2 +1)^3 +
ε·𝐤_e/ε·𝐪1 /(k_e^2 +1)^3] ,
we obtain the LFA formula for N-photon absorption atomic transition element in the
case of a LP field.
However, in the limit ω→ 0 for scattering parameters such that R_q ≫ 1, i.e., when the projectile and ejected electrons are strongly coupled to the laser
field, the transition amplitudes derived in the semi-perturbative approach do not
diverge but approach zero, due to the asymptotic behavior of the Bessel
function of the first kind <cit.> at large arguments,
J_N(x)≃√(2/π x)cos (x-N π/2 -π/4), for x →∞.
Furthermore, whenever the condition R_q ≪ 1 is satisfied, i.e. the
perturbative regime of low laser intensities where α_0 ≪ 1 a.u. and/or
scattering kinematics with |ε·𝐪|≪ 1 a.u., we can use
the approximate formula for the Bessel function at small arguments,
J_N ( R_q) ≃1/ N!( R_q/2)^N,
for N > 0,
and J_N( R_q)= (-1)^-N J_-N( R_q), for N ≤ 0 <cit.>.
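Both limiting forms of J_N quoted above are easy to check numerically; a minimal Python sketch (the sample orders and arguments are arbitrary illustrative choices):

```python
import numpy as np
from scipy.special import jv, factorial

def j_small(N, x):
    # small-argument form: J_N(x) ~ (x/2)^N / N!  for x << 1 and N > 0
    return (x / 2.0)**N / factorial(N)

def j_large(N, x):
    # large-argument asymptotic: J_N(x) ~ sqrt(2/(pi x)) cos(x - N pi/2 - pi/4)
    return np.sqrt(2.0 / (np.pi * x)) * np.cos(x - N * np.pi / 2.0 - np.pi / 4.0)

for N in (1, 2):
    print(N, jv(N, 0.05), j_small(N, 0.05))   # perturbative regime, R_q << 1
    print(N, jv(N, 40.0), j_large(N, 40.0))   # strong-coupling regime, R_q >> 1
```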
Thus, in the perturbative region with R_q ≪ 1, we obtain from Eq.
(<ref>) a simple formula for the direct electronic transition amplitude for
N-photon absorption (N > 0), as
T^(0), LP_N,d≃α_0^N2^1/2/π^3Δ^2 N !( ε·𝐪/2)^N[1/(q^2+1)^2 - 1/(k_e^2+1)^2],
while for the direct atomic transition amplitude for N-photon absorption in
the LFA we obtain from Eq. (<ref>)
T^(1), LP_N,d≃α_0^N2^5/2 N ω/π^3Δ^2 N ! ( ε·𝐪/2)^N[ 1/(q^2 +1)^3 +
ε·𝐤_e/ε·𝐪1/(k_e^2 +1)^3].
Expressions similar to Eqs. (<ref>)-(<ref>) and
Eqs. (<ref>)-(<ref>) can be easily derived for the exchange scattering
channel.
As expected, the electronic and atomic transition amplitudes are important at scattering
and ejected angles where the momenta q, Δ, and Δ_e are small.
The ratio of the direct atomic and electronic transition amplitudes derived in the
low-photon energy limit, Eqs. (<ref>) and (<ref>),
T^(1), LP_N,d/T^(0), LP_N,d≃4Nω/q^2+1[1 +
(q^2 +1)^3/(k_e^2 +1)^3ε·𝐤_e/ε·𝐪]
[ 1- (q^2+1)^2/(k_e^2+1)^2]^-1,
shows that, compared to the projectile electron contribution, the first-order atomic
dressing effects for ω≪ 1 a.u. and laser parameters such that R_q ≪
1, are increasing with the net number of exchanged photons, N, and photon energy,
ω, and are decreasing with the momenta of the ejected electron, k_e, and
residual ion, q.
Obviously, Eq. (<ref>) shows that differences occur in the TDCSs for
absorption or emission of N photons, which correspond to positive or negatives values
of N, due to constructive or destructive interferences of the electronic and atomic
terms in TDCS.
In the case of one-photon absorption (N=1) in the perturbative regime with R_q
≪ 1 and low photon energies we can use the approximate formula for the Bessel
function,
Eq. (<ref>), and by keeping only the first order in laser field intensity, I, we
obtain simple formulas for the direct electronic transition amplitude, Eq. (<ref>),
T^(0)_N= 1,d≃√(I)/2^1/2π^3ε·𝐪/ω^2 Δ^2[1/(q^2+1)^2 - 1/(k_e^2+1)^2]
,
as well for the direct atomic transition amplitude derived in the low-photon energy
limit, Eq. (<ref>),
T^(1)_N= 1,d≃ 2^3/2/π^3√(I)/ωΔ^2[
ε·𝐪/(q^2 +1)^3( 1 + ω/2 q^2-9/q^2 +1)
+
ε·𝐤_e/(k_e^2 +1)^3( 1 + ω/2 k_e^2-9/k_e^2 +1)
]
.
Moreover, if we keep the lowest order in the photon energy in Eq. (<ref>) we
obtain a quite simple formula for the direct atomic transition amplitude
T^(1)_N= 1,d≃ 2^3/2/π^3√(I)/ωΔ^2[
ε·𝐪/(q^2 +1)^3
+
ε·𝐤_e/(k_e^2 +1)^3],
in the LFA for one-photon absorption in the perturbative regime.
Similarly, for the exchange scattering we derive simple approximate formulas for the
electronic and atomic transition amplitudes at R_q ≪ 1, Eqs. (<ref>)
and (<ref>) in the low-photon energy limit, as
T^(0)_N= 1,ex≃ 1 /2^1/2π^3√(I)/ω^2 Δ_e^2ε·𝐪/(q^2+1)^2
,
T^(1)_N= 1,ex≃ 2^3/2/π^3√(I)/ωΔ_e^2ε·𝐪/(q^2 +1)^3( 1 + ω/2 q^2-9/q^2 +1).
The infrared divergence in the limit ω→ 0 is evident in all the above
electronic and atomic transition amplitude expressions derived at R_q ≪ 1.
Thus, in the perturbative regime and low-photon energy approximation the electronic
transition amplitude varies like ω^-2, while the atomic transition amplitude
varies like ω^-1, which is reminiscent of the infrared divergence of quantum
electrodynamics <cit.> and Low theorem <cit.> in the limit ω→ 0.
Clearly, the simple analytical formulas we have derived for one-photon absorption, as
well as for the nonlinear atomic transition amplitudes, might provide more physical
insight into the laser-assisted (e,2e) reactions.
§.§ The triple differential cross section
It is well known that the TDCS can provide useful information about collision dynamics
in the electron-impact ionization process <cit.>. For laser-assisted (e,2e)
collisions accompanied by the transfer of N photons, we calculate the nonlinear TDCS in
the first-order Born approximation in the scattering potential, for unpolarized incident
projectile and hydrogen beams, and without distinguishing between the final spin states of
the electrons,
d^3σ_N^B1/ dΩ_f dΩ_e d E_f =
(2π)^4 k_f k_e/k_i( 1/4| T_N,d + T_N,ex|^2 + 3/4| T_N,d - T_N,ex|^2 )
,
averaged over the initial spin states and summed over the final spin states.
The projectile electrons are scattered into the solid angle Ω_f and
Ω_f +dΩ_f with the kinetic energy between E_f and E_f+dE_f, and the
ejected electrons are emitted within the solid angle Ω_e and Ω_e
+dΩ_e.
The TDCS is a function of the electrons momentum vectors 𝐤_i, 𝐤_f,
and 𝐤_e, and depends on the laser parameters: intensity I, photon energy
ω, and polarization ε.
The dominant contribution to TDCS is due to collisions involving small momentum transfers
Δ and Δ_e, small momentum of the residual ion q, or near resonance
photon energies.
The TDCS for the laser-assisted (e,2e) process is given by
d^3σ^B1/ dΩ_f dΩ_e d E_f =
∑_N=N_min^+∞d^3σ_N^B1/ dΩ_f dΩ_e d E_f .
By integrating TDCS over the direction of the scattered electrons, Ω_f, we
obtain the double differential cross section of the ejected electrons, while by
integrating TDCS over the direction of the ejected electrons, Ω_e, we derive
the double differential cross section of the scattered electrons. Finally, the total
ionization cross section is deduced by integrating over the angles and energies of
the scattered and ejected electrons.
By neglecting the atomic dressing in Eq. (<ref>), namely T_N,d^(1)≃ 0
and T_N,ex^(1)≃ 0, at small momentum of the residual ion, q
≪ k_e, we obtain a simple formula for the laser-assisted TDCS,
d^3σ_N^B1/ dΩ_f dΩ_e d E_f≃k_f k_e/k_i |J_N( R_q)|^2
4/Δ^4 ( 1 - Δ^2/Δ_e^2 + Δ^4/Δ_e^4) |ψ_1s^(0)(q)|^2
,
that “decouples” into a product of three factors: (i) the squared Bessel function which
includes the laser-projectile and ejected electrons interaction, (ii)
the electron-electron collision factor in the first-order Born approximation
f_ee^B1 = 1/4π^4 Δ^4( 1 - Δ^2/Δ_e^2 +
Δ^4/Δ_e^4),
that is the absolute square of the half-off-shell Coulomb-matrix element summed and
averaged over final and initial spin states for fast projectile and outgoing electrons
<cit.>,
and (iii) |ψ_1s^(0)(q)|^2 =8 π^-2 (q^2+1)^-4 that represents the squared
momentum-space wave function for the ground state of atomic hydrogen <cit.>.
Equation (<ref>) is in agreement with the TDCS derived for EMS by Kouzakov and
coworkers, namely Eq. (26) in Ref. <cit.>.
The half-off-shell Mott scattering TDCS, for fast projectile and outgoing electrons that
includes the exchange terms, <cit.>, is simply calculated as
(2π)^4f_ee^B1,
( dσ/ dΩ_e)_ee = 4/Δ^4 ( 1 - Δ^2/Δ_e^2 + Δ^4/Δ_e^4)
.
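For orientation, the factorized form above (atomic dressing neglected, q ≪ k_e) is straightforward to evaluate; a minimal Python sketch in atomic units, where the momenta, momentum transfers, and R_q are assumed to be supplied by the kinematics of the chosen geometry:

```python
import numpy as np
from scipy.special import jv

def tdcs_no_dressing(k_i, k_f, k_e, Delta, Delta_e, q, R_q, N):
    """Laser-assisted TDCS (a.u.) with the atomic dressing neglected,
    valid for small recoil momentum q << k_e (factorized form above)."""
    mott = 4.0 / Delta**4 * (1.0 - Delta**2 / Delta_e**2 + Delta**4 / Delta_e**4)
    psi1s_sq = 8.0 / (np.pi**2 * (q**2 + 1.0)**4)   # |psi_1s(q)|^2 for hydrogen
    return (k_f * k_e / k_i) * jv(N, R_q)**2 * mott * psi1s_sq
```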
If we take into account the atomic dressing in Eq. (<ref>) in the low-photon energy
limit ω≪ |E_1|, and consider the lowest order in the photon energy in Eq.
(<ref>), at small momentum of the residual ion, q ≪ k_e, we obtain
d^3σ_N^B1/ dΩ_f dΩ_e d E_f≃k_f k_e/k_i |J_N( R_q)|^2
( dσ/ dΩ_e)_ee(1+ 4 N ω/q^2+1) ^2 |ψ_1s^(0)(q)|^2
,
which is in agreement with the laser-assisted TDCS derived in the low-photon energy
approximation for EMS by Bulychev and coworkers, namely Eqs. (9)-(11) in Ref.
<cit.>.
In contrast to Eq. (<ref>) in which the atomic dressing effects are neglected,
the TDCS Eq. (<ref>) does not obey the well-known Kroll-Watson sum rule
<cit.>.
Obviously, the TDCS in the laser-assisted (e,2e) collisions provides valuable
information about the collision dynamics <cit.>, electronic structure of the
target, and can be used to derive the momentum density distribution of the target
electron, which was first demonstrated for hydrogen and helium atoms
<cit.>.
§ NUMERICAL EXAMPLES AND DISCUSSION
In this section we present our numerical results for the laser-assisted electron-impact
ionizing collisions in hydrogen, described by Eq. (<ref>), for
fast incident and outgoing electrons, and we apply the semi-perturbative formulas derived
in Sec. <ref> to calculate the nonlinear TDCSs in the presence of a LP
laser field.
Obviously, due to the complicated analytical form of the laser-dressed atomic wave
function, the total scattering amplitude has to be numerically evaluated.
It is worth pointing out that the electronic and atomic transition amplitudes, Eqs.
(<ref>), (<ref>), (<ref>), and (<ref>), as well as their
approximations derived in Subsec. <ref>, are applicable to arbitrary scattering
configurations and laser field polarizations.
We study the laser-assisted (e,2e) process in the coplanar geometry depicted in Fig.
<ref>, in which the momenta of the electrons, 𝐤_i, 𝐤_f, and
𝐤_e, lie in the same plane where the two outgoing electrons are detected in
coincidence at the scattering angles θ_f and θ_e, with equal
corresponding azimuthal angles φ_f=φ_e=φ_i.
The momentum vector of the incident electron, 𝐤_i, is taken parallel to the
z axis, with θ_i=0^∘ and φ_i=0^∘, and the scattering angle
θ_f of the scattered electron is fixed, while the angle θ_e of the ejected
electron is varied.
The asymmetric scattering geometry is considered in which θ_f ≠θ_e and
k_f ≠k_e. At this point it is useful to recall the differences between
the symmetric and asymmetric scattering geometries, namely the symmetric geometry is
defined by the requirement that the scattering angles and energies of the scattered and
ejected electrons are equal.
In a kinematically complete experiment by measuring the momentum vectors of both ejected
electron and ionized target, 𝐤_e and 𝐪, we can deduce the
momentum of the scattered electron, 𝐤_f = 𝐤_i - 𝐤_e -
𝐪, as well as the momentum transfer of the scattered electron, Δ=
𝐤_i- 𝐤_f, occurring during the collision <cit.>.
Thus, from the energy conservation law, the final momentum of the projectile is given by
k_f = ( k_i^2 - k_e^2+2E_1 + 2N ω) ^1/2, while the momentum transfer of
the projectile is simply calculated as
Δ = ( k_i^2 + k_f^2 -2 k_i k_f cosθ_f )^1/2.
The Cartesian components of the momentum transfer vector, Δ, are given
by (-k_f sinθ_f, 0, k_i -k_f cosθ_f ) and the amplitude Δ
varies in the range |k_i -k_f| ≤Δ≤ k_i +k_f, for forward
θ_f=0^∘ and backward θ_f=180^∘ scattering, respectively.
Similarly, the amplitude of the momentum transfer vector Δ_e is
calculated as Δ_e = ( k_i^2 + k_e^2 -2 k_i k_e cosθ_e )^1/2.
The amplitude of the recoil momentum vector of the residual ion, q, is given by
q = [ Δ^2 + k_e^2 -2 k_i k_e cosθ_e + 2 k_f k_e cos (θ_f -
θ_e ) ]^1/2.
The argument R_q of the Bessel functions is calculated as
R_q^2= R_i^2 + R_f^2 + R_e^2
- 2 R_i R_fcos (ϕ_i- ϕ_f)
- 2 R_i R_ecos (ϕ_i- ϕ_e)
+ 2 R_f R_ecos (ϕ_f - ϕ_e ) ,
where R_s= α_0|ε·𝐤_s | and
e^i ϕ_s = ε·𝐤_s / |ε·𝐤_s |, with s=i,f, and e.
For a LP laser field the dependence of R_s on the laser polarization is given
by
R_s= α_0 |𝐞_j·𝐤_s | and ϕ_s= nπ,
while for a CP field with the polarization unit vector
ε= ( 𝐞_j + i 𝐞_l ) /√(2),
we obtain
R_s= (α_0/√(2))√((𝐞_j·𝐤_s )^2
+(𝐞_l·𝐤_s )^2) and
ϕ_s=arctan(𝐞_l·𝐤_s )/(𝐞_j·𝐤_s )
+nπ, where n is an integer.
In our numerical calculations we consider the laser field to be linearly polarized
along the momentum vector of the incident electron,
ε || 𝐤_i.
Specifically, for a LP laser field and a coplanar scattering geometry with ϕ_i =
ϕ_f =ϕ_e =0^∘ the argument of the Bessel function simplifies to
R_q= α_0 | k_i - k_f cosθ_f -k_e cosθ_e |.
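The kinematic relations above are simple to implement; a minimal Python sketch in atomic units (the energies and angles in the example call are simply the values used in the figures discussed below, and E_1 = -0.5 a.u.):

```python
import numpy as np

HARTREE_EV = 27.211386      # 1 hartree in eV
I_ATOMIC = 3.50945e16       # atomic unit of intensity, W/cm^2

def kinematics(E_i_eV, E_e_eV, theta_f_deg, theta_e_deg,
               omega_eV, I_Wcm2, N=1, E_1=-0.5):
    """Kinematic quantities (atomic units) for the coplanar geometry with the
    LP polarization parallel to k_i, following the relations quoted above."""
    E_i, E_e = E_i_eV / HARTREE_EV, E_e_eV / HARTREE_EV
    w = omega_eV / HARTREE_EV
    k_i, k_e = np.sqrt(2.0 * E_i), np.sqrt(2.0 * E_e)
    k_f = np.sqrt(k_i**2 - k_e**2 + 2.0 * E_1 + 2.0 * N * w)   # energy conservation
    tf, te = np.radians(theta_f_deg), np.radians(theta_e_deg)
    Delta = np.sqrt(k_i**2 + k_f**2 - 2.0 * k_i * k_f * np.cos(tf))
    Delta_e = np.sqrt(k_i**2 + k_e**2 - 2.0 * k_i * k_e * np.cos(te))
    q = np.sqrt(Delta**2 + k_e**2 - 2.0 * k_i * k_e * np.cos(te)
                + 2.0 * k_f * k_e * np.cos(tf - te))
    alpha_0 = np.sqrt(I_Wcm2 / I_ATOMIC) / w**2        # quiver amplitude
    R_q = alpha_0 * abs(k_i - k_f * np.cos(tf) - k_e * np.cos(te))
    # ejection angle where eps.q = 0 (kinematic minimum of the TDCS)
    theta_e_min = np.degrees(np.arccos((k_i - k_f * np.cos(tf)) / k_e))
    return dict(k_f=k_f, Delta=Delta, Delta_e=Delta_e, q=q,
                alpha_0=alpha_0, R_q=R_q, theta_e_min=theta_e_min)

# example: E_i = 2 keV, E_e = 200 eV, theta_f = 5 deg, omega = 1.55 eV, I = 1 TW/cm^2
print(kinematics(2000.0, 200.0, 5.0, -61.0, 1.55, 1e12))
# alpha_0 ~ 1.64 a.u. and theta_e_min ~ 79 deg, consistent with the values quoted below
```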
To start with, we have checked that the numerical results of TDCSs for the (e, 2e)
scattering of fast electrons by hydrogen atoms in their ground state are in agreement with
earlier numerical data published in the literature.
A very good agreement is obtained with the numerical results of TDCS for one- and
two-photon exchange presented in Fig. 1 of Ref. <cit.> and Figs. 1 and 2 of
Ref. <cit.>, under the kinematical conditions of EMS (small momentum of the
residual ion q and large momentum transfers Δ and Δ_e), for incident
electrons of kinetic energy E_i=2013.6 eV, in a noncoplanar symmetric scattering
geometry, and a LP laser of intensity 4 × 10^12 W/cm^2, calculated in the
low-frequency approximation at ω =1.17 eV.
At an incident electron kinetic energy E_i=500 eV, in a coplanar symmetric
geometry, εΔ, and a LP laser of intensities I=1.3 ×
10^7 W/cm^2, 10^2 × I, 10^4 × I, and 10^6 × I, the behavior of the
TDCS calculated from Eq. (<ref>) is in fair agreement, up to a scaling factor, with the
first-order Born calculation of TDCS for the ionization of hydrogen shown in Figs. 2 and 3
of Ref. <cit.>.
Since the atomic wave function was calculated within the closure
approximation <cit.>, our numerical results disagree at larger photon energies
ω>3 eV where the atomic dressing effect is more important, and cannot be accurately
described by this approximation.
At the resonance photon energy of 10.2 eV, laser intensity of 1.3 × 10^7 W/cm^2, and polarization ε|| k_i, in the Ehrhardt asymmetric
coplanar geometry, with the incident and ejected electrons kinetic energies E_i=250 eV
and E_e=5 eV, and scattering angle θ_f= 3^∘, the TDCS given by Eq.
(<ref>) is in satisfactory agreement with the first-Born TDCS for the ionization
of hydrogen plotted in Fig. 3(a) of Ref. <cit.> where the atomic wave function
is calculated using a Coulomb-Sturmian basis. Obviously, despite the low value of the
ejected electron kinetic energy, the agreement is due to the fact that for one-photon
resonance the TDCS is dominated by the atomic contribution due to 1s-2p excitation.
Now, we return our discussion to the scattering geometry depicted in Fig. <ref>
where the laser polarization, ε, is parallel to the incident
electron momentum direction, 𝐤_i, and the outgoing electrons move
asymmetrically with respect to the direction of the incident electron, with different
scattering and ejected angles, and different kinetic energies.
We have chosen high kinetic energies of the projectile and ejected electrons (compared
to the atomic scale), moderate laser intensities below 1 TW/cm^2 which correspond to
electric field strengths lower than 2.7 × 10^7 V/cm, and have considered photon
energies below the ionization threshold of the hydrogen atom.
Specifically, a laser intensity of 1 TW/cm^2 and a photon energy of 1.55 eV
(Ti:sapphire laser) result in a quiver motion amplitude α_0≃ 1.64 a.u. and
an argument of the ordinary Bessel function
R_q ≃ 1.64|ε·𝐪|,
while for a larger photon energy of 3.1 eV (Ti:sapphire second harmonic) the
corresponding amplitude α_0 and the argument R_q are about 4 times
smaller.
The numerical results obtained for TDSCs in the first-order Born approximation in the
scattering potential, Eq. (<ref>), are compared with those obtained by considering
the atomic contribution in the LFA, Eq. (<ref>), and those obtained by
neglecting the dressing of the target by setting T_N,d^(1)≃ 0 and
T_N,ex^(1)≃ 0 in Eq. (<ref>).
In Fig. <ref> we present the TDCSs as a function of the angle of the ejected
electron, θ_e, with exchange of one photon, N = 1, at high kinetic energies of
the projectile electron E_i=2 keV and ejected electron E_e=200 eV, and a small
scattering angle, θ_f = 5^∘.
The laser intensity is I=1 TW/cm^2, while the photon energies we consider are:
1.55 eV in Fig. <ref>(a), 3.1 eV in Fig. <ref>(b), 4.65 eV in Fig.
<ref>(c), and 9.3 eV in Fig. <ref>(d).
Figure <ref> shows results similar to those in Fig. <ref>, but for a larger scattering
angle θ_f = 15^∘.
In all figures the solid lines correspond to the laser-assisted TDCSs calculated from
Eq. (<ref>), which include the laser dressing effects of the projectile and of the
hydrogen atom, the dot-dashed lines correspond to the TDCSs in which the atomic dressing
is considered in the LFA for the direct and as well as exchange scattering, while the
dashed lines correspond to the results in which the atomic dressing is neglected.
As follows from our theoretical calculations, the TDCS is largest at scattering
and ejection angles where the recoil momentum q is small.
Thus, at the scattering angle θ_f = 5^∘ the angular distribution of the
electrons is observed with a highest probability at the maximum values of TDCSs, which
occur at the following detection angles θ_e ≃ -61^∘ in Fig.
<ref>(a), θ_e ≃ -40^∘ in Fig. <ref>(b), θ_e ≃
-38^∘ in Fig. <ref>(c), and θ_e ≃ -39^∘ in Fig. <ref>(d).
Similar to the free-free transitions or other laser-assisted processes
<cit.>, the net effect of the laser field is to decrease
the peak values of the angular distributions of TDCSs, while the atomic dressing
contribution is increasing with photon energy.
The dressing effect of the laser is included in the argument of the Bessel
function through the quiver motion amplitude, α_0, in the electronic transition
amplitudes, Eqs. (<ref>) and (<ref>), as well through R_q and the
factor α_0 ω in the atomic transition amplitudes, Eqs. (<ref>) and
(<ref>).
Thus, as the photon energy increases the atomic dressing effects (included in the full
lines) are more important than the electronic dressing effects (included in the dashed
lines), and the TDCS decreases as suggested by Eqs. (<ref>), (<ref>),
and (<ref>) derived in the low-photon energy limit.
At low photon energies which are far from any atomic resonance, of 1.55
eV or even 3.1 eV at θ_f = 5^∘, the laser-assisted (e,2e) process
depicted in Figs. <ref>(a)-<ref>(b) and <ref>(a) is well described by the
LFA (dot-dashed lines), as long as the photon energy is much smaller than |E_1|.
A clear signature of the nonperturbative effect of the laser is the oscillatory
character of the angular distribution of TDCS, as shown in Fig. <ref>(a) compared to
Figs. <ref>(b)-<ref>(d).
The nonperturbative behavior, due to a larger quiver amplitude, is seen at the small
photon energy of 1.55 eV (α_0≃ 1.64 a.u. and U_p ≃ 0.03
eV), and resides in the occurrence of an increasing number of zeros in the Bessel
functions of the first kind and, therefore, in the TDCS <cit.>.
Thus, kinematic minima of TDCSs appear at R_q = 0 if the scalar product
ε·𝐪 =0, condition that is fulfilled at ejected electron angles
given by the relation cosθ_e = (k_i -k_fcosθ_f)/k_e.
The first two kinematical minima which are located on the left- and right-side of the
main maximum in Fig. <ref>(a) at the photon energy of 1.55 eV, occur at the
angles θ_e ≃ - 79^∘ and 80^∘, while the next minima of TDCS
are due to the zeros of the Bessel function J_1( R_q) with R_q ≠ 0.
At larger photon energies in Figs. <ref>(b)-<ref>(d) the
first two kinematical minima of the TDCSs occur at the angles θ_e ≃ -
79^∘ in Figs. <ref>(b)-<ref>(d), and at θ_e ≃ 80^∘
in Fig. <ref>(b), θ_e ≃ 81^∘ in Fig. <ref>(c), and θ_e ≃ 83^∘ in Fig. <ref>(d).
In Fig. <ref> we show the numerical results for TDCSs plotted in a logarithmic scale
for N=0, 1 and 2, at ω =3.1 eV and the scattering angles θ_f =
5^∘ in Fig. <ref>(a) and θ_f = 15^∘ in Fig. <ref>(b), with
the same parameters as in Figs. <ref>(b) and <ref>(b).
The angular distributions of TDCS at different N present similar features with
different magnitudes, and show that the net effect of the laser field is to decrease the
values of TDCSs and to split the peaks which occur at N=0 (full lines) at θ_e
≃ - 62^∘ in Fig. <ref>(a) and θ_e ≃ - 71^∘ in Fig.
<ref>(b).
The splitting of the peaks by the kinematical minima, which is a well known signature of
the laser field on the TDCSs, appears due to cancellation of the scalar product
ε·𝐪 and is located almost symmetrically with respect to
the direction of the incident electron at the ejected angles θ_e ≃ -
79^∘ and 80^∘ in Fig. <ref>(a) and θ_e ≃ - 74^∘
and 75^∘ in Fig. <ref>(b).
It is well known that the projectile electron plays a major role in the scattering process,
since it interacts with the atomic target electron (a repulsive interaction), with its nucleus
(an attractive interaction), as well as with the laser field.
In Fig. <ref> we present the TDCSs for the ionization of hydrogen by electron impact
in the presence of a LP laser field, for absorption of one photon N=1, at a photon
energy of 4.65 eV as a function of the ejected electron angle.
The kinetic energy of the ejected electron is E_e=100 eV in Fig. <ref>(a), 200
eV in Fig. <ref>(b), 400 eV in Fig. <ref>(c), and 800 eV in Fig.
<ref>(d).
The other parameters concerning the scattering geometry, incident projectile energy,
angle of the scattered electron, and laser field intensity are the same as in Fig.
<ref>.
Figure <ref> shows results similar to those in Fig. <ref>, but for a larger
scattering angle θ_f = 15^∘. Clearly, the electronic contribution (dashed
lines) underestimates the angular distribution of the TDCS.
For kinetic energies of the ejected electron E_e ≤ 100 eV at θ_f =
5^∘ and E_e ≤ 200 eV at θ_f =15^∘, the atomic dressing effects
are quite important, and the TDCS calculated in the LFA (dot-dashed
lines) fails to describe accurately the laser-assisted (e,2e) process.
As the kinetic energy of the ejected electron increases to 800 eV, at small scattering
angles the atomic dressing effects are less important than the electronic
dressing effects, as it is shown in Figs. <ref>(d) and <ref>(d).
As we approach the symmetric coplanar case of scattered and ejected electrons of
equal energies, E_f ≃ E_e ≃ (E_i+E_1+ω)/2, the minimum of the recoil
momentum amplitude q occurs now at larger angles close to θ_f
≃ -θ_e ≃ 45^∘, Eq. (<ref>).
In order to clarify the importance of the atomic dressing term, we illustrate in Fig.
<ref>(a) the TDCS, on a logarithmic scale, as a function of the photon energy for
one-photon absorption.
The kinetic energies of the projectile and the ejected electrons are E_i = 2 keV and
E_e = 200 eV, while the angles of the scattered and ejected electrons are chosen
θ_f = 15^∘ and θ_e = -55^∘.
The polarization vector of the electric field is parallel to the momentum of the
incident electron, and we consider a moderate laser intensity, I=1 TW/cm^2, for which
the non-perturbative dressing effects of the projectile and
ejected electrons can be visualized at small photon energies with α_0 > 1.
The solid line corresponds to the laser-assisted TDCS calculated from Eq. (<ref>),
which includes the dressing effects of the projectile and of the atomic target, the
dashed line corresponds to TDCS in which the atomic dressing terms are
neglected, while the dot-dashed line corresponds to the result in which the atomic
dressing terms are considered in the LFA, Eq. (<ref>).
The TDCS shows a strong dependence on the atomic structure of the target and exhibits
a series of resonance peaks associated with one-photon absorption from the
initial ground state of the hydrogen atom, at photon energies that match the atomic
resonances ω= E_n-E_1; these correspond to poles of the atomic radial
integral at τ =n, with n ≥ 2, as detailed in Fig. <ref>(b).
A noteworthy feature of the atomic radial integral J_101, Eq. (<ref>), is
that it presents poles with respect to τ, which arise from the vanishing of the
2- τ factor in the denominator, as well as from the poles of the Appell
hypergeometric functions F_1 at τ=n', where n' ≥ 3 is an integer. The origin of
these poles resides in the poles of the Coulomb Green's functions used for the calculation
of the linear-response vector w_100, <cit.>.
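For orientation, the positions of these one-photon resonances follow directly from the hydrogen level spacing,

ω_res = E_n - E_1 = (1/2)( 1 - 1/n^2 ) a.u. ≃ 10.2 eV (n=2), 12.1 eV (n=3), 12.8 eV (n=4),

accumulating toward the ionization threshold |E_1| ≃ 13.6 eV with increasing n; the value 10.2 eV is the resonance photon energy already used in the comparison above.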
Clearly, the LFA, which does not take into account the atomic structure, fails to describe
the laser-assisted (e,2e) process at large photon energies (typically in the UV range).
Figure <ref>(c) shows the energy spectra in the nonperturbative regime at low photon
energies, ω≤ 1.6 eV in Fig. <ref>(a), a domain where, at the laser
intensity I=1 TW/cm^2, the quiver amplitude α_0 is larger than 1 a.u. and
increases up to 395 a.u. for ω=0.1 eV.
The TDCS presents oscillations due to the Bessel function J_1( R_q), and the LFA
(dot-dashed line) gives a good description of the atomic dressing effect.
It should be kept in mind that both the laser intensity and the photon energy play an
important role: the nonperturbative effects become more important as the laser intensity
increases and the photon energy decreases, due to the increasing quiver motion of the
free and bound electrons, and they contribute to the oscillatory behavior of the
laser-assisted TDCSs through the increasingly rapid oscillations of the Bessel function.
§ SUMMARY AND CONCLUSIONS
We study the electron-impact ionization of hydrogen at large projectile and ejected
electron kinetic energies in the presence of a linearly polarized laser field, and
investigate the laser-assisted (e, 2e) reaction at moderate laser field intensities.
We focus our numerical results on the case of the asymmetric coplanar scattering
geometry, where
we discuss the importance of the dressing effects and analyze the influence of the
laser field on the TDCS in several numerical examples.
The laser-assisted (e, 2e) reaction has a nonlinear character, which consists in the
multiphoton absorption (emission) of photons from (to) the laser field by the projectile
and ejected electrons and by the atomic target.
We present a new method to calculate the atomic radial amplitude in closed form; the
evaluation of this amplitude represents the main difficulty in the computation of the TDCS.
Thus, a semi-perturbative approach is used, in which for the interaction of the fast
incident and outgoing electrons with the laser field we employ non-perturbative
Gordon-Volkov wave functions, while the interaction of the hydrogen atom with the laser
field is considered in first-order TDPT, and the interaction of the fast incident
electron with the hydrogen atom is treated in the first-order Born approximation.
The exchange between the outgoing electrons cannot be ignored when ejected electrons
with large kinetic energy are detected, and it is included in our calculation.
Our theoretical formulas and numerical results clearly demonstrate the strong influence of
the photon energy and laser intensity on the dynamics of laser-assisted (e, 2e)
process.
It was found that the atomic dressing contribution calculated in first-order TDPT in the
laser field substantially modifies the laser-assisted TDCSs at small momenta q, Δ, and Δ_e, and for photon energies close to resonances.
The introduction of the laser field in the (e,2e) reaction changes the profile of the TDCS,
as seen in Fig. <ref>, where the peaks of the TDCSs are reduced in magnitude and
split by the presence of the laser, due to the appearance of the kinematical minima.
We show that the atomic dressing effects strongly depend on the structure of the atomic
target as is seen in Fig. <ref>, and cannot be correctly described by the LFA at
large photon energies.
At low photon energies we confirm the validity of LFA by comparing the numerical
results obtained for the atomic matrix elements within LFA with the results obtained by
first-order TDPT.
Thus, such theoretical studies remain very useful for understanding essential
details of the scattering signal, since the derived analytical formulas give more
physical insight into the laser-assisted (e, 2e) process and provide valuable guidance
for future theoretical and experimental investigations.
§ ACKNOWLEDGMENTS
The work by G. B. was supported by the research program PN 19 15 01 02 through
Contract No. 4N/2019 (Laplas VI) from the UEFISCDI and the Ministry of Research,
Innovation, and Digitization of Romania.
99
b-j89
F. W. Byron Jr., and C. J. Joachain, Phys. Rep. 179, 211-272 (1989).
whelan93
(e, 2e) & Related Process, edited by C. T. Whelan, H. R. J. Walters,
A. Lahmam-Bennani, and H. Ehrhardt, NATO ASI Series, vol 414 (Series C: Mathematical and
Physical Sciences), (Springer, Dordrecht, 1993);
Coincidence Studies of Electron and Photon Impact Ionization,
edited by C. T. Whelan and H. R. J. Walters (Springer Science+Business Media, LLC, New
York, 1997).
camilloni72
R. Camilloni, A. Giardini Guidoni, R. Tiribelli, and G. Stefani,
Phys. Rev. Lett. 29, 618 (1972).
weigold99
E. Weigold and I. E. McCarthy,
Electron Momentum Spectroscopy (Kluwer, New York, 1999).
smirnov99
V. G. Neudatchin, Yu. V. Popov, and Yu. F. Smirnov, Phys. Usp. 42, 1017 (1999).
coplan94
M. A. Coplan, J. H. Moore, and J. P. Doering, Rev. Mod. Phys. 66, 985 (1994).
ehlotzky98
F. Ehlotzky, A. Jaro, and J. Z. Kaminski, Phys. Rep. 297, 63 (1998).
hohr2007
C. Höhr , A. Dorn, B. Najjari, D. Fischer, C.D. Schröter, and J. Ullrich,
Phys. Rev. Lett. 94, 153201 (2005);
J. Electron Spectrosc. Relat. Phenom. 161, 172 (2007).
hiroi2021 T. Hiroi, Y. Morimoto, R. Kanya, and K. Yamanouchi,
Phys. Rev. A 104, 062812 (2021).
jain78 M. Jain and N. Tzoar, Phys. Rev. A 18, 538, (1978).
banerji81 J. Banerji and M. H. Mittleman, J. Phys. B 14, 3717 (1981).
cavaliere80-81P. Cavaliere, G. Ferrante, and C. Leone,
J. Phys. B 13, 4495 (1980); P. Cavaliere, C. Leone, R. Zangara, and G. Ferrante,
Phys. Rev. A 24, 910 (1981).
gordon V. Gordon, Z. Physik 40, 117 (1926).
volkov D. M. Volkov, Z. Physik 94, 250 (1935).
joachain88 C. J. Joachain, P. Francken, A. Maquet, P. Martin, and V.
Véniard, Phys. Rev. Lett. 61, 165 (1988).
b-jF. W. Byron Jr. and C. J. Joachain, J. Phys. B 17, L295 (1984).
ehrhardt H. Ehrhardt, K. Jung, G. Knoth, and P. Schlemmer,
Z. Phys. D 1, 3 (1986).
martin89 P. Martin, V. Véniard, A. Maquet, P. Francken, and C. J. Joachain,
Phys. Rev. A 39, 6178 (1989).
taieb1991 R. Taïeb, V. Véniard, A. Maquet, S. Vučić, and R.
M. Potvliege, J. Phys. B 24, 3229 (1991).
cionga93 A. Cionga, V. Florescu, A. Maquet, and R. Taïeb,
Phys. Rev. A 47, 1830 (1993).
ajana2019 A. Makhoute, D. Khalil, and I. Ajana, Atoms 7, 40 (2019).
kouzakov2010 K. A. Kouzakov, Y. V. Popov, and M. Takahashi,
Phys. Rev. A 82, 023410 (2010).
bulychev2012 A. A. Bulychev, K. A. Kouzakov, and Y. V. Popov,
Phys. Lett. A 376, 484 (2012).
khalil2017 D. Khalil, M. Tlidi, A. Makhoute, and I. Ajana,
J. Phys. B: At. Mol. Opt. Phys. 50, 078001 (2017).
keldysh65L. V. Keldysh, Sov. Phys. JETP 20, 1307 (1965).
faisal73 F.H.M. Faisal, J. Phys. B: Atom. Molec. Phys. 6, L89 (1973).
reiss80H. R. Reiss, Phys. Rev. A 22, 1786 (1980).
bransden B. H. Bransden and C. J. Joachain,
Physics of Atoms and Molecules (Longman, London, 1983).
joa2012 C. J. Joachain, N. J. Kylstra, and R. M. Potvliege,
Atoms in Intense Laser Fields (Cambridge University Press, Cambridge, 2012), p.
466.
ehl1998 F. Ehlotzky, A. Jaroń, and J. Z. Kamiński,
Phys. Rep. 297, 63 (1998).
Lohmann81 B. Lohmann and E. Weigold, Phys. Lett. A, 86, 139 (1981).
leone89 C. Leone, S. Bivona, R. Burlon, F. Morales, and G. Ferrante,
Phys. Rev. A 40, 1828 (1989).
taj2004Y. Attaourti and S. Taj, Phys. Rev. A 69, 063411 (2004).
zhang2007 J. Zhang and T. Nakajima, Phys. Rev. A 75, 043403 (2007).
vf1 V. Florescu and T. Marian, Phys. Rev. A 34, 4641 (1986).
massey
N. F. Mott and H. S. W. Massey,
The Theory of Atomic Collisions (Oxford University Press, London, 1965);
C. Joachain, Quantum Collision Theory (North-Holland, Amsterdam, 1987).
Watson G. N. Watson,
Theory of Bessel Functions (Cambridge University Press, Cambridge, 1962).
acgabi2000 A. Cionga, F. Ehlotzky, and G. Zloh,
Phys. Rev. A 62, 063406 (2000);
J. Phys. B: At. Mol. Opt. Phys. 33, 4939 (2000).
gabi2015-gabi2017 G. Buica, Phys. Rev. A 92, 033421 (2015);
J. Quant. Spectrosc. Radiat. Transf. 187, 190 (2017).
weigold79 E. Weigold, C. J. Noble, S. T. Hood, and I. Fuss,
J. Phys. B: Atom. Molec. Phys. 12, 291, (1979).
dubois86
A. Dubois, A. Maquet, and S. Jetzke, Phys. Rev. A 34, 1888 (1986).
acgabi2 A. Cionga, F. Ehlotzky, and G. Zloh,
Phys. Rev. A 61, 063417 (2000).
dubois A. Dubois and A. Maquet, Phys. Rev. A 40, 4288 (1989).
ochkur V. I. Ochkur, Sov. Phys. JETP 20, 1175 (1965).
Jauch M. Jauch and F. Rohrlich,
The Theory of Electrons and Photons (Springer, New York, 1976).
Low F. E. Low, Phys. Rev. 110, 974 (1958).
takahashi2006
Y. Miyake, M. Takahashi, N. Watanabe, Y. Khajuria, Y. Udagawa, Y. Sakai, and T. Mukoyama,
Phys. Chem. Chem. Phys. 8, 3022 (2006).
k-w N. M. Kroll and K. M. Watson, Phys. Rev. A, 8, 804 (1973).
acgabi99 A. Cionga and G. Zloh, Laser Phys. 9(1), 69 (1999).
Understanding Telecom Language Through Large Language Models

Lina Bariah, Hang Zou, Qiyang Zhao, Belkacem Mouhouche, Faouzi Bader, and Merouane Debbah
The recent progress of artificial intelligence (AI) opens up new frontiers in the possibility of automating many tasks involved in the design, implementation, and deployment of Telecom networks. This has been further pushed forward with the evolution of generative AI, including the emergence of large language models (LLMs), which are believed to be the cornerstone toward realizing self-governed, interactive AI agents. Motivated by this, in this paper, we aim to adapt the paradigm of LLMs to the Telecom domain. In particular, we fine-tune several LLMs, including BERT, distilled BERT, RoBERTa, and GPT-2, to the Telecom domain language, and demonstrate a use case for identifying the 3gpp standard working groups. We train the selected models on 3gpp tdoc pertinent to the years 2009-2019 and predict the tdoc categories for the years 2020-2023. The results demonstrate that the fine-tuned BERT and RoBERTa models achieve 84.6% accuracy, while the GPT-2 model achieves 83%, in identifying 3GPP working groups. The distilled BERT model, with around 50% fewer parameters, achieves performance similar to the others. This corroborates that fine-tuning pretrained LLMs can effectively identify the categories of Telecom language. The developed framework is a stepping stone towards realizing intent-driven and self-evolving wireless networks from Telecom language, and paves the way for the implementation of generative AI in the Telecom domain.
Generative AI, Large Language Models, Pre-trained Transformer, Telecom Language, 3GPP
§ INTRODUCTION
In the last couple of decades, considerable efforts have been devoted to pushing the frontiers of wireless technologies in order to achieve kpi pertinent to latency, reliability, and spectral and energy efficiencies, to name a few, through the exploitation of ai as a network orchestrator. Recently, parallel initiatives have focused on advancing the paradigm of self-evolving networks (under several names, including autonomous networks, zero-touch networks, self-optimizing/configuring/healing networks, etc.), through the evolution of native intelligent network architectures <cit.>. However, recent developments revolve around realizing adaptivity, in which wireless network functionalities can be autonomously adjusted to fit a particular scenario. The ultimate vision of self-evolving networks goes well beyond adaptivity and automation. In particular, it expands toward realizing perpetual sustainability of network performance and the flexibility to accommodate highly complex, and sometimes unfamiliar, network scenarios; this necessitates generalized, inclusive, and multi-functional schemes that are capable of handling diverse network conditions.
Accordingly, conventional ai algorithms are highly likely to fall behind in fulfilling the required performance, and therefore, a radical departure toward more innovative ai-driven approaches is anticipated to shape the future of next-generation wireless networks.
The term fm was coined by the Stanford Center for Research on Foundation Models (CRFM) in 2021, and fm have attracted considerable attention as generalized models that are capable of handling a wide range of downstream tasks <cit.>. In particular, fm are extremely large neural networks that are trained over massive unlabeled datasets in a self-supervised fashion, allowing several opportunities to be reaped with reduced time and cost (which would be unbearable in the case of human labeling). Rapidly after being developed, fm have found applications in several domains, including text classification and summarization, sentiment analysis, information extraction, and image captioning. While fm are not tied to a particular model or application, language-related models, i.e., llm, are currently one of the most common subfields of fm; they rely on the principle of pretraining large models over a large-scale corpus. Such pretrained large models, e.g., bert <cit.> and gpt <cit.>, can be further fine-tuned for various downstream tasks and hence avoid the cost of retraining large models from scratch in the new domains.
§.§ Related Work
Focusing on text generation-related tasks, language models trained on large corpora can successfully understand natural language and create human-like language responses according to the specific task. Several domain-specific variations of well-known pretrained language models have been presented in the literature to demonstrate the opportunities that can be obtained from domain-specific fine-tuning and retraining. In <cit.>, the authors proposed SCIBERT, a BERT-based language model that is fine-tuned to the scientific domain, where it was trained over a corpus of scientific publications. The authors in <cit.> considered fine-tuning the BERT model using the Google Patents Public Datasets to perform patent classification. Furthermore, a generative language model, based on multiple-choice question answering, is fine-tuned using social commenting platforms in <cit.>, in order to realize zero-shot text classification. From a different perspective, the authors in <cit.> proposed a Universal Language Model Fine-tuning (ULMFiT) approach for fine-tuning large generative models for enhanced text classification. The proposed scheme in <cit.> demonstrated an error reduction of 18-24% for up to six-class text classification when tested on general Wikipedia articles. Cross-domain sentiment analysis through fine-tuning BERT and XLNet models is proposed in <cit.>, in which the fine-tuned models showed promising results with a smaller amount of data. The authors in <cit.> explored several active learning strategies to adapt the bert model to a customer-transactions application, classifying transactions into different market-related categories for an improved understanding of market demands. Targeting a different domain, the work in <cit.> presents BertAA, a framework for bert fine-tuning for authorship classification purposes, in which public datasets, e.g., IMDb, are utilized to refine the bert model and enable it to extract the characteristics of authors' identities from the provided text. The proposed work in <cit.> showed a 5.3% improvement in the authorship attribution task. From a language perspective, multi-lingual and single-lingual frameworks have been presented in the literature to fine-tune/retrain a pre-trained bert model in order to allow the model to deal with different languages, e.g., Chinese <cit.>, Russian <cit.>, and Arabic <cit.>. The results presented in <cit.>-<cit.> demonstrated the robustness of bert as a large model for different languages. For healthcare applications, the authors in <cit.> provided a framework for disease name recognition, where a bert model, fine-tuned using data pertinent to disease knowledge, demonstrated an enhanced performance compared to the literature.
§.§ Contributions
While the field of domain-specific fine-tuning of large generative models is very active and several contributions have been presented for different domains, the telecom domain is still almost untouched. We strongly believe that adapting various large generative models to the telecom domain is a key building block in the development of self-evolving networks, where such models can play an essential role through the different stages of designing, building, and operating wireless networks. The advantages of large Telecom language models are envisioned to be particularly important with the rise of the generative agents paradigm <cit.>, in which llm implemented in Telecom networks will require a comprehensive understanding of Telecom terminology and its relationship with different network operational and configuration functions, in order to enable them to communicate meaningfully and to perform Telecom-specific downstream tasks when implemented in future wireless networks.
Within this context, in <cit.>, the authors have focused on adapting a BERT-like model to the telecom domain, where the considered model is pretrained/fine-tuned in order to perform a question-answering downstream task within the telecom domain. Note that the work in <cit.> is constrained by the small dataset used (a few hundred technical documents and web articles), which was prepared manually as follows: the data were acquired from technical specification files of the 3gpp and collected from 347 telecom-related documents, resulting in only 2,021 question-answer pairs. It is worth noting that the dataset used in <cit.> is not publicly available. For enabling a holistic understanding of the Telecom language, a comprehensive dataset comprising a wide range of technical discussions pertinent to different network operational, configuration, and design parameters needs to be generated and used in the pretraining/fine-tuning process. Motivated by this, in this paper, we develop a framework for adapting pretrained generative models, including bert, DistilBERT, RoBERTa, and GPT-2 models, to the Telecom domain, by exploiting a large number of technical documents that consist of technical specifications from the 3gpp standard. Among different language models, the selection of the considered models is motivated by the fact that they generate a contextual representation for each word, while considering previous and following words, rendering them well-suited for technical text classification.
The main contributions of our work are summarized as follows:
* Create a large annotated Telecom dataset from the 3gpp technical specifications of various wg, including technical content pertinent to rf spectrum usage, network architecture, radio interface protocols, signaling procedures, mobility management, system interfaces, security, qos, network management, routing, switching, and control functions.
* Adapt the pre-trained BERT, DistilBERT, RoBERTa, and GPT-2 models to the Telecom domain by fine-tuning them for 3gpp tdoc text classification. The fine-tuned models make it possible to assign a particular technical text to a 3gpp cellular architecture category, i.e., ran, sa, or ct, and to identify the wg corresponding to each category.
The remainder of the paper is organized as follows. In Sec. <ref> we detail the developed approach to adapt the pre-trained models to the Telecom domain, including solutions for data collection, data pre-processing, and model fine-tuning. Experimental results with a performance analysis are discussed in Sec. <ref>. Finally, the paper is concluded in Sec. <ref>.
§ METHOD
§.§ llm for Telecom Language Classification
In this work, we use the bert, DistilBERT, RoBERTa, and GPT-2 language models, which are trained on large amounts of unlabeled textual data using self-supervised or contrastive learning <cit.>. These models can be adapted to various downstream tasks via fine-tuning. Specifically, the architecture of BERT and its variants allows them to understand the context and meaning of words in a sentence by taking into account the surrounding words on both sides of the target word. This bidirectional approach helps the pre-trained model capture more complex relationships between words and their contextual meaning, making it a powerful tool for text classification.
The following models are implemented in our work: 1) Pretrained BERT-Base (uncased): contains 12 layers, 768 hidden units, 12 self-attention heads, and 110M parameters; 2) DistilBERT: a lighter version of BERT-Base (uncased) with 40% fewer parameters, which is particularly useful for wireless networks with constrained resources; 3) RoBERTa: has the same architecture as BERT, but a byte-level bpe tokenizer is used, which operates at the byte level instead of the traditional character or subword levels; 4) GPT-2: the smallest version, with 12 layers, 768 hidden units, 12 self-attention heads, and 124M parameters. A linear classification layer with a SoftMax function is added to the pre-trained models to produce the wg labels.
In order to adapt the selected models into the desired Telecom domain for the downstream task of text classification, we consider the single-task single-label fine-tuning approach <cit.> for 3gpp tdoc classification. A cross entropy loss function is used to update the pre-trained model weights. For efficient fine-tuning, we employed a batch size of 32, ensuring a balance between computational efficiency and memory requirements. The learning rate was set to 2e-5, enabling gradual convergence to an optimal solution. To prevent overfitting, we applied L2 regularization with a rate of 0.01. Also, F1 score is considered to evaluate the performance of the tuned models.
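As an illustration of this fine-tuning setup, a minimal sketch using the Hugging Face transformers library is given below. The CSV file names, the label subset, the maximum sequence length, and the number of epochs are illustrative assumptions rather than our exact configuration; only the batch size, learning rate, and regularization rate follow the values stated above.

```python
# Sketch of single-label fine-tuning with Hugging Face transformers.
# The CSV files (assumed to have "text" and integer "label" columns), the label
# subset, the sequence length, and the number of epochs are placeholders.
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

labels = ["RAN1", "RAN2", "RAN3", "SA1", "SA2", "CT1"]  # placeholder subset of wg labels
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(labels))  # linear head trained with cross-entropy

data = load_dataset("csv", data_files={"train": "tdoc_train.csv",
                                       "validation": "tdoc_val.csv"})
data = data.map(lambda batch: tokenizer(batch["text"], truncation=True,
                                        padding="max_length", max_length=256),
                batched=True)

args = TrainingArguments(
    output_dir="telecom-bert",
    per_device_train_batch_size=32,  # batch size stated above
    learning_rate=2e-5,              # learning rate stated above
    weight_decay=0.01,               # regularization rate stated above
    num_train_epochs=3,              # assumed number of epochs
)
Trainer(model=model, args=args, train_dataset=data["train"],
        eval_dataset=data["validation"]).train()
```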
§.§ 3GPP Technical Document Dataset
3gpp is the main Standard Developing Organization (SDO) in the area of Telecommunication. The universal standards for 3G, 4G, and 5G have been developed by 3gpp since 1999. 3gpp works with tdoc contributed by companies during the development phase and produces technical specifications as a final output. The specification work is carried out in Technical Specification Groups (TSGs). There are three Technical Specification Groups: ran, sa, and ct. Each TSG consists of multiple wg focused on specific areas, ranging from radio access network specifications, core network specifications, service requirements and specifications, and architecture and protocols for mobile communication systems, to qos and performance requirements, security and privacy in mobile communication systems, interoperability and compatibility requirements, network management and operation, and testing and certification procedures. These topics are further divided into specific subtopics, and each tdoc file may focus on one or more of these areas. The content of tdoc files is typically technical and detailed, intended for experts and engineers involved in the development and implementation of mobile communication systems. Thus, the ability to classify a text into one of the wg requires a deep understanding of the functions and scope of each group.
In this paper, the technical documents are acquired from the 3GPP website. The collected files belong to the years 2009-2023 and include technical specifications produced by different wg, including RAN1, RAN2, RAN3, RAN4, RAN5, SA1, SA2, SA3, SA4, SA5, SA6, CT1, CT3, CT4, and CT6. The tdoc files are available as ZIP files, and accordingly, the Apache Tika application <cit.> is used to unzip and extract the information from the files. Table <ref> reports the size of the dataset acquired from the 3gpp wg, where documents belonging to the years 2009-2019 are used for training, while documents from the years 2020-2023 are used for testing.
§.§ Data Pre-Processing
We pre-process the 3GPP tdoc files via the following steps (a minimal code sketch of this pipeline is given after the list):
* Parse the HTML tags in the text and return the text content without any HTML tags using BeautifulSoup.
* Remove any URLs (web links) from the text: identify the regex pattern that matches URLs starting with either "http" or "https" and may include alphanumeric characters, special characters, and encoded characters.
* Remove tables from the parsed HTML document using BeautifulSoup.
* Divide each document into multiple text segments with different numbers of words, using the Natural Language Toolkit (NLTK) for tokenization. This allows us to evaluate the model's capability of understanding technical descriptions of different lengths.
* Remove headers, footers, captions, and pseudo codes, while ensuring each paragraph doesn't exceed a particular maximum length. Also, we eliminate the references section and all the text afterward.
* Remove cr, drafts, and templates due to their limited technical information.
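A minimal sketch of these pre-processing steps is shown below. The regex pattern, the references heuristic, and the default segment length are illustrative assumptions rather than the exact rules of our pipeline.

```python
# Sketch of the pre-processing steps above (illustrative heuristics, not the exact pipeline).
import re
from bs4 import BeautifulSoup
from nltk.tokenize import word_tokenize  # requires nltk.download("punkt")

URL_RE = re.compile(r"https?://\S+")  # web links starting with http or https

def clean_document(html: str, words_per_segment: int = 200):
    soup = BeautifulSoup(html, "html.parser")
    for table in soup.find_all("table"):          # drop tables
        table.decompose()
    text = soup.get_text(separator=" ")           # strip remaining HTML tags
    text = URL_RE.sub(" ", text)                  # remove URLs
    # Drop the references section and everything after it (simple heuristic).
    text = re.split(r"\bReferences\b", text, maxsplit=1)[0]
    # Split the document into fixed-size word segments.
    words = word_tokenize(text)
    return [" ".join(words[i:i + words_per_segment])
            for i in range(0, len(words), words_per_segment)]
```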
§ EXPERIMENT RESULTS AND DISCUSSIONS
In this section, we present experimental results to demonstrate the accuracy of the fine-tuned llm in understanding and classifying technical text within the Telecom domain. We split the 3gpp tdoc into training, validation, and test datasets. Specifically, the test set contains textual segments of tdocs from 2020 to 2023 (April). Two training/validation corpora are considered: 1) tdocs from 2010 to 2019 ('10-'19); and 2) tdocs from 2015 to 2019 ('15-'19), each split 80%/20% into training and validation sets. Unless stated otherwise, the number of words within a textual segment in the training, validation, and test sets is 200.
We start by comparing the performance of different LLMs fine-tuned with 3GPP files from 2015 to 2019 in terms of classification accuracy, as illustrated in Fig. <ref>. The selected models have the following sizes: BERT (117M), RoBERTa (125M), GPT-2 (124M), and DistilBERT (66M). When 100% of the files are considered, all models achieve relatively close accuracy, with the GPT-2 model showing the weakest performance. This can be attributed to the fact that text classification benefits from concise and interpretable predictive features rather than generative capabilities, the latter being the key element of GPT-2. Meanwhile, RoBERTa, the optimized version of BERT, demonstrates the strongest performance. It can further be noticed that the performance gap increases as the number of tdoc files decreases.
In Fig. <ref>, we evaluate the accuracy of prediction and the receiver operating characteristic - area under the curve (ROC-AUC) as a function of the portion of textual segments used for fine-tuning a bert model. We can observe that a bert model fine-tuned on 3gpp files from 2015 to 2019 can achieve an accuracy of around 80% even if only 20% of the text segments are used. Furthermore, although fine-tuning to the Telecom language is essential, tdoc files from recent years prove sufficient to provide the needed accuracy. On the other hand, when the number of files is relatively small (below 10%), the 2010-2015 data produces a better understanding of the Telecom technical language.
The roles of the 3GPP wg vary from one to another. Therefore, the structure of the files and especially the number of files available differ distinctly. For example, RAN1 and RAN2 contain many more files than the other wg, given that they are the main categories in RAN (specifying the PHY, MAC, RLC, and PDCP layers), and hence more activities pertinent to these layers are conducted within these two groups. To show the impact of different wg on the classification performance, the accuracy of a BERT model fine-tuned on 3GPP files is illustrated in Table <ref> for different combinations of wg. It can be noticed that the fine-tuned model achieves better classification accuracy for textual segments among RAN1, SA1, and CT1 than for the combination of RAN1, RAN2, and RAN3. This stems from the fact that tdoc files that belong to different category numbers but fall within the same TSG are highly correlated, and therefore the probability of error is higher. In contrast, technical files within different TSGs comprise relatively uncorrelated topics, and therefore the model has a higher capability to distinguish between these different TSGs. The results presented in Table <ref> reveal that the test accuracy is determined mainly by the documents of RAN1, RAN2, and RAN3.
In Fig. <ref> we evaluate the impact of the length of the technical text segments on the classification accuracy. This is a critical aspect to study, as it is important to know the minimum amount of text required by the tuned LLMs to accurately identify the technical groups. We set the maximum number of words for training and validation to 200 and vary the number of words during testing. We observe that the accuracy increases as the number of words grows. However, the marginal improvement diminishes as the number of words increases, indicating the significant role of selecting the optimum input size for a llm in order to strike a balance between performance and computational complexity.
§ CONCLUSION
Motivated by the promising potential of llm, in this paper we proposed a framework for 3GPP technical document identification, where we leveraged pre-trained language models, fine-tuned using 3GPP data, to allow the models to identify the 3GPP specification categories along with the corresponding working group. In more detail, we considered bert, DistilBERT, RoBERTa, and GPT-2 models, which are fine-tuned using 3GPP tdoc belonging to the TSGs, namely ran, sa, and ct. The obtained results demonstrate the applicability of adapting a pre-trained language model to the Telecom domain, with all fine-tuned models showing accurate classification performance under different scenarios. It is important to emphasize the significance of developing llm that are capable of understanding the Telecom language, as a cornerstone for enabling autonomous networks driven by intelligent generative agents.
|
http://arxiv.org/abs/2306.03988v1
|
20230606195002
|
Learn the Force We Can: Multi-Object Video Generation from Pixel-Level Interactions
|
[
"Aram Davtyan",
"Paolo Favaro"
] |
cs.CV
|
[
"cs.CV",
"cs.AI"
] |
Learn the Force We Can:
Multi-Object Video Generation from Pixel-Level Interactions
Aram Davtyan
University of Bern
Bern, Switzerland
[email protected]
Paolo Favaro
University of Bern
Bern, Switzerland
[email protected]
July 31, 2023
==========================================================================================================================================================
[Teaser figure: two rows of playable video snapshots labelled "Robot motion" and "Object motion". Caption: Examples of videos generated through controlled motions by YODA on the BAIR dataset. All videos are generated autoregressively by starting from a single image and then by providing control inputs in the form of 2D shifts (shown as red arrows superimposed on the frames). To play the videos in the first column on the left, view the paper with Acrobat Reader.]
We propose a novel unsupervised method to autoregressively generate videos from a single frame and a sparse motion input. Our trained model can generate realistic object-to-object interactions and separate the dynamics and the extents of multiple objects despite only observing them under correlated motion activities.
Key components in our method are the randomized conditioning scheme, the encoding of the input motion control, and the randomized and sparse sampling to break correlations.
Our model, which we call YODA, has the ability to move objects without physically touching them. We show both qualitatively and quantitatively that YODA accurately follows the user control, while yielding a video quality that is on par with or better than prior state-of-the-art video generation work on several datasets.
For videos, visit our project website <https://araachie.github.io/yoda>.
§ INTRODUCTION
According to Judea Pearl and Dana Mackenzie <cit.>, “a causal learner must master at least three distinct levels of cognitive ability: seeing, doing and imagining.”
The achievement of these levels seems to correlate well with the development of skills in organisms and with their chances of survival.
For instance, organisms that learned how to use a tool (an example of doing), have better means to defend themselves beyond their genetic endowments, to capture preys and to build other useful structures, such as shelters.
Among these levels, imagining is considered the most powerful capability.
Imagining is the ability to ask and answer counterfactual questions such as “What would happen if X had moved differently than how it did in the past?”
In this work, we aim at building models that can learn how to answer such counterfactual questions through the generation of video in a completely unsupervised fashion. Learning is allowed only through the passive observation of videos, i.e., without the ability to explicitly interact with the scene, and without any per-sample manual annotation.
We do not aim to learn causal relationships <cit.>, but rather we aim more conservatively to build a model that can show some degree of generalization from the training data, i.e., that can generate data that was not observed before, that is directly related to our question, and that is plausible, as shown in Figure <ref>. We focus on counterfactual questions that can be formulated in the form of a motion specification, i.e., “What would happen if the object containing a given pixel were moved by a given shift?”
The questions are posed by providing the current frame, context frames (a subset of the past frames), and a motion control, i.e., a shift at a pixel (but the model can also take multiple shifts at once and also at different time instants). The answer is a generated subsequent frame that shows how the input frame would change under the specified motion.
We train our model without specifying what objects are (i.e., we do not use information about object categories), or where they are (i.e., we do not rely on bounding boxes, landmarks/point annotations or segmentation masks), or how they interact (i.e., we do not make use of information on object relationships or action categories or textural descriptions of the scenes).
Given two subsequent frames (the current and following one) from a video in our training data, we obtain the motion control input by sampling the estimated optical flow at a few (typically 5) locations.
To generate frames we use flow matching <cit.>, where the model is conditioned on the current and past frames <cit.>, and feed the motion control input through cross-attention layers in a transformer architecture.
We call our approach YODA, as it shows the ability of moving objects without touching them.
As shown in Figure <ref>, an emerging property of our proposed approach is that the model learns the physical extent of objects in the scene without ever requiring explicit supervision for it. For example, although the control is applied to the handle of the brush in the second row, the motion is applied correctly to the whole brush, and to the whole brush only. A second learned property can be observed on the first row: When the robot arm is driven towards other objects, it interacts with them realistically. In the second row, we can observe a third remarkable capability that the model has learned. The generated video shows that we can directly rotate a single object, without using the robot arm to do so. This is a video that has never been observed in the dataset (all objects are moved directly by the arm or indirectly via other objects).
It does demonstrate empirically that the model has the ability to imagine novel plausible outcomes when the reality is modified in ways that were not observed before.
Our contributions can be summarized as follows
* We introduce a model for controllable video synthesis that is trained in a completely unsupervised fashion, is not domain-specific, and can scale up to large datasets;
* We introduce an effective way to embed motion information and to feed it to the model, and show analysis to understand the impact of sampling and the use of sparsity of the motion field;
* We demonstrate for the first time multi-object interactions in the unsupervised setting on real data, which has not been shown in other state of the art methods <cit.>.
§ PRIOR WORK
Video generation. An increased interest in video generation has followed the success of generative models for images <cit.>. In contrast to image generation, video generation is plagued by problems such as rendering realistic motion, capturing diversity (, modeling the stochasticity of the future outcomes) and, most importantly, managing the high computational and storage requirements. Conventional approaches to video generation are autoregressive RNN-based models <cit.>. RNNs are expected to generate sequences with a consistent motion, because of the conditioning on the previously generated frames. Other models instead obtain consistency through the direct generation of a predefined number of frames <cit.>. Variability (stochasticity) of the generated sequences has been tackled with GANs <cit.>, variational approaches <cit.>,
Transformers <cit.> and diffusion-based approaches <cit.>. The recently proposed autoregressive method RIVER <cit.> deals with the stochastic nature of the generative process through flow matching <cit.>. Among all above approaches, uses RIVER as a backbone, because of its efficiency and ease of training.
Controllable video generation work mostly differs in the nature of the control signals. Control can be defined per frame <cit.>, or as a global label <cit.>. Some of them are obtained via supervision <cit.>, or discovered in an unsupervised manner <cit.>. For instance, CADDY <cit.> learns a discrete action code of the agent that moves in the videos. Another model, GLASS <cit.> decouples the actions into global and local ones, where global actions, as in , are represented with 2D shifts, while local actions are discrete action codes, as in <cit.>. In <cit.> the authors explicitly separate the foreground agent from the background and condition the generation on the transformations of the segmentation mask. However, all these models are restricted to single agent videos, while successfully models multiple objects and their interactions. <cit.> are the most similar works to ours as they specify motion control at the pixel-level. However, <cit.> leverages a pre-trained object detector to obtain the ground truth control. <cit.> is based on warping and therefore does not incorporate memory to model long-range consequences of actions. II2V <cit.> uses a hierarchical RNN to allow modeling higher-order details, but focuses on deterministic prediction. iPOKE <cit.> aims to model stochasticity via a conditional invertible neural network, but has to sacrifice the ability to generate long videos and to intervene into the generation process at any timestamp. None of those works has demonstrated controllable video generation on multi-object real scenes.
Multi-object scenes and interactions. Modeling multiple objects in videos and especially their interactions is an extremely difficult task. It either requires expensive human annotations <cit.> or is still limited to simple synthetic scenes <cit.>. <cit.> allow for multi-agent control, but leverage ground truth bounding boxes during the training.
YODA, in turn, is an autoregressive generative model for controllable video generation from sparse motion controls that i) efficiently takes memory into account to simulate long-range outcomes of the actions, ii) models stochasticity of the future, iii) does not require human annotation for obtaining the control signal(s), and iv) demonstrates controllability on a complex multi-object real dataset.
§ TRAINING
We denote with 𝐱 = {x^1, …, x^N} an RGB video that contains N frames, where x^i ∈ℝ^3× H× W, i = 1, …, N, and H and W are the height and the width of the frames respectively. The goal is to build a controllable video prediction method that allows us to manipulate separate objects in the scene. We formulate this goal as that of approximating a sampler from the following conditional distribution
p(x^k + 1 | x^k, x^k-1, …, x^1, a^k) ,
for k<N and where a^k denotes the motion control input. a^k specifies the desired shifts at a set of pixels (including the special cases with a single pixel or none). Our ultimate objective is to ensure through training that this control implicitly defines the shift(s) for the object(s) containing the selected pixel(s).
The conditioning in eq. (<ref>) allows an autoregressive generative process at inference time, where the next frame x^k+1 in a generated video is sampled conditioned on the current frame x^k, the previously generated frames x^k-1, …, x^1 and the current control a^k. To model the conditional distribution in eq. (<ref>), we use RIVER <cit.>. This is a recently proposed video prediction method based on conditional flow matching <cit.>.
We chose RIVER due to its simplicity and training efficiency compared to conventional RNNs <cit.> and Transformers <cit.> for video prediction.
For completeness, we briefly introduce Flow matching and RIVER in section <ref>. In section <ref> we show how the latter is adapted to handle control. In section <ref>, we focus on how the control signals are obtained and encoded.
§.§ Preliminaries: Flow Matching and RIVER
Flow matching <cit.> was introduced as a simpler, more general and more efficient alternative to diffusion models <cit.>.
The goal is to build an approximate sampler from the unknown data distribution q(y), given a training set of samples of y. This is formalized as a continuous normalizing flow <cit.> via the following ordinary differential equation
ϕ̇_t(y) = v_t(ϕ_t(y))
ϕ_0(y) = y.
Eq. (<ref>) defines a flow ϕ_t(y): [0, 1] ×ℝ^d →ℝ^d that pushes p_0(y) = N(y | 0, 1) towards the distribution p_1(y) ≈ q(y) along the vector field v_t(y): [0, 1] ×ℝ^d →ℝ^d.
Remarkably, <cit.> shows that one can obtain v_t(y) by solving
min_v_t 𝔼_t, p_t(y | y_1), q(y_1) L(θ),
with L(θ) = ‖v_t(y) - u_t(y | y_1)‖^2,
where one can explicitly define the vector field u_t(y | y_1) and its corresponding probability density path
p_t(y | y_1), with y_1 ∼ q(y). A particularly simple choice <cit.> is the Gaussian probability path p_t(y | y_1) = N(y | μ_t(y_1), σ^2_t(y_1)), with μ_0(y_1) = 0, μ_1(y_1) = y_1, σ_0(y_1) = 1, σ_1(y_1) = σ_min. The corresponding target vector field is then given by
u_t(y | y_1) = (y_1 - (1 - σ_min) y)/(1 - (1 - σ_min) t).
Sampling from the learned model can be obtained by first sampling y_0 ∼ N(y | 0, 1) and then by numerically solving eq. (<ref>) to obtain y_1 = ϕ_1(y_0).
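As a sketch of this sampling procedure, the loop below integrates eq. (<ref>) with a fixed-step Euler scheme; the vector_field callable stands for the learned network v_t, and the step count is an arbitrary assumption (any ODE solver can be substituted).

```python
# Sketch: sampling by integrating the learned vector field with Euler steps.
# `vector_field(y, t)` is a placeholder for the trained network v_t.
import torch

@torch.no_grad()
def sample(vector_field, shape, steps: int = 100, device: str = "cpu"):
    y = torch.randn(shape, device=device)        # y_0 ~ N(0, I)
    dt = 1.0 / steps
    for i in range(steps):
        t = torch.full((shape[0],), i * dt, device=device)
        y = y + dt * vector_field(y, t)          # Euler step along dy/dt = v_t(y)
    return y                                     # approximates y_1 = phi_1(y_0)
```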
RIVER <cit.> is an extension of the above procedure to the video prediction task with a computationally efficient conditioning scheme on past frames.
The training objective of RIVER is given by
L_R(θ) = ‖v_t(x | x^τ-1, x^c, τ - c ; θ) - u_t(x | x^τ)‖^2,
where v_t is a network with parameters θ, x^τ is a frame randomly sampled from the training video, c is an index randomly sampled uniformly in the range {1, …, τ - 2} and u_t is calculated with eq. (<ref>). An additional information provided to v_t is the time interval τ-c between the target frame x^τ and the past frame x^c, which we call the context frame.
At test time, during the integration of eq. (<ref>), a new context frame x^c is sampled at each step t. This procedure enables to condition the generation of the next frame on the whole past. To further speed up the training and enable high-resolution video synthesis, RIVER works in the latent space of a pretrained VQGAN <cit.>. That is, instead of x^τ, x^τ - 1, x^c in eq. (<ref>) one should write z^τ, z^τ - 1, z^c, where z^i is the VQ latent code of the i-th frame <cit.>. Since the use of VQGAN encoding is an optional and separate procedure, we simply use x in our notation.
§.§ Learning to Master the Force
We now show how to incorporate control into eq. (<ref>) to build a sampler for the conditioning probability in eq. (<ref>). To do so, we make v_t depend on a^τ - 1, which is the motion control at time τ - 1, as shown in the following objective
L_F(θ) = ‖v_t(x | x^τ-1, x^c, τ - c, a^τ - 1 ; θ) - u_t(x | x^τ)‖^2.
In practice, we implement this conditioning by substituting the bottleneck in the self-attention layers of the U-ViT <cit.> architecture of RIVER with cross-attention blocks (see Figure <ref>). The control inputs a^τ - 1 are obtained by splitting the image domain into a grid of tiles, so that a motion control can be specified in each tile via a code, and then be fed as keys and values to the cross-attention layer. More details on a^τ-1 will be provided in the next section.
Inspired by the classifier-free guidance for diffusion models <cit.>, we propose to switch off the conditioning on both the context and motion control at every iteration, with some probability π (see Algorithm <ref>). This is done by substituting the code corresponding to a switched off context or motion control with noise. A typical value for π in our experiments is 0.5. This serves two purposes: 1) it yields a stronger model that can effectively take the conditioning into account and 2) it yields a model that can generate a video by starting from a single frame. To see 1), consider that the conditioning on both the context frame x^c and the control a^τ - 1 is redundant when the future frame can be reliably predicted given only one of the two. In these cases, the model might learn to ignore one of the inputs.
To see 2), consider that when the model generates the first predicted frame, there is no valid context frame and our procedure allows us to replace the context frame with noise. Otherwise, one would have to duplicate the first frame, for example, but this would result in an undesirable training bias.
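A minimal sketch of this randomized conditioning step is given below; whether the two inputs are switched off jointly or independently is a detail of Algorithm <ref> not reproduced here, so the sketch simply drops each one independently with probability π.

```python
# Sketch: randomly replacing the context and control codes with noise (probability pi each).
import torch

def randomize_conditioning(context_code, control_code, pi: float = 0.5):
    if torch.rand(()) < pi:                       # switch off the context frame
        context_code = torch.randn_like(context_code)
    if torch.rand(()) < pi:                       # switch off the motion control
        control_code = torch.randn_like(control_code)
    return context_code, control_code
```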
§.§ Force Embeddings
Ideally, a^τ-1 could encode detailed motion information for the objects in the scene. For instance, a^τ-1 could describe that an object is rotating or pressing against another object or walking (in the case of a person). Such supervision could potentially provide the ability to control the video generation in detail and to generalize well to unseen object motion combinations. However, obtaining such ground truth control signals requires costly large-scale manual annotation. Similarly to <cit.>, we avoid such costs by leveraging optical flow. Essentially, instead of using a costly and detailed motion representation, we use a simpler one that can be computed automatically and at a large scale.
Given an optical flow w^τ∈ℝ^2× H × W between the frames x^τ and x^τ + 1 (obtained with a pretrained RAFT <cit.>), we define a probability density function p(i, j) ∝‖w^τ_ij‖^2 over the image domain Ω, with (i,j)∈Ω. Then, we randomly sample a sparse set 𝒮⊂Ω of n_c=|𝒮| pixel locations from p.
This distribution makes it more likely that pixels of moving objects will be selected. However, in contrast to <cit.> we do not introduce additional restrictions to the sampling or explicitly define the background. This is an essential difference, because in multi-object scenes, objects that belong to the background in one video might be moving in another. Thus, in our case, one cannot use the magnitude of the optical flow to separate objects from the background in each video.
To condition the video generation on the selected optical flow vectors, we introduce an encoding procedure. First, we construct a binary mask m ∈{0, 1}^1 × H × W such that m_ij = 1, ∀ (i, j) ∈ S and m_ij = 0 otherwise. This mask is concatenated in the channel dimension with m ⊙ w^τ to form a tensor w̃^τ of shape (3, H, W), which we refer to as the sparse optical flow. w̃^τ is further tiled into a 16 × 16 grid. Each of these tiles is independently projected through an MLP and augmented with a learnable positional encoding to output a code (see Figure <ref>). This particular design of the optical flow encoder is a trade-off between having a restricted receptive field (because each tile is processed independently) and efficiency. A small receptive field is needed to ensure that separate controls minimally interact before being fed to the cross-attention layers (see Figure <ref>). We found that this is crucial to enable the independent manipulation of separate objects.
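The sketch below illustrates the two stages just described: sampling the sparse control vectors from the flow-magnitude distribution and encoding the 16 × 16 grid of tiles with an MLP applied to each tile independently. The layer widths and image resolution are assumptions rather than the exact architecture.

```python
# Sketch: sampling sparse control vectors from optical flow and encoding them per tile.
import torch
import torch.nn as nn

def sparse_flow(flow, n_c: int = 5):
    """flow: (2, H, W) optical flow. Returns the (3, H, W) sparse flow tensor."""
    H, W = flow.shape[1:]
    probs = flow.pow(2).sum(0).flatten()                 # p(i, j) proportional to flow magnitude
    idx = torch.multinomial(probs / probs.sum(), n_c)    # sample n_c pixel locations
    mask = torch.zeros(1, H, W)
    mask.view(-1)[idx] = 1.0
    return torch.cat([mask, mask * flow], dim=0)         # concat mask and masked flow

class TileEncoder(nn.Module):
    """Encode each tile of a 16x16 grid independently (restricted receptive field)."""
    def __init__(self, H: int = 256, W: int = 256, dim: int = 256, grid: int = 16):
        super().__init__()
        self.grid = grid
        tile_numel = 3 * (H // grid) * (W // grid)
        self.mlp = nn.Sequential(nn.Linear(tile_numel, dim), nn.GELU(), nn.Linear(dim, dim))
        self.pos = nn.Parameter(torch.zeros(grid * grid, dim))  # learnable positional encoding

    def forward(self, w_sparse):                          # (B, 3, H, W)
        B, C, H, W = w_sparse.shape
        g, th, tw = self.grid, H // self.grid, W // self.grid
        tiles = w_sparse.reshape(B, C, g, th, g, tw).permute(0, 2, 4, 1, 3, 5)
        tiles = tiles.reshape(B, g * g, C * th * tw)      # one feature vector per tile
        return self.mlp(tiles) + self.pos                 # keys/values for cross-attention
```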
§ EXPERIMENTS
In this section, we evaluate to see how controllable the video generation is, i.e., how much the generated object motion correlates with the input motion control (see next section), and to assess the image and sequence quality on three datasets with different scene and texture complexities, as well as different object dynamics. In the latter case, we report several video quality metrics, such as FVD <cit.> and average LPIPS <cit.>, PSNR, SSIM <cit.> and FID <cit.>.
Implementation and training details are provided in the supplementary material.
§.§ Evaluation of Intended vs Generated Motion
The objective of our training is to build a model to generate videos that can be controlled by specifying motion through a^τ-1 as a set of shifts at some user-chosen pixels. To evaluate how much the trained model follows the intended control, we introduce the following metrics: Local and global errors.
To compute them, we sample an image from the test videos. Then, we randomly select one object in the scene and apply a random motion control input to a pixel of that object. Because the generated images are of high-quality, we can use a pre-trained optical flow estimation model <cit.> to calculate the optical flow between the first image and the generated one. In principle, one could measure the discrepancy between the input motion shift and the generated one at the same pixel. However, since single pixel measurements are too noisy to use, we assume that all pixels within a small neighborhood around the selected pixel move in the same way, and then average the estimated optical flow within that neighborhood to compare it to the input control vector (depending on what we focus on, we use the relative L_2 norm or a cosine similarity). We call this metric the local error (see Figure <ref> on the left in blue).
Notice that in some cases, the model could use the motion input to generate a rotated object, instead of a translated one. Also, the chosen neighborhood (whose size is fixed) may not fully cover only the object of interest. These issues make this metric quite noisy.
Nonetheless, it still provides a useful approximation of the average response of the trained model to the control inputs.
We also assume that the generated motion should be zero sufficiently far away from where the motion control is applied, although in general a local motion could cause another motion far away. To assess this, we calculate the average L_2 norm of the estimated optical flow outside some neighborhood of the controlled pixel. We call this metric the global error (see Figure <ref> on the right in orange). This measurement tells us the spatial extent of the learned motion control correlations.
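A sketch of both metrics is given below; the neighborhood radius and the use of a relative L_2 distance for the local error are configurable assumptions.

```python
# Sketch: local and global control-error metrics from an estimated optical flow field.
import torch

def control_errors(flow, pixel, control, radius: int = 10):
    """flow: (2, H, W) estimated flow; pixel: (i, j) controlled location; control: (2,) input shift."""
    H, W = flow.shape[1:]
    ii, jj = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    inside = ((ii - pixel[0]) ** 2 + (jj - pixel[1]) ** 2) <= radius ** 2

    # Local error: relative L2 distance between the input shift and the mean flow nearby.
    local_flow = flow[:, inside].mean(dim=1)
    local_err = torch.norm(local_flow - control) / (torch.norm(control) + 1e-8)

    # Global error: average flow magnitude far away from the controlled pixel.
    global_err = flow[:, ~inside].norm(dim=0).mean()
    return local_err, global_err
```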
§.§ Datasets
We evaluate YODA on the following three datasets:
CLEVRER <cit.> is a dataset containing 10K training and 1000 test videos capturing a synthetic scene with multiple simple objects interacting with each other through collisions. We cropped and downsampled the videos to 128× 128 resolution. On CLEVRER we show the ability of our model to model complex cascading interactions and also to learn the control of long-term motions (i.e., motions that once started at one frame can last for several frames).
BAIR <cit.> is a real dataset containing around 44K videos of a robot arm pushing toys on a flat square table. The resolution of these videos is 256× 256 pixels. The visual complexity of BAIR is much higher than that of CLEVRER. The objects on the table are diverse and include non-rigid objects, such as stuffed toys, that have different physical properties. However, in contrast to CLEVRER, its interactions are simpler and do not require modeling long-term dynamics.
iPER <cit.> captures 30 humans with diverse styles performing various movements of varying complexity. The official train/test split separates the dataset into 164 training and 42 test clips. Although our main focus is to work with multi-object datasets, we use this dataset for two reasons: 1) We can test how YODA learns to control articulated objects, such as humans; 2) We can compare to the related work iPOKE <cit.>, which has already been tested on this dataset (and is not designed for multi-object datasets).
§.§ Ablations
Sparse optical flow encoder. First, we show the importance of using a sparse optical flow encoder with a restricted receptive field. We observe that such an encoder is essential for the model to learn to independently control different objects in the scene. We manually annotated 128 images from the test set of the BAIR dataset. For each image we store a list of pixel coordinates that belong to objects in the scene, 1-2 pixels per object, 3 pixels for the robot arm. We use the local error to compare our encoder with a convolutional sparse optical flow encoder from <cit.>. In Figure <ref>, we show that our restricted receptive field optical flow encoder outperforms the convolutional one.
Randomized conditioning. In Table <ref>, we demonstrate the importance of our randomized conditioning scheme, where we randomly switch off the conditioning with respect to both the past frame and the control input with probability π. The comparisons on the CLEVRER dataset show that randomized conditioning is crucial for the single frame quality as well as for temporal consistency (see the FVD metric).
Number of control inputs during the training. n_c plays an important role in enabling the independent control over separate objects. In principle, responses to the control should be more and more decorrelated as we increase the number of control inputs.
However, values of n_c that are too large would increase the gap between the training and the test conditions, where often only 1 control is used. At the same time, using fewer optical flow vectors during the training stimulates the network to learn interactions (i.e., correlations with other objects). Therefore, we observe a trade-off between controllability and learning correct dynamics in the choice of n_c (see Figure <ref>). We choose n_c=5 in the remaining experiments.
§.§ Quantitative results
Realism and motion consistency. Following <cit.>, we train on BAIR and then autoregressively generate 29 frames given the first frame and a set of controls at each generation step, decoded from the ground truth videos (in our case from the corresponding optical flows). We report the metrics for 1 and 5 control vectors at each timestamp and show that the 5 controls version outperforms all prior work (see Table <ref>). Moreover, notice that prior work on controllable generation on BAIR only focuses on modeling the actions of the robot arm, while YODA is also able to effectively control other objects in the scene.
We also compare our model on the benchmark introduced in <cit.> and generate 9 frames starting from a single initial frame. Although our model does not outperform <cit.>, it does better than all the other prior work, as indicated by our FVD metric (see Table <ref>). One should also notice that YODA does not make use of the same information used in iPOKE <cit.> (e.g., what a background is).
Scene dynamics and interactions. We chose the CLEVRER dataset to assess how well YODA models the physics of the scene and object interactions. For this purpose, we generate 15 frames starting from a single frame and a set of control vectors. This time we do not specify future controls like in BAIR and let the model simulate the learned physics. We repeat the experiment with 1 and 5 control vectors. The metrics in Table <ref> show that YODA models the internal object dynamics well, which is further supported by the qualitative results in section <ref>.
Controllability. We assess how robust YODA is to changes in the control parameters, such as the direction and the magnitude of the control vectors. We manually annotated 128 images from the CLEVRER dataset by indicating 1 potential control point per object. We then sample some points and random control vectors from the annotated ones and feed those to the model to generate the next frame. We calculate the error between the control vectors and the average computed optical flow <cit.> in the neighborhood of the interacted pixel. We use the cosine distance between the normalized vectors as an indicator of the accuracy of the control. We show how this metric changes with the parameters of the control in Figure <ref>.
§.§ Qualitative results
In this section we provide some visual examples of the generated sequences. On the BAIR dataset, we highlight the capability of YODA to move, rotate and deform separate objects without the robot arm physically touching them. Figures <ref> and <ref> show different object manipulations on the BAIR test set.
Notice how the model has learned also the 3D representations and interactions of the objects.
Figure <ref> shows the diversity of generated videos that share the same initial frame, but use different control signals. Notice the high correlation between the intended and generated motions and the ability of the model to correctly predict the interactions between the colliding objects.
Since our model is based on Transformers <cit.>, we report the attention maps from the last layer of the network between the interacted location and the rest of the image (see middle column in Figure <ref>). These attention maps often correspond to coarse segmentations of the controlled objects.
Finally, we provide some control sequences on the iPER dataset to show that simulates realistic motions of humans (see Figure <ref>).
More qualitative results, including possible applications of , are in the supplementary materials.
§ CONCLUSION
In this paper we introduced YODA, a novel method for controllable video generation from sparse motion input. Our experimental evaluation demonstrates the ability of YODA to generate realistic multi-object videos, which involves learning the extents and interactions between multiple objects despite only passively observing their correlated motions.
Acknowledgements
This work was supported by grant 188690 of the Swiss National Science Foundation.
§ ARCHITECTURE AND TRAINING DETAILS
Autoencoder. We trained a VQGAN <cit.> autoencoder per dataset, using the official repository.[<https://github.com/CompVis/taming-transformers>] The configurations of the VQGANs can be found in Table <ref>. Notice that we did not use a discriminator for the CLEVRER <cit.> dataset.
Sparse optical flow encoder. The 16× 16 tiled grid of sparse optical flow inputs is reshaped and linearly projected to 256 256-dimensional vectors that are fed into 5 subsequent blocks of (batch normalization <cit.>, fully-connected layer and gelu activation <cit.>). The activation is omitted in the last block. This procedure results into a representation of the control input as 256 256-dimensional vectors.
Training. All models are trained for 300K iterations with the AdamW <cit.> optimizer with the base learning rate equal to 10^-4 and weight decay 5· 10^-6. A learning rate linear warmup for 5K iterations is applied as well as a square root decay schedule.
For the CLEVRER <cit.> dataset, following <cit.>, random color jittering is additionally used to prevent overfitting. For the iPER <cit.> dataset, a random horizontal flip and a random time reversal is applied to prevent overfitting to the pose of the human.
As suggested in <cit.>, we used σ_min = 10^-7 in the flow matching loss <cit.> in all the experiments. In the final models, we used n_c=5 optical flow vectors for the control input.
§ QUALITATIVE RESULTS
In this section we provide more qualitative results with YODA. For videos, please visit our project's website <https://araachie.github.io/yoda>. Figures <ref> and <ref> contain some selected sequences to demonstrate the quality of separate frames, which is not visible in the videos on the webpage due to gif compression. Figure <ref> contains more object manipulation scenarios on the BAIR <cit.> dataset. Figure <ref> provides generated videos capturing long-range consequences of the input controls on the CLEVRER <cit.> dataset. Notice the ability of YODA to realistically model the physics of the scene. YODA was designed to make it possible to intervene into the generation process at any timestamp, which allows communicating new impulses to the objects on the fly.
§ DEMO
To ease the access to our results, we designed a demo of YODA. The user can load the pretrained models and directly interact with the scene by dragging objects with the mouse. The model responds by generating a subsequent frame taking into account the controls specified by the user as well as the previously generated images (see the video on the website). The demo and the code will be released to the public.
§ APPLICATIONS
A pretrained YODA generates realistic responses to control. This opens up the opportunity to interact with the scene in a counterfactual way, which makes it possible to apply YODA to downstream tasks. In theory, by optimizing the control inputs to match a given video, one can solve planning or compress the video to a single frame and a sequence of controls. Here we discuss another possible application, object segmentation.
Given an image x of a scene with multiple objects and a YODA model trained on videos capturing that scene, the user selects a pixel on the object of interest. A random control is then applied to that location to generate the response x' with YODA. The optical flow field w between x and x' is calculated. All the vectors in w are compared with the input control using some similarity measure (can be l_2 or cosine distance). We then apply a threshold to the result to obtain the segmentation mask. For robustness, this procedure can be repeated multiple times and the union of the resulting masks can be used as the final estimator of the segmentation mask. For an example, see Figure <ref>.
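A sketch of this segmentation-by-probing procedure is shown below; generate_next_frame and estimate_flow are placeholder interfaces (e.g., wrappers around the trained model and RAFT), and the probe magnitude and similarity threshold are illustrative assumptions.

```python
# Sketch: unsupervised object segmentation by probing the scene with a control vector.
# `generate_next_frame` and `estimate_flow` are assumed interfaces, not a released API.
import torch
import torch.nn.functional as F

def segment_object(generate_next_frame, estimate_flow, image, pixel,
                   n_probes: int = 5, threshold: float = 0.8):
    H, W = image.shape[-2:]
    mask = torch.zeros(H, W, dtype=torch.bool)
    for _ in range(n_probes):
        control = torch.randn(2) * 10.0                      # random shift applied at `pixel`
        next_frame = generate_next_frame(image, pixel, control)
        flow = estimate_flow(image, next_frame)              # (2, H, W)
        # Cosine similarity between every flow vector and the applied control.
        sim = F.cosine_similarity(flow, control.view(2, 1, 1).expand(2, H, W), dim=0)
        mask |= sim > threshold                              # union over repeated probes
    return mask
```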
|
http://arxiv.org/abs/2306.08974v1
|
20230615091148
|
Algorithmic Cluster Expansions for Quantum Problems
|
[
"Ryan L. Mann",
"Romy M. Minko"
] |
quant-ph
|
[
"quant-ph",
"cs.CC",
"cs.DS",
"math.CO"
] |
[email protected]
http://www.ryanmann.org
Centre for Quantum Computation and Communication Technology, Centre for Quantum Software and Information, School of Computer Science, Faculty of Engineering & Information Technology, University of Technology Sydney, NSW 2007, Australia
School of Mathematics, University of Bristol, Bristol, BS8 1UG, United Kingdom
School of Mathematics, University of Bristol, Bristol, BS8 1UG, United Kingdom
We establish a general framework for developing approximation algorithms for a class of counting problems. Our framework is based on the cluster expansion of abstract polymer models formalism of Kotecký and Preiss. We apply our framework to obtain efficient algorithms for (1) approximating probability amplitudes of a class of quantum circuits close to the identity, (2) approximating expectation values of a class of quantum circuits with operators close to the identity, (3) approximating partition functions of a class of quantum spin systems at high temperature, and (4) approximating thermal expectation values of a class of quantum spin systems at high temperature with positive-semidefinite operators. Further, we obtain hardness of approximation results for approximating probability amplitudes of quantum circuits and partition functions of quantum spin systems. This establishes a computational complexity transition for these problems and shows that our algorithmic conditions are optimal under complexity-theoretic assumptions. Finally, we show that our algorithmic condition is almost optimal for expectation values and optimal for thermal expectation values in the sense of zero freeness.
Algorithmic Cluster Expansions for Quantum Problems
Romy M. Minko
July 31, 2023
===================================================
§ INTRODUCTION
The classification of the computational complexity of quantum problems is important for understanding the capabilities and limitations of quantum computing. These problems include the computation of probability amplitudes, expectation values, partition functions, and thermal expectation values. In this paper we consider the classification of such problems in the sense of approximate counting. We establish a general framework for developing approximation algorithms and hardness of approximation results for a class of counting problems. By applying this framework, we are able to obtain efficient approximation algorithms and hardness of approximation results for several quantum problems under certain algorithmic conditions.
Our algorithmic framework is based on the cluster expansion of abstract polymer models formalism of Kotecký and Preiss <cit.>. We consider polymers that are connected subgraphs of bounded-degree bounded-rank multihypergraphs with compatibility relations defined by vertex disjointness. The key insight underlying our framework is that when the polymer weights decay sufficiently fast, computing the truncated cluster expansion to sufficiently high order allows us to obtain a multiplicative approximation to the abstract polymer model partition function. Our framework can be viewed as a straightforward generalisation of the framework of Helmuth, Perkins, and Regts <cit.>, and Borgs et al. <cit.> from the case of bounded-degree graphs to bounded-degree bounded-rank multihypergraphs. This approach is closely related to that of Patel and Regts <cit.> using Barvinok's method <cit.>; see Ref. <cit.> for a survey of this method.
Our results concerning the approximation of quantum problems may be summarised as follows. We obtain efficient algorithms for (1) approximating probability amplitudes of a class of quantum circuits close to the identity, (2) approximating expectation values of a class of quantum circuits with operators close to the identity, (3) approximating partition functions of a class of quantum spin systems at high temperature, and (4) approximating thermal expectation values of a class of quantum spin systems at high temperature with positive-semidefinite operators. Our approach offers a simpler and sharper analysis compared to existing algorithms. Our algorithmic results are summarised in Table <ref>.
Our hardness of approximation framework is based on reductions from the Ising model partition function. We apply this framework to obtain hardness of approximation results for approximating probability amplitudes of quantum circuits and partition functions of quantum spin systems. This establishes a computational complexity transition for these problems and shows that our algorithmic conditions are optimal under complexity-theoretic assumptions. Further, we show that our algorithmic condition is almost optimal for expectation values and optimal for thermal expectation values in the sense of zero freeness.
This paper is structured as follows. In Section <ref>, we introduce the necessary preliminaries. Then, in Section <ref>, we establish our algorithmic and hardness of approximation framework. In Section <ref>, we apply our framework to several quantum problems. Finally, we conclude in Section <ref> with some remarks and open problems.
§ PRELIMINARIES
§.§ Graph Theory
A multigraph is a graph in which multiple edges between vertices are permitted. A hypergraph is a graph in which edges between any number of vertices are permitted. A multihypergraph is a graph in which multiple edges between vertices and edges between any number of vertices are permitted. We shall assume that the edges in a multihypergraph are uniquely labelled, that is, all edges are considered distinct. Let G=(V, E) be a multihypergraph. We denote the order of G by |G|=|V(G)| and the size of G by ‖G‖=|E(G)|. The maximum degree Δ(G) of G is the maximum degree over all vertices of G and the rank r(G) of G is the maximum cardinality of an edge of G. The distance d(u, v) between two vertices u and v in G is defined as the size of the shortest path connecting them. A multihypergraph is called Δ-regular if all the vertices have degree Δ and called r-uniform if all the edges have cardinality r.
§.§ Abstract Polymer Models
An abstract polymer model is a triple (𝒞, w, ∼), where 𝒞 is a countable set of objects called polymers, is a function that assigns to each polymer γ∈𝒞 a complex number w_γ called the weight of the polymer, and ∼ is a symmetric compatibility relation such that each polymer is incompatible with itself. A set of polymers is called admissible if the polymers in the set are all pairwise compatible. Note that the empty set is admissible. Let 𝒢 denote the collection of all admissible sets of polymers from 𝒞. The abstract polymer partition function is defined by
Z(𝒞,w) ∑_Γ∈𝒢∏_γ∈Γw_γ.
The archetypal example of an abstract polymer model is the independence polynomial. Let G=(V, E) be a graph and let ℐ denote the collection of all independent sets of G. Recall that an independent set of G is a subset of vertices with no edges between them. The independence polynomial I(G;x) of G is a polynomial in x, defined by
I(G;x) ≔∑_I∈ℐx^|I|.
This corresponds to an abstract polymer model (𝒞, w, ∼) as follows. The polymers 𝒞 are the vertices V of G, the weight function w is given by w_γ=x for all γ∈𝒞, and two polymers are compatible if and only if there is no edge between them in G. An admissible set of polymers is then an independent set of G, and it follows that the partition function of this model Z(𝒞,w) is precisely the independence polynomial I(G;x) of G. The abstract polymer model can be viewed as a generalisation of the independence polynomial. In particular, it attempts to capture the independence properties of a problem.
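To make these definitions concrete, the brute-force sketch below evaluates an abstract polymer partition function by enumerating admissible sets directly, and instantiates it with the independence polynomial of a triangle. It runs in exponential time and is intended purely as an illustration of the correspondence.

```python
# Sketch: brute-force evaluation of an abstract polymer partition function Z(C, w).
# Polymers, weights, and the compatibility relation are supplied by the caller.
from itertools import combinations

def partition_function(polymers, weight, compatible):
    """Sum of prod(w_gamma) over all admissible (pairwise compatible) sets, including the empty set."""
    total = 1.0  # the empty set contributes 1
    for k in range(1, len(polymers) + 1):
        for subset in combinations(polymers, k):
            if all(compatible(a, b) for a, b in combinations(subset, 2)):
                prod = 1.0
                for gamma in subset:
                    prod *= weight(gamma)
                total += prod
    return total

# Independence polynomial of a 3-cycle at x = 0.5: polymers are vertices,
# compatible iff not adjacent.  I(C3; x) = 1 + 3x, so the result is 2.5.
edges = {(0, 1), (1, 2), (0, 2)}
adjacent = lambda u, v: (u, v) in edges or (v, u) in edges
print(partition_function([0, 1, 2], lambda v: 0.5, lambda u, v: not adjacent(u, v)))
```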
A useful tool for representing a problem as an abstract model is the principle of inclusion-exclusion. The principle is formalised by the following well-known lemma (see for example <cit.>); we provide a proof for completeness.
Let f be a function defined on the subsets of finite set E, then
f(E) = ∑_S ⊆ E(-1)^|S|∑_T ⊆ S(-1)^|T|f(T).
By interchanging the summations, we have
∑_S ⊆ E(-1)^|S|∑_T ⊆ S(-1)^|T|f(T) = ∑_T ⊆ E(-1)^|T|f(T)∑_T ⊆ S ⊆ E(-1)^|S|
= ∑_T ⊆ Ef(T)∑_S ⊆ E∖T(-1)^|S|
= ∑_T ⊆ Ef(T)∑_m=0^|E∖T|\binom{|E∖T|}{m}(-1)^m.
Now, by applying the binomial theorem, we obtain
∑_S ⊆ E(-1)^|S|∑_T ⊆ S(-1)^|T|f(T) = f(E),
completing the proof.
As we shall see in Section <ref>, several quantum problems admit an abstract polymer model representation.
§.§ Abstract Cluster Expansion
We now define the abstract cluster expansion <cit.>. Let Γ be a non-empty ordered tuple of polymers. The incompatibility graph H_Γ of Γ is the graph with vertex set Γ and edges between any two polymers if and only if they are incompatible. Γ is called a cluster if its incompatibility graph H_Γ is connected. A polymer and cluster are compatible if the polymer is compatible with every polymer in the cluster. Let 𝒢_C denote the set of all clusters of polymers from 𝒞. The abstract cluster expansion is a formal power series for logZ(𝒞,w) in the variables w_γ, defined by
log(Z(𝒞,w)) ∑_Γ∈𝒢_Cφ(H_Γ)∏_γ∈Γw_γ,
where φ(H) denotes the Ursell function of a graph H:
φ(H) ≔1/|H|!∑_S ⊆ E(H), spanning, connected(-1)^|S|.
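For small incompatibility graphs, the Ursell function can be evaluated directly from this definition by enumerating spanning connected edge subsets, as in the following exponential-time sketch (for illustration only).

```python
# Sketch: brute-force Ursell function of a connected graph H from its edge list.
from itertools import combinations
from math import factorial

def is_connected(num_vertices, edge_subset):
    """Check whether (V, edge_subset) is connected on all num_vertices vertices."""
    adj = {v: set() for v in range(num_vertices)}
    for u, v in edge_subset:
        adj[u].add(v)
        adj[v].add(u)
    seen, stack = {0}, [0]
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return len(seen) == num_vertices

def ursell(num_vertices, edges):
    """phi(H) = (1/|H|!) * sum over spanning connected S of (-1)^|S|."""
    total = 0
    for k in range(len(edges) + 1):
        for subset in combinations(edges, k):
            if is_connected(num_vertices, subset):
                total += (-1) ** k
    return total / factorial(num_vertices)

# Example: a single edge gives phi = -1/2; a triangle gives phi = 1/3.
print(ursell(2, [(0, 1)]), ursell(3, [(0, 1), (1, 2), (0, 2)]))
```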
An important theorem due to Kotecký and Preiss <cit.> provides a sufficient criterion for the absolute convergence of the cluster expansion. An improved convergence criterion is given in Ref. <cit.>.
Let (𝒞, w, ∼) be an abstract polymer model and let a:𝒞→ℝ^+ and d:𝒞→ℝ^+ be functions such that
∑_γ^*≁γ|w_γ^*|e^a(γ^*)+d(γ^*)≤ a(γ),
for all polymers γ∈𝒞. Then the cluster expansion for log(Z(𝒞,w)) converges absolutely, Z(𝒞,w)≠0, and
∑_Γ∈𝒢_C, Γ≁γ|φ(H_Γ)|∏_γ^*∈Γ|w_γ^*|e^∑_γ^*∈Γd(γ^*)≤ a(γ),
for all polymers γ∈𝒞.
In the case of the independence polynomial, the radius of convergence is given by Shearer's bound for the Lovász Local Lemma <cit.>; this was elucidated by Scott and Sokal <cit.>. For results on the hypergraph independence polynomial see Refs. <cit.>. Note that the Kotecký-Preiss convergence criterion can be viewed as a type of local lemma.
Let ‖·‖:𝒞→ℤ^+ be a function that assigns to each polymer γ∈𝒞 a positive integer ‖γ‖ called the size of the polymer. A useful quantity for algorithmic purposes is the truncated cluster expansion T_m(Z(𝒞,w)) for log(Z(𝒞,w)):
T_m(Z(𝒞,w)) ≔∑_Γ∈𝒢_C, ‖Γ‖ < mφ(H_Γ)∏_γ∈Γw_γ,
where ‖Γ‖≔∑_γ∈Γ‖γ‖.
It is often convenient to consider clusters as multisets of polymers. Define a cluster to be a multiset (Γ, m_Γ) of polymers Γ with multiplicity function m_Γ:Γ→ℤ^+ whose incompatibility graph is connected. Here the definition of the incompatibility graph is extended to multisets in the natural way. Let 𝒢̂_C denote the collection of all multiset clusters of polymers from 𝒞. Note that, for a given multiset (Γ, m_Γ), there are precisely (∑_γ∈Γm_Γ(γ))!/∏_γ∈Γm_Γ(γ)! tuples that correspond to it. The abstract cluster expansion may then be written as
log(Z(𝒞,w)) = ∑_(Γ, m_Γ)∈𝒢̂_Cφ̂(H_(Γ, m_Γ))∏_γ∈Γw_γ^m_Γ(γ)/m_Γ(γ)!,
where
φ̂(H) ≔∑_S ⊆ E(H), spanning, connected(-1)^|S|.
§.§ Approximation Schemes
Let ϵ>0 be a real number. An additive ϵ-approximation to z is a complex number ẑ such that |z-ẑ|≤ϵ. A multiplicative ϵ-approximation to z is a complex number ẑ such that |z-ẑ|≤ϵ|z|. Note that an additive-error approximation to the logarithm of a number is equivalent to a multiplicative approximation to that number. A fully polynomial-time approximation scheme for a sequence of complex numbers (z_n)_n∈ℕ is a deterministic algorithm that, for any n and ϵ>0, produces a multiplicative ϵ-approximation to z_n in time polynomial in n and 1/ϵ.
§.§ Computational Complexity
We shall refer to the following complexity classes: P (polynomial time), RP (randomised polynomial time), BQP (bounded-error quantum polynomial time), NP (non-deterministic polynomial time), and #P. For a formal definition of these complexity classes,
we refer the reader to Ref. <cit.>.
§ GENERAL FRAMEWORK
§.§ Approximation Algorithms
In this section we establish a general framework for developing approximation algorithms for abstract polymer model partition functions. We consider abstract polymer models in which the polymers are connected subgraphs of bounded-degree bounded-rank multihypergraphs and compatibility is defined by vertex disjointness. When the polymer weights of these models decay sufficiently fast, then the logarithm of the partition function can be controlled by a convergent cluster expansion. Our algorithm approximates the logarithm of the partition function by computing the truncated cluster expansion to sufficiently high order.
Our general framework is based on that of Helmuth, Perkins, and Regts <cit.> and Borgs et al. <cit.> where approximation algorithms were developed in the setting of bounded-degree graphs. Our algorithm can be viewed as a straightforward generalisation of theirs to the setting of bounded-degree bounded-rank multihypergraphs. Our main theorem is as follows.
Fix Δ,r∈ℤ_≥2. Let G=(V, E) be a multihypergraph of maximum degree at most Δ and rank at most r. Further let (𝒞, w, ∼) be an abstract polymer model such that the polymers are connected subgraphs of G and that two polymers γ and γ' are compatible if and only if V(γ) ∩ V(γ')=∅. Suppose that, for all polymers γ∈𝒞, the weight w_γ can be computed in time exp(O(γ)) and satisfies
|w_γ|≤(1/e^3Δ\binom{r}{2})^‖γ‖.
Then the cluster expansion for log(Z(𝒞,w)) converges absolutely, Z(𝒞,w)≠0, and there is a fully polynomial-time approximation scheme for Z(𝒞,w).
In Section <ref> we shall apply Theorem <ref> to establish efficient approximation algorithms for several quantum problems.
Our proof of Theorem <ref> requires several lemmas. We first prove the following lemma which bounds the number of polymers of a certain size containing a particular vertex.
Let G=(V, E) be a multihypergraph of maximum degree at most Δ and rank at most r, and let v ∈ V be a vertex. The number of connected subgraphs with m edges that contain vertex v is at most (eΔ(r-1))^m/2.
Let C_m,v(G) denote the set of connected subgraphs of G with m edges that contain the vertex v ∈ V. Further let T_Δ, r, v denote the infinite Δ-regular r-uniform linear hypertree with root v. Recall that a hypergraph is linear if the intersection of any pair of edges contains at most one vertex. Let T^⋆_Δ, r, v be the graph with vertex set {v}∪ E(T_Δ, r, v) and edges between vertices v and e ∈ E(T_Δ, r, v) if and only if v ∈ e and edges between vertices e, e' ∈ E(T_Δ, r, v) if and only if e ∩ e'≠∅ and d(v, e) ≠ d(v, e'). Note that T^⋆_Δ, r, v is a tree with maximum degree precisely (Δ-1)(r-1)+1 ≤Δ(r-1) and there is a natural bijection between C_m,v(T_Δ, r, v) and C_m,v(T^⋆_Δ, r, v). The cardinality of C_m,v(T^⋆_Δ, r, v) is at most 1/m+1(m+1)Δ(r-1)m <cit.>.
Hence, we have
|C_m,v(G)|≤|C_m,v(T_Δ, r, v)| = |C_m,v(T^⋆_Δ, r, v)|≤1/(m+1)\binom{(m+1)Δ(r-1)}{m}≤(eΔ(r-1))^m/2,
completing the proof.
The proof of Lemma <ref> gives a slightly sharper bound of (e((Δ-1)(r-1)+1))^m/2. Improved bounds may be obtained for certain classes of multihypergraphs.
We now show that provided the polymer weights decay sufficiently fast, then the cluster expansion converges absolutely and the truncated cluster expansion provides a good approximation to log(Z(𝒞,w)). This is formalised by the following lemma which utilises the Kotecký-Preiss convergence criterion.
Let G=(V, E) be a multihypergraph of maximum degree at most Δ and rank at most r. Further let (𝒞, w, ∼) be an abstract polymer model such that the polymers are connected subgraphs of G and that two polymers γ and γ' are compatible if and only if V(γ) ∩ V(γ')=∅. Suppose that, for all polymers γ∈𝒞, the weight w_γ satisfies
|w_γ| ≤ (1/(e^3 Δ\binom{r}{2}))^{|γ|}.
Then the cluster expansion for log(Z(𝒞,w)) converges absolutely, Z(𝒞,w)≠0, and for m∈ℤ^+,
|T_m(Z(𝒞,w)) - log(Z(𝒞,w))| ≤ |G| e^{-m/2}.
We introduce a polymer γ_v to every vertex v in G consisting of only that vertex. We define γ_v to be incompatible with every polymer that contains v. Then, we have
∑_{γ ≁ γ_v} |w_γ| e^{(1/2)(‖γ‖/(r-1) + |γ|)} ≤ e^{1/(2(r-1))} ∑_{γ ≁ γ_v} |w_γ| e^{|γ|} ≤ e^{1/(2(r-1))} ∑_{γ ≁ γ_v} (1/(e^2 Δ\binom{r}{2}))^{|γ|},
where we have used the fact that ‖γ‖ ≤ (r-1)|γ| + 1, with ‖γ‖ and |γ| denoting the number of vertices and edges of γ, respectively. By Lemma <ref>, the number of polymers γ with |γ| = m that are incompatible with γ_v is at most (1/2)(eΔ(r-1))^m. Thus, we may write
∑_{γ ≁ γ_v} |w_γ| e^{(1/2)(‖γ‖/(r-1) + |γ|)} ≤ (e^{1/(2(r-1))}/2) ∑_{m=1}^∞ (2/(er))^m ≤ 1/(2(r-1)).
Fix a polymer γ. By summing over all vertices in γ, we obtain
∑_{γ^* ≁ γ} |w_{γ^*}| e^{(1/2)(‖γ^*‖/(r-1) + |γ^*|)} ≤ ‖γ‖/(2(r-1)).
Now by applying Theorem <ref> with a(γ) = ‖γ‖/(2(r-1)) and d(γ) = |γ|/2, we have that the cluster expansion converges absolutely, Z(𝒞,w)≠0, and
∑_{Γ∈𝒢_C : Γ∋γ_v} |φ(H_Γ)| ∏_{γ∈Γ} |w_γ| e^{(1/2)∑_{γ∈Γ}|γ|} ≤ 1.
By summing over all vertices in G, we obtain
∑_{Γ∈𝒢_C : ∑_{γ∈Γ}|γ| ≥ m} |φ(H_Γ)| ∏_{γ∈Γ} |w_γ| ≤ |G| e^{-m/2},
completing the proof.
Lemma <ref> implies that to obtain a multiplicative ϵ-approximation Z(𝒞,w), it is sufficient to compute the truncated cluster expansion T_m(Z(𝒞,w)) to order m=O(log(G/ϵ)). We shall now establish an algorithm for computing T_m(Z(𝒞,w)) in time exp(O(m))·G^O(1). This requires the following two lemmas.
Fix Δ,r∈ℤ_≥2. Let G=(V, E) be a multihypergraph of maximum degree at most Δ and rank at most r. Further let (𝒞, w, ∼) be an abstract polymer model such that the polymers are connected subgraphs of G and that two polymers γ and γ' are compatible if and only if V(γ) ∩ V(γ')=∅. The clusters of size at most m can be listed in time exp(O(m))·G^O(1).
Our proof follows a similar approach to that of Ref. <cit.>. We list all connected subgraphs of G with at most m edges in time exp(O(m))·G^O(1) by depth-first search. For each of these subgraphs, we consider all ways to label the edges with positive integers such that their sum is at most m in time exp(O(m)). For each of these labelled subgraphs, we consider all clusters that correspond to it, i.e., clusters whose multiset sum over polymers induces the subgraph with multiplicities given by the edge labels.
We now prove by induction that the number of such clusters for a subgraph with label sum m is at most (eΔ(r-1))^2m. This is clearly true when m=0. Now suppose that the number of such clusters for a subgraph with label sum m is at most (eΔ(r-1))^2m. For a subgraph with label sum m+1, we choose an arbitrary vertex in the subgraph and consider all polymers that contain that vertex. By Lemma <ref>, there are at most (eΔ(r-1))^n such polymers of size n. By removing each polymer from the subgraph and applying the induction hypothesis, we have that the number of clusters in the subgraph is at most
∑_n=1^m+1(eΔ(r-1))^n(eΔ(r-1))^2(m+1-n)≤ (eΔ(r-1))^2(m+1)∑_n=1^m+1(eΔ(r-1))^-n≤ (eΔ(r-1))^2(m+1),
completing the induction. These clusters can be enumerated in time exp(O(m)) by depth-first search, completing the proof.
The Ursell function φ(H) can be computed in time exp(O(H)).
Our proof follows that of Ref. <cit.>. For a connected graph H, we have
φ(H) = (1/|V(H)|!) ∑_{S ⊆ E(H), spanning and connected} (-1)^{|S|} = -((-1)^{|V(H)|}/|V(H)|!) T_H(1,0),
where T_H(x,y) denotes the Tutte polynomial of H defined by
T_H(x,y) ≔ ∑_{S ⊆ E}(x-1)^{k(S)-k(E)}(y-1)^{k(S)+|S|-|V(H)|}.
Here k(S) denotes the number of connected components of the subgraph with edge set S. The Ursell function can then be computed in time exp(O(H)) by evaluating the Tutte polynomial in time exp(O(H)) via an algorithm of Björklund et al. <cit.>. This completes the proof.
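For small clusters, the Ursell function can also be evaluated directly from its defining sum rather than via the Tutte polynomial. The sketch below is a minimal brute-force illustration of that definition (exponential in the number of edges of H, so only suitable for tiny incompatibility graphs); it assumes the factorial in the formula above is over the number of vertices of H.

```python
# Direct evaluation of the Ursell function from its defining sum; intended only
# as a small illustration on tiny incompatibility graphs H.
from itertools import combinations
from math import factorial

def spanning_connected(vertices, S):
    """True if the subgraph (vertices, S) is connected on all of `vertices`."""
    if len(vertices) == 1:
        return True
    seen, stack = {vertices[0]}, [vertices[0]]
    while stack:
        u = stack.pop()
        for a, b in S:
            w = b if a == u else a if b == u else None
            if w is not None and w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == len(vertices)

def ursell(vertices, edges):
    """phi(H) = (1/|V(H)|!) * sum over spanning connected S of (-1)^|S|."""
    total = 0
    for k in range(len(edges) + 1):
        for S in combinations(edges, k):
            if spanning_connected(vertices, S):
                total += (-1) ** k
    return total / factorial(len(vertices))

# Sanity checks: a single vertex, a single edge, and a triangle.
print(ursell([0], []))                               # 1.0
print(ursell([0, 1], [(0, 1)]))                      # -0.5
print(ursell([0, 1, 2], [(0, 1), (1, 2), (0, 2)]))   # 0.333...
```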
Fix Δ,r∈ℤ_≥2. Let G=(V, E) be a multihypergraph of maximum degree at most Δ and rank at most r. Further let (𝒞, w, ∼) be an abstract polymer model such that the polymers are connected subgraphs of G and that two polymers γ and γ' are compatible if and only if V(γ) ∩ V(γ')=∅. Suppose that, for all polymers γ∈𝒞, the weight w_γ can be computed in time exp(O(γ)). Then the truncated cluster expansion T_m(Z(𝒞,w)) can be computed in time exp(O(m))·G^O(1).
We can list all clusters of size at most m in time exp(O(m))·G^O(1) by Lemma <ref>. For each of these clusters, we can compute the Ursell function in time exp(O(m)) by Lemma <ref>, and the polymer weights in time exp(O(m)) by assumption. Hence, the truncated cluster expansion T_m(Z(𝒞,w)) can be computed in time exp(O(m))·G^O(1).
Combining Lemma <ref> with Lemma <ref> proves Theorem <ref>.
§.§ Hardness of Approximation
In this section we establish the hardness of approximating abstract polymer model partition functions. In particular, we establish the hardness of approximating the Ising model partition function at imaginary temperature on bounded-degree graphs, which will be useful for our purposes via reductions. This
setting was studied in Ref. <cit.>, which established hardness of approximation results for this problem. We utilise the results of Ref. <cit.> to obtain significantly sharper bounds when the maximum degree is sufficiently large.
We model an Ising system by a multigraph G=(V, E). At each vertex v of G there is a 2-dimensional classical spin space {-1,+1}. The classical spin space on the multihypergraph is given by {-1,+1}^V. An interaction ϕ assigns a real number ϕ(e) to each edge e of G. We are interested in the partition function Z_Ising(G;β) at inverse temperature β, defined by
Z_Ising(G;β) ∑_σ∈{-1,+1}^V∏_{u,v}∈ Ee^-βϕ({u,v})σ_uσ_v.
We shall normalise the partition function by a multiplicative factor of 1/2^G. Further, we shall assume that ϕ(e)≤1 for all e ∈ E, which is always possible by a rescaling of β. We shall consider the case where the inverse temperature β is imaginary, i.e., β=iθ for θ∈ℝ. Our hardness result concerning the approximation of Z_Ising(G;iθ) is as follows.
Fix ϵ>0, Δ∈ℤ_≥3, and θ∈ℝ such that θ≥3π/5(Δ-2). It is #P-hard to approximate the Ising model partition function Z_Ising(G;iθ) up to a multiplicative ϵ-approximation on multigraphs of maximum degree at most Δ.
By Ref. <cit.>, it is #P-hard to approximate the Ising model partition function Z_Ising(G;iθ) up to a multiplicative ϵ-approximation on multigraphs of maximum degree 3 for θ≥π/5>arctan(1/√(2)). For a graph G of maximum degree 3 and a positive integer k∈ℤ^+, let G_k denote the k-thickening of G, that is, the multigraph formed by replacing each edge of G with k parallel edges. Note that the maximum degree of G_k is precisely 3k. Now observe that, for any k∈ℤ^+, we have Z_Ising(G;iθ)=Z_Ising(G_k;iθ/k). Hence, it is #P-hard to approximate Z_Ising(G;iθ) up to a multiplicative ϵ-approximation on multigraphs of maximum degree at most 3k for θ≥π/5k. It follows that it is #P-hard to approximate Z_Ising(G;iθ) up to a multiplicative ϵ-approximation on multigraphs of maximum degree at most Δ for θ≥3π/5(Δ-2), completing the proof.
The proof of Theorem <ref> gives a slightly sharper bound. Further, the proof technique may be applied to the case of complex β.
This offers a significant improvement over Ref. <cit.> when Δ≥7, which applies when θ>arctan(1/√(Δ-1)). In Section <ref> we shall apply Theorem <ref> to establish the hardness of approximation of several quantum problems. We shall now show that the Ising model partition function Z_Ising(G;β) admits an abstract polymer model representation. This is formalised by the following lemma.
The Ising model partition function Z_Ising(G;β) admits the following abstract polymer model representation.
Z_Ising(G;β) = ∑_Γ∈𝒢∏_γ∈Γw_γ,
where
w_γ1/2^γ∑_σ∈{-1,+1}^V(γ)∏_{u,v}∈ E(γ)(e^-βϕ({u,v})σ_uσ_v-1).
By applying Lemma <ref> with f(E)=1/2^G∑_σ∈{-1,+1}^V∏_{u,v}∈ Ee^-βϕ({u,v})σ_uσ_v, we have
Z_Ising(G;β) = 1/2^G∑_σ∈{-1,+1}^V∏_{u,v}∈ Ee^-βϕ({u,v})σ_uσ_v
= 1/2^G∑_S ⊆ E(-1)^S∑_T ⊆ S(-1)^T∑_σ∈{-1,+1}^V∏_{u,v}∈ Te^-βϕ({u,v})σ_uσ_v.
For a subset S ⊆ E, let Γ_S denote the maximally connected components of S. By factorising over these components, we have
Z_Ising(G;β) = ∑_S ⊆ E∏_γ∈Γ_S(-1)^γ∑_T ⊆ E(γ)(-1)^T1/2^γ∑_σ∈{-1,+1}^V(γ)∏_{u,v}∈ Te^-βϕ({u,v})σ_uσ_v
= ∑_S ⊆ E∏_γ∈Γ_S1/2^γ∑_σ∈{-1,+1}^V(γ)∏_{u,v}∈ E(γ)(e^-βϕ({u,v})σ_uσ_v-1)
= ∑_Γ∈𝒢∏_γ∈Γw_γ.
This completes the proof.
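The polymer representation of Lemma <ref> can be checked numerically on a very small instance. The sketch below compares the normalised Ising partition function computed by brute force over spin configurations with the sum over edge subsets factorised into connected components; the triangle graph, the couplings, and the imaginary inverse temperature are illustrative choices only.

```python
# Numerical check of the polymer representation of the normalised Ising
# partition function on a small graph; graph, couplings, and beta are
# illustrative choices.
import numpy as np
from itertools import combinations, product

V = [0, 1, 2]
E = [(0, 1), (1, 2), (0, 2)]
phi = {edge: 0.7 for edge in E}    # interaction strengths, |phi(e)| <= 1
beta = 0.3j                        # imaginary inverse temperature

def z_direct():
    total = 0.0
    for sigma in product([-1, 1], repeat=len(V)):
        total += np.prod([np.exp(-beta * phi[(u, v)] * sigma[u] * sigma[v])
                          for (u, v) in E])
    return total / 2 ** len(V)

def components(S):
    """Split an edge set S into its maximally connected components."""
    S, comps = list(S), []
    while S:
        comp = [S.pop()]
        verts = set(comp[0])
        changed = True
        while changed:
            changed = False
            for edge in S[:]:
                if verts & set(edge):
                    comp.append(edge); verts |= set(edge); S.remove(edge); changed = True
        comps.append(comp)
    return comps

def weight(gamma):
    """Polymer weight: restricted spin sum of prod_e (exp(-beta*phi*s*s) - 1) over 2^|V(gamma)|."""
    verts = sorted(set(u for edge in gamma for u in edge))
    idx = {u: i for i, u in enumerate(verts)}
    total = 0.0
    for sigma in product([-1, 1], repeat=len(verts)):
        total += np.prod([np.exp(-beta * phi[(u, v)] * sigma[idx[u]] * sigma[idx[v]]) - 1
                          for (u, v) in gamma])
    return total / 2 ** len(verts)

def z_polymer():
    total = 0.0
    for k in range(len(E) + 1):
        for S in combinations(E, k):
            total += np.prod([weight(g) for g in components(S)])
    return total

print(z_direct())    # the two values should agree up to floating-point error
print(z_polymer())
```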
We note that Lemma <ref> can be combined with Theorem <ref> to establish an efficient approximation algorithm for Z_Ising(G;β) on graphs of maximum degree at most Δ when β≤1/e^4Δ. Efficient approximation algorithms with significantly sharper bounds have previously been established <cit.>. In particular, Ref. <cit.> established an efficient approximation algorithm that applies when β<π/4(Δ-1). In the case when β is real, the exact point of a computational complexity transition is known under the complexity-theoretic assumption that RP is not equal to NP due to the approximation algorithm of Ref. <cit.> and the hardness of approximation results of Refs. <cit.>.
§ APPLICATIONS
In this section we apply our algorithmic framework to establish efficient approximation algorithms for classes of quantum problems. This includes probability amplitudes, expectation values, partition functions, and thermal expectation values. We apply our hardness of approximation framework to show the optimality of our algorithmic conditions for probability amplitudes and partition functions under complexity-theoretic assumptions. Further, we show that our algorithmic condition is almost optimal for expectation values and optimal for thermal expectation values in the sense of zero freeness.
§.§ Probability Amplitudes
In this section we study the problem of approximating probability amplitudes of quantum circuits. This problem is known to be #P-hard in general <cit.>; however, we show that, for a class of quantum circuits close to the identity, this problem admits an efficient approximation algorithm. Further, we show that this algorithmic condition is optimal under complexity-theoretic assumptions.
We model a quantum circuit by a multihypergraph G=(V, E). At each vertex v of G there is a d-dimensional Hilbert space ℋ_v with d<∞. The Hilbert space on the multihypergraph is given by ℋ_G⊗_v ∈ Vℋ_v. An interaction U assigns a unitary operator U_e on ℋ_e⊗_v ∈ eℋ_v to each edge e of G. We shall assume there is an implicit ordering of the unitary operators given by the edge labels which determines the order in which products of these operators are taken. The quantum circuit on G is defined by U_G∏_e ∈ EU_e. We are interested in the probability amplitude A_U_G, defined by A_U_G0^GU_G0^G. Note that any probability amplitude may be expressed in this form by a simple modification of the circuit. Our algorithmic result concerning the approximation of A_U_G is as follows.
Fix Δ,r∈ℤ_≥2. Let G=(V, E) be a multihypergraph of maximum degree at most Δ and rank at most r. Suppose that, for all e ∈ E,
‖U_e-𝕀‖ ≤ 1/(e^3 Δ\binom{r}{2}).
Then the cluster expansion for log(A_U_G) converges absolutely, A_U_G≠0, and there is a fully polynomial-time approximation scheme for A_U_G.
Theorem <ref> also applies to probability amplitudes of the form ⟨ψ|U_G|ψ⟩, where |ψ⟩ is a product state over qudits, i.e., |ψ⟩ ≔ ⊗_{v ∈ V}|ψ_v⟩. Further, Theorem <ref> applies to unitary operators of the form U_e=e^{-iθΦ(e)}, where θ is a real number such that |θ| ≤ 1/(e^4 Δ\binom{r}{2}) and Φ(e) is a self-adjoint operator on ℋ_e with ‖Φ(e)‖≤1.
We prove Theorem <ref> by showing that the conditions required to apply Theorem <ref> are satisfied. That is, we show that (1) the probability amplitude A_U_G admits a suitable abstract polymer model representation, (2) the polymer weights satisfy the desired bound, and (3) the polymer weights can be computed in the desired time. This is achieved in the following three lemmas.
The probability amplitude A_U_G admits the following abstract polymer model representation.
A_U_G = ∑_Γ∈𝒢∏_γ∈Γw_γ,
where
w_γ0^γ[∏_e ∈ E(γ)(U_e-𝕀)]0^γ.
By applying Lemma <ref> with f(E)=0^G(∏_e ∈ EU_e)0^G, we have
A_U_G = 0^GU_G0^G
= ∑_S ⊆ E(-1)^S∑_T ⊆ S(-1)^T0^G(∏_e ∈ TU_e)0^G.
For a subset S ⊆ E, let Γ_S denote the maximally connected components of S. By factorising over these components, we have
A_U_G = ∑_S ⊆ E∏_γ∈Γ_S(-1)^γ∑_T ⊆ E(γ)(-1)^T0^γ(∏_e ∈ TU_e)0^γ
= ∑_S ⊆ E∏_γ∈Γ_S0^γ[∏_e ∈ E(γ)(U_e-𝕀)]0^γ
= ∑_Γ∈𝒢∏_γ∈Γw_γ.
This completes the proof.
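A small numerical check of this polymer representation may be helpful. The sketch below compares the directly computed amplitude of a toy two-gate circuit with the sum over gate subsets factorised into connected components; the XX-rotation gates, the angle θ, and the assumption that gates act on adjacent qubits (which keeps the embedding trivial) are illustrative simplifications, not part of the general setting.

```python
# Toy check of the polymer representation of <0|U_G|0> for a two-gate circuit
# on three qubits; gates, angle, and geometry are illustrative choices.
import numpy as np
from itertools import combinations

def xx_rotation(theta):
    """exp(-i*theta*X(x)X): a two-qubit unitary close to the identity for small theta."""
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    H = np.kron(X, X)
    evals, evecs = np.linalg.eigh(H)
    return evecs @ np.diag(np.exp(-1j * theta * evals)) @ evecs.conj().T

def embed(U, first_qubit, n_qubits):
    """Embed a two-qubit gate on (first_qubit, first_qubit+1) into n_qubits."""
    return np.kron(np.kron(np.eye(2 ** first_qubit), U),
                   np.eye(2 ** (n_qubits - first_qubit - 2)))

theta, n = 0.05, 3
circuit = [(0, (0, 1), xx_rotation(theta)),    # (label, qubits, gate), applied in label order
           (1, (1, 2), xx_rotation(theta))]

# Direct amplitude <0^n| U_{e2} U_{e1} |0^n>.
U_G = np.eye(2 ** n, dtype=complex)
for _, (q0, _q1), U in circuit:
    U_G = embed(U, q0, n) @ U_G
zero = np.zeros(2 ** n); zero[0] = 1.0
amp_direct = zero @ U_G @ zero

def components(gates):
    """Group gates into maximally connected (qubit-sharing) components, circuit order kept."""
    gates, comps = list(gates), []
    while gates:
        group = [gates.pop()]
        verts = set(group[0][1])
        changed = True
        while changed:
            changed = False
            for g in gates[:]:
                if verts & set(g[1]):
                    group.append(g); verts |= set(g[1]); gates.remove(g); changed = True
        comps.append(sorted(group))            # sort by label to restore circuit order
    return comps

def weight(gamma):
    """w_gamma = <0|prod_{e in gamma}(U_e - I)|0>, restricted to the polymer's qubits."""
    verts = sorted(set(q for _, qubits, _ in gamma for q in qubits))
    m = len(verts)
    op = np.eye(2 ** m, dtype=complex)
    for _, (q0, _q1), U in gamma:              # in circuit order
        op = (embed(U, verts.index(q0), m) - np.eye(2 ** m)) @ op
    zero_m = np.zeros(2 ** m); zero_m[0] = 1.0
    return zero_m @ op @ zero_m

amp_polymer = sum(np.prod([weight(g) for g in components(S)])
                  for k in range(len(circuit) + 1)
                  for S in combinations(circuit, k))
print(amp_direct, amp_polymer)   # the two amplitudes should agree up to rounding
```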
Fix Δ,r∈ℤ_≥2. Let G=(V, E) be a multihypergraph of maximum degree at most Δ and rank at most r. Suppose that, for all e ∈ E,
‖U_e-𝕀‖ ≤ 1/(e^3 Δ\binom{r}{2}).
Then, for all polymers γ∈𝒞, the weight w_γ satisfies
|w_γ| ≤ (1/(e^3 Δ\binom{r}{2}))^{|γ|}.
Fix a polymer γ. We have
|w_γ| ≤ ∏_{e ∈ E(γ)} ‖U_e-𝕀‖ ≤ (1/(e^3 Δ\binom{r}{2}))^{|γ|},
completing the proof.
The weight w_γ of a polymer γ can be computed in time exp(O(γ)).
The result follows by sparse matrix-vector multiplication.
Combining Theorem <ref> with Lemma <ref>, Lemma <ref>, and Lemma <ref> proves Theorem <ref>. We now show that the algorithmic condition of Theorem <ref> is optimal in the case of multigraphs under complexity-theoretic assumptions. This is achieved by establishing a hardness of approximation result for the probability amplitude A_U_G. For convenience, we shall consider unitary operators of the form U_e=e^-iθΦ(e), where θ is a real number and Φ(e) is a self-adjoint operator on ℋ_e with Φ(e)≤1. Our hardness result concerning the approximation of A_U_G(θ) is as follows.
Fix ϵ>0, Δ∈ℤ_≥3, and θ∈ℝ such that θ≥3π/5(Δ-2). It is #P-hard to approximate the probability amplitude A_U_G(θ) up to a multiplicative ϵ-approximation on multigraphs of maximum degree at most Δ.
Our proof is based on a reduction from the Ising model partition function. We consider quantum circuits on multigraphs with a 2-dimensional Hilbert space at each vertex and unitary operators of the form U_e=e^-iθϕ(e)⊗_v ∈ eX_v, where ϕ(e) is a real number satisfying ϕ(e)≤1. We have
A_U_G(θ) = 0^G(∏_e ∈ Ee^-iθϕ(e)⊗_v ∈ eX_v)0^G
= 1/2^G∑_σ∈{-1,+1}^V∏_{u,v}∈ Ee^-iθϕ({u,v})σ_uσ_v
= 1/2^GZ_Ising(G;iθ).
The proof then follows from Theorem <ref>.
Our results establish a
computational complexity transition from P to #P-hard for the problem of approximating probability amplitudes. A similar transition may be established from P to BQP-hard for additive-error approximations.
§.§ Expectation Values
In this section we study the problem of approximating expectation values of quantum circuits. This problem is known to be #P-hard in general <cit.>; in particular, it is a special case of computing output probabilities of quantum circuits. We show that, for a class of quantum circuits with operators close to the identity, this problem admits an efficient approximation algorithm. This setting was studied in Ref. <cit.>, which established an efficient approximation algorithm for this problem. Our approach offers a simpler and sharper analysis in a slightly more general setting. Further, we show that this algorithmic condition is almost optimal in the sense of zero freeness.
We model a quantum circuit by a multihypergraph G=(V, E) as in Section <ref> and assume that the size of G is at most a polynomial in the order of G. An operator O assigns a self-adjoint operator O_v on ℋ_v to each vertex v of G. The operator O_G on G is defined by O_G∏_v ∈ VO_v. We are interested in the expectation value O_U_G, defined by O_U_G0^GU_G^† O_G U_G0^G.
We now introduce some further definitions that will be useful for our analysis. Let S_E(e)_e ∈ E denote the sequence of edges from G sorted in increasing order with respect to the edge labels. For a vertex v of G, let S_v denote the longest increasing subsequence of S_E such that every prefix induces a connected subgraph of G containing v. We define the causal subgraph C_v of v to be the subgraph of G induced by the sequence S_v. For a subset U of vertices of G, we define the causal subgraph C_U of U to be the subgraph of G induced by the set ⋃_v ∈ UE(C_v). We define the causal intersection hypergraph C(G) of G to be the hypergraph with vertex set V and edge set {V(C_v)}_v ∈ V. We identify the edges of C(G) with the vertices of G. Note that the connected components of a subgraph S of C(G) are in one-to-one correspondence with the connected components of C_E(S). We shall consider polymers that are connected subgraphs of C(G). Our algorithmic result concerning the approximation of O_U_G is as follows.
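A minimal sketch of the causal-subgraph construction may clarify the definitions above. It assumes the greedy reading of "longest increasing subsequence whose prefixes stay connected and contain v" (scan the edges in label order and keep every edge that touches the component grown so far); the small brickwork-style circuit is an arbitrary example.

```python
# Sketch of the causal-subgraph and causal-intersection-hypergraph construction,
# assuming the greedy reading described in the lead-in; the circuit is illustrative.
def causal_subgraph(edges, v):
    """Return (edge labels of C_v, vertex set V(C_v)) for vertex v."""
    support, chosen = {v}, []
    for idx, edge in enumerate(edges):
        if support & set(edge):          # edge touches the growing component
            support |= set(edge)
            chosen.append(idx)
    return chosen, support

def causal_intersection_hypergraph(vertices, edges):
    """Hyperedges of C(G): one hyperedge V(C_v) per vertex v of G."""
    return {v: causal_subgraph(edges, v)[1] for v in vertices}

# Example: a depth-2 brickwork-style circuit on 4 qubits, edges in label order.
vertices = [0, 1, 2, 3]
edges = [(0, 1), (2, 3), (1, 2)]
for v, hyperedge in causal_intersection_hypergraph(vertices, edges).items():
    print(f"V(C_{v}) = {sorted(hyperedge)}")
```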
Fix Δ,r∈ℤ_≥2. Let G=(V, E) be a multihypergraph such that the causal intersection hypergraph C(G) of G has maximum degree at most Δ and rank at most r. Suppose that, for all v ∈ V,
‖O_v-𝕀‖ ≤ 1/(e^3 Δ\binom{r}{2}).
Then the cluster expansion for log(O_U_G) converges absolutely, O_U_G≠0, and there is a fully polynomial-time approximation scheme for O_U_G.
Theorem <ref> may be extended to a slightly more general class of product operators.
In the case when G corresponds to a quantum circuit U_G of depth at most d with each gate acting on at most k qudits, the causal intersection hypergraph C(G) has maximum degree at most k^d and rank at most k^d. Further, when G is restricted to edges on the lattice graph ℤ^ν, the causal intersection hypergraph C(G) has maximum degree at most (2d)^ν and rank at most (2d)^ν. This implies that our algorithm may be applied to these classes of quantum circuits when ‖O_v-𝕀‖ ≤ 2/(e^3 k^{3d}) and ‖O_v-𝕀‖ ≤ 2/(e^3 (2d)^{3ν}) for all v ∈ V, respectively. A more refined analysis in the latter case shows that our algorithm may be applied when ‖O_v-𝕀‖ ≤ 2/(e^3 2^{3ν} d^{2ν}) for all v ∈ V. This offers a significant improvement over Ref. <cit.>, which applies to these classes when ‖O_v-𝕀‖ < 1/(60 k^{5d}) and ‖O_v-𝕀‖ < 1/(60 (16d)^{2ν}) for all v ∈ V, respectively.
We prove Theorem <ref> by showing that the conditions required to apply Theorem <ref> are satisfied. That is, we show that (1) the expectation value O_U_G admits a suitable abstract polymer model representation, (2) the polymer weights satisfy the desired bound, and (3) the polymer weights can be computed in the desired time. This is achieved in the following three lemmas.
The expectation value O_U_G admits the following abstract polymer model representation.
O_U_G = ∑_Γ∈𝒢∏_γ∈Γw_γ,
where
w_γ0^γU_C_E(γ)^†[∏_e ∈ E(γ)(O_e-𝕀)]U_C_E(γ)0^γ.
By applying Lemma <ref> with f(V)=0^GU_G^†(∏_v ∈ VO_v)U_G0^G, we have
O_U_G = 0^GU_G^† O_G U_G0^G
= ∑_S ⊆ V(-1)^S∑_T ⊆ S(-1)^T0^GU_G^†(∏_v ∈ TO_v)U_G0^G
= ∑_S ⊆ E(C(G))(-1)^S∑_T ⊆ S(-1)^T0^GU_G^†(∏_e ∈ TO_e)U_G0^G.
For a subset S ⊆ E(C(G)), let Γ_S denote the maximally connected components of S. By factorising over these components, we have
O_U_G = ∑_S ⊆ E∏_γ∈Γ_S(-1)^γ∑_T ⊆ E(γ)(-1)^T0^γU_G^†(∏_e ∈ TO_e)U_G0^γ
= ∑_S ⊆ E∏_γ∈Γ_S(-1)^γ∑_T ⊆ E(γ)(-1)^T0^γU_C_E(γ)^†(∏_e ∈ TO_e)U_C_E(γ)0^γ
= ∑_S ⊆ E∏_γ∈Γ_S0^γU_C_E(γ)^†[∏_e ∈ E(γ)(O_e-𝕀)]U_C_E(γ)0^γ
= ∑_Γ∈𝒢∏_γ∈Γw_γ.
This completes the proof.
Fix Δ,r∈ℤ_≥2. Let G=(V, E) be a multihypergraph such that the causal intersection hypergraph C(G) of G has maximum degree at most Δ and rank at most r. Suppose that, for all v ∈ V,
‖O_v-𝕀‖ ≤ 1/(e^3 Δ\binom{r}{2}).
Then, for all polymers γ∈𝒞, the weight w_γ satisfies
|w_γ| ≤ (1/(e^3 Δ\binom{r}{2}))^{|γ|}.
Fix a polymer γ. We have
|w_γ| ≤ ∏_{e ∈ E(γ)} ‖O_e-𝕀‖ ≤ (1/(e^3 Δ\binom{r}{2}))^{|γ|},
completing the proof.
The weight w_γ of a polymer γ can be computed in time exp(O(γ)).
The proof follows similarly to that of Lemma <ref>.
Combining Theorem <ref> with Lemma <ref>, Lemma <ref> and Lemma <ref> proves Theorem <ref>. We now show that the algorithmic condition of Theorem <ref> is almost optimal in the sense of the zero freeness of the expectation value. This is achieved by a constructive argument based on an observation of Ref. <cit.> and is formalised by the following theorem.
Fix d∈ℤ^+ and k∈ℤ_≥2. There exists a hypergraph G=(V, E), a quantum circuit U_G of depth d with each gate acting on at most k qubits, and an operator O satisfying O_v-𝕀≤2/k^d for all v ∈ V, such that O_U_G=0.
Let |ψ_n⟩ denote the state |ψ_n⟩1/√(2)(|0^n⟩+|1^n⟩). Note that there is a hypergraph G and a quantum circuit U_G of depth d with each gate acting on at most k qubits such that |ψ_k^d⟩=U_G|0^G⟩. We consider the operator O with O_v=𝕀+itan(π/2k^d)Z_v for all v ∈ V. Then, we have
O_U_G = 0^GU_G^† O_G U_G0^G
= ψ_k^d[∏_v ∈ V(𝕀+itan(π/2k^d)Z_v)]ψ_k^d
= ψ_k^d(∑_S ⊆ V∏_v ∈ Sitan(π/2k^d)Z_v)ψ_k^d
= 1/2∑_S ⊆ V[itan(π/2k^d)]^S[1+(-1)^S]
= 1/2[(1+itan(π/2k^d))^k^d+(1-itan(π/2k^d))^k^d]
= 0.
Further, for all v ∈ V, we have
O_v-𝕀 = tan(π/2k^d)≤2/k^d.
This completes the proof.
The operator in the proof of Theorem <ref> is not self-adjoint.
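The construction in the proof of Theorem <ref> is easy to verify numerically. The sketch below builds the GHZ state on N = k^d qubits for k = d = 2 and checks both that the expectation vanishes and that the single-site operators satisfy the stated norm bound; the choice N = 4 is purely illustrative.

```python
# Direct check of the zero-freeness construction: for the GHZ state on N = k^d
# qubits and O_v = I + i*tan(pi/(2N))*Z, the expectation vanishes. N = 4 here.
import numpy as np

N = 4
Z = np.array([[1, 0], [0, -1]], dtype=complex)
O_single = np.eye(2, dtype=complex) + 1j * np.tan(np.pi / (2 * N)) * Z

# Tensor product of the single-site operators over all N qubits.
O_total = np.array([[1.0 + 0j]])
for _ in range(N):
    O_total = np.kron(O_total, O_single)

# GHZ state (|0...0> + |1...1>)/sqrt(2).
ghz = np.zeros(2 ** N, dtype=complex)
ghz[0] = ghz[-1] = 1 / np.sqrt(2)

print(ghz.conj() @ O_total @ ghz)                          # ~0 up to rounding
print(np.linalg.norm(O_single - np.eye(2), 2), 2 / N)      # ||O_v - I|| <= 2/k^d
```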
§.§ Partition Functions
In this section we study the problem of approximating partition functions of quantum spin systems. This problem is known to be #P-hard in general <cit.>; however, we show that, for a class of quantum spin systems at high temperature, this problem admits an efficient approximation algorithm. Efficient approximation algorithms have previously been established for approximating partition functions of quantum spin systems at high temperature <cit.> and for restricted classes at low temperature <cit.>. Our analysis closely follows that of Ref. <cit.> and can be viewed as a straightforward generalisation from the setting of bounded-degree graphs to bounded-degree bounded-rank multihypergraphs. This offers a simpler and slightly sharper analysis than Refs. <cit.>. Further, we show that this algorithmic condition is optimal under complexity-theoretic assumptions.
We model a quantum spin system by a multihypergraph G=(V, E). At each vertex v of G there is a d-dimensional Hilbert space ℋ_v with d<∞. The Hilbert space on the multihypergraph is given by ℋ_G⊗_v ∈ Vℋ_v. An interaction Φ assigns a self-adjoint operator Φ(e) on ℋ_e⊗_v ∈ eℋ_v to each edge e of G. The Hamiltonian on G is defined by H_G∑_e ∈ EΦ(e). We are interested in the partition function Z_G(β) at inverse temperature β, defined by Z_G(β)[e^-β H_G]. We shall assume that the trace is normalised so that (𝕀)=1, which is equivalent to a rescaling the partition function by a multiplicative factor. Further, we shall assume that Φ(e)≤1 for all e ∈ E, which is always possible by a rescaling of β. Our algorithmic result concerning the approximation of Z_G(β) is as follows.
Fix Δ,r∈ℤ_≥2. Let G=(V, E) be a multihypergraph of maximum degree at most Δ and rank at most r, and let β be a complex number such that
|β| ≤ 1/(e^4 Δ\binom{r}{2}).
Then the cluster expansion for log(Z_G(β)) converges absolutely, Z_G(β)≠0, and there is a fully polynomial-time approximation scheme for Z_G(β).
Theorem <ref> applies when β is complex, which includes the case of time evolution.
This offers a modest improvement over Ref. <cit.>, which established a quasi-polynomial time algorithm when |β| ≤ 1/(10e^2 Δ\binom{r}{2}), and over Ref. <cit.>, which established a polynomial-time algorithm when |β| ≤ 1/(16e^4 Δ\binom{r}{2}). In the case when G is a bounded-degree graph, we recover the results of Ref. <cit.>.
We prove Theorem <ref> by showing that the conditions required to apply Theorem <ref> are satisfied. That is, we show that (1) the partition function Z_G(β) admits a suitable abstract polymer model representation, (2) the polymer weights satisfy the desired bound, and (3) the polymer weights can be computed in the desired time. This is achieved in the following three lemmas.
The partition function Z_G(β) admits the following abstract polymer model representation.
Z_G(β) = ∑_Γ∈𝒢∏_γ∈Γw_γ,
where
w_γ (-1)^γ∑_T ⊆ E(γ)(-1)^T[e^-β∑_e ∈ TΦ(e)].
By applying Lemma <ref> with f(E)=[e^-β∑_e ∈ EΦ(e)], we have
Z_G(β) = [e^-β H_G]
= ∑_S ⊆ E(-1)^S∑_T ⊆ S(-1)^T[e^-β∑_e ∈ TΦ(e)].
For a subset S ⊆ E, let Γ_S denote the maximally connected components of S. By factorising over these components, we have
Z_G(β) = ∑_S ⊆ E∏_γ∈Γ_S(-1)^γ∑_T ⊆ E(γ)(-1)^T[e^-β∑_e ∈ TΦ(e)]
= ∑_S ⊆ E∏_γ∈Γ_Sw_γ
= ∑_Γ∈𝒢∏_γ∈Γw_γ.
This completes the proof.
Fix Δ,r∈ℤ_≥2. Let G=(V, E) be a multihypergraph of maximum degree at most Δ and rank at most r, and let β be a complex number such that
|β| ≤ 1/(e^4 Δ\binom{r}{2}).
Then, for all polymers γ∈𝒞, the weight w_γ satisfies
|w_γ| ≤ (1/(e^3 Δ\binom{r}{2}))^{|γ|}.
Fix a polymer γ. Let P denote the set of all sequences of edges in γ. By the Taylor series,
|w_γ| ≤ |∑_{T ⊆ E(γ)}(-1)^{|T|} tr[e^{-β∑_{e ∈ T}Φ(e)}]| ≤ ∑_{ρ∈P : supp(ρ)=γ} (|β|^{|ρ|}/|ρ|!) ∏_{e ∈ρ}‖Φ(e)‖ ≤ ∑_{ρ∈P : supp(ρ)=γ} |β|^{|ρ|}/|ρ|!.
There are precisely S(n,|γ|)·|γ|! sequences ρ of length n whose support is γ, where S(n,k) denotes the Stirling number of the second kind. Hence, we may write
|w_γ| ≤ ∑_{n=|γ|}^∞ (S(n,|γ|)·|γ|!/n!) |β|^n = (e^{|β|}-1)^{|γ|},
where we have used the identity ∑_{n=k}^∞ S(n,k) x^n/n! = (e^x-1)^k/k!. By taking |β| ≤ (e^4 Δ\binom{r}{2})^{-1}, we have
|w_γ| ≤ (1/(e^3 Δ\binom{r}{2}))^{|γ|},
completing the proof.
The weight w_γ of a polymer γ can be computed in time exp(O(γ)).
The sum is over all subsets T of E(γ), of which there are 2^γ. For each of these subsets T, the trace may be evaluated in time exp(O(γ)) by diagonalising the sum of interactions.
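The procedure in the proof above translates directly into a short computation. The sketch below evaluates the weight of a small two-edge qubit polymer by enumerating the subsets T of E(γ) and diagonalising the corresponding Hamiltonians; the chain geometry, the XX and ZZ interactions, and the value of β are illustrative choices.

```python
# Sketch of the polymer-weight computation: enumerate the 2^{|gamma|} subsets T
# of E(gamma) and evaluate each normalised trace by exact diagonalisation.
import numpy as np
from itertools import combinations

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def embed(op, sites, n):
    """Embed a two-site operator acting on adjacent `sites` into an n-qubit chain."""
    return np.kron(np.kron(np.eye(2 ** sites[0]), op),
                   np.eye(2 ** (n - sites[-1] - 1)))

def normalised_trace_exp(H, beta):
    """tr[exp(-beta*H)] under the normalisation tr[I] = 1, via diagonalisation."""
    evals = np.linalg.eigvalsh(H)
    return np.sum(np.exp(-beta * evals)) / len(evals)

def polymer_weight(edges, interactions, n, beta):
    """w_gamma = (-1)^{|gamma|} sum_{T subseteq E(gamma)} (-1)^{|T|} tr[exp(-beta sum_{e in T} Phi(e))]."""
    total = 0.0
    for k in range(len(edges) + 1):
        for T in combinations(range(len(edges)), k):
            H = np.zeros((2 ** n, 2 ** n), dtype=complex)
            for i in T:
                H += embed(interactions[i], edges[i], n)
            total += (-1) ** k * normalised_trace_exp(H, beta)
    return (-1) ** len(edges) * total

edges = [(0, 1), (1, 2)]                         # a two-edge polymer on 3 qubits
interactions = [np.kron(X, X), np.kron(Z, Z)]    # ||Phi(e)|| <= 1
print(polymer_weight(edges, interactions, n=3, beta=0.05))   # small, as expected for small beta
```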
Combining Theorem <ref> with Lemma <ref>, Lemma <ref>, and Lemma <ref> proves Theorem <ref>. We now show that the algorithmic condition of Theorem <ref> is optimal in the case of multigraphs under complexity-theoretic assumptions. This is achieved by establishing a hardness of approximation result for the partition function Z_G(β) at imaginary temperature, i.e., β=iθ for θ∈ℝ. Our hardness result concerning the approximation of Z_G(iθ) is as follows.
Fix ϵ>0, Δ∈ℤ_≥3, and θ∈ℝ such that θ≥3π/5(Δ-2). It is #P-hard to approximate the partition function Z_G(iθ) up to a multiplicative ϵ-approximation on multigraphs of maximum degree at most Δ.
Our proof is based on a reduction from the Ising model partition function. We consider quantum spin systems on multigraphs with a 2-dimensional Hilbert space at each vertex and self-adjoint operators of the form Φ(e)=ϕ(e)⊗_v ∈ eZ_v, where ϕ(e) is a real number satisfying ϕ(e)≤1. We have
Z_G(iθ) = [∏_e ∈ Ee^-iθϕ(e)⊗_v ∈ eZ_v]
= 1/2^G∑_σ∈{-1,+1}^V∏_{u,v}∈ Ee^-iθϕ({u,v})σ_uσ_v
= 1/2^GZ_Ising(G;iθ).
The proof then follows from Theorem <ref>.
We note that a hardness of approximation result with similar bounds may be obtained for real temperature under the assumption that RP is not equal to NP via the results of Refs. <cit.>. Our results establish a computational complexity transition from P to #P-hard for the problem of approximating partition functions. A similar transition may be established from P to BQP-hard for additive-error approximations.
§.§ Thermal Expectation Values
In this section we study the problem of approximating thermal expectation values of quantum spin systems. This problem is known to be #P-hard in general <cit.>; however, we show that, for a class of quantum spin systems at high temperature with positive-semidefinite operators, this problem admits an efficient approximation algorithm. This setting was studied in Ref. <cit.>, which established an efficient approximation algorithm for this problem. Our approach offers a similar but slightly sharper analysis. Further, we show that this algorithmic condition is optimal in the sense of zero freeness.
We model a quantum spin system by a multihypergraph G=(V, E) as in Section <ref>. An operator Ψ assigns a positive-semidefinite operator Ψ(v) on ℋ_v to each vertex v of G. The operator Ψ_G on G is defined by Ψ_G∏_v ∈ VΨ(v). We are interested in the thermal expectation value Ψ_G(β) at inverse temperature β, defined by Ψ_G(β)Z_G^Ψ(β)/Z_G(β), where Z_G^Ψ(β)[Ψ_Ge^-β H_G]. We shall assume that the positive-semidefinite operators are normalised so that (Ψ_v)=1 for all v ∈ V, which is equivalent to a rescaling of the thermal expectation value by a multiplicative factor. Our algorithmic result concerning the approximation of Ψ_G(β) is as follows.
Fix Δ,r∈ℤ_≥2. Let G=(V, E) be a multihypergraph of maximum degree at most Δ and rank at most r, and let β be a complex number such that
|β| ≤ 1/(e^4 Δ\binom{r}{2}).
Then the cluster expansion for log(Ψ_G(β)) converges absolutely, Ψ_G(β)≠0, and there is a fully polynomial-time approximation scheme for Ψ_G(β).
Theorem <ref> applies when β is complex, which includes the case of time evolution.
This offers a modest improvement over Ref. <cit.> when Δ≥4, which established a polynomial-time algorithm when |β| ≤ 1/(2e^2(Δ-1)r(Δr-r+1)). By using the slightly sharper bound given in the remark subsequent to Lemma <ref>, we obtain an improvement when Δ≥3. We note that efficient approximation algorithms may be established when the observable appears in the Hamiltonian under different assumptions.
We prove Theorem <ref> by showing that the conditions required to apply Theorem <ref> are satisfied and then combining this with Theorem <ref>. That is, we show that (1) Z_G^Ψ(β) admits a suitable abstract polymer model representation, (2) the polymer weights satisfy the desired bound, and (3) the polymer weights can be computed in the desired time. This is achieved in the following three lemmas.
Z_G^Ψ(β) admits the following abstract polymer model representation.
Z_G^Ψ(β) = ∑_Γ∈𝒢∏_γ∈Γw_γ,
where
w_γ (-1)^γ∑_T ⊆ E(γ)(-1)^T[Ψ_γ e^-β∑_e ∈ TΦ(e)].
The proof follows similarly to that of Lemma <ref>.
Fix Δ,r∈ℤ_≥2. Let G=(V, E) be a multihypergraph of maximum degree at most Δ and rank at most r, and let β be a complex number such that
|β| ≤ 1/(e^4 Δ\binom{r}{2}).
Then, for all polymers γ∈𝒞, the weight w_γ satisfies
|w_γ| ≤ (1/(e^3 Δ\binom{r}{2}))^{|γ|}.
The proof follows similarly to that of Lemma <ref>.
The weight w_γ of a polymer γ can be computed in time exp(O(γ)).
The sum is over all subsets T of E(γ), of which there are 2^γ. For each of these subsets T, the trace may be evaluated in time exp(O(γ)) by diagonalising the sum of interactions and matrix multiplication.
Combining Theorem <ref> with Lemma <ref>, Lemma <ref>, Lemma <ref>, and Theorem <ref> proves Theorem <ref>. We now show that the algorithmic condition of Theorem <ref> is optimal in the case of multigraphs in the sense of the zero freeness of the thermal expectation value. This is achieved by a straightforward constructive argument and is formalised by the following theorem.
Fix Δ∈ℤ^+. There exists a multigraph G=(V, E) of maximum degree Δ, an operator Ψ, and a self-adjoint operator Φ, such that Ψ_G(β)=0 with β=iπ/Δ.
We consider a quantum spin system on a multigraph comprising a single multiedge with a 2-dimensional Hilbert space at each vertex. Further, we consider the operator Ψ with Ψ(v)=|0⟩⟨0|_v for all v ∈ V and the self-adjoint operator Φ with Φ(e)=1/4(⊗_v ∈ eX_v-⊗_v ∈ eY_v-⊗_v ∈ eZ_v) for all e ∈ E. Then, we have
Ψ_G(β) = [Ψ_Ge^-β H_G]/[e^-β H_G]
= 00e^-iπ/4(X⊗X-Y⊗Y-Z⊗Z)00/[e^-iπ/4(X⊗X-Y⊗Y-Z⊗Z)]
= 0.
This completes the proof.
§ CONCLUSION & OUTLOOK
We have established a general framework for developing approximation algorithms and hardness of approximation results for a class of counting problems. We applied this framework to obtain efficient approximation algorithms and hardness of approximation results for several quantum problems under certain algorithmic conditions.
In particular, we obtained efficient approximation algorithms for (1) approximating probability amplitudes of a class of quantum circuits close to the identity, (2) approximating expectation values of a class of quantum circuits with operators close to the identity, (3) approximating partition functions of a class of quantum spin systems at high temperature, and (4) approximating thermal expectation values of a class of quantum spin systems at high temperature with positive-semidefinite operators. Further, we obtained hardness of approximation results for approximating probability amplitudes of quantum circuits and partition functions of quantum spin systems.
Our results established a computational complexity transition for the problems of approximating probability amplitudes of quantum circuits and partition functions of quantum spin systems and showed that our algorithmic conditions for these problems are optimal under complexity-theoretic assumptions. Finally, we showed that our algorithmic condition is almost optimal for expectation values and optimal for thermal expectation values in the sense of zero freeness.
It would be interesting to identify other quantum problems to which our framework applies. Further, it is an intriguing open problem to identify the exact points of a computational complexity transition for these problems, as is known for the Ising model at real temperature <cit.>. Finally, it would be interesting to obtain algorithms with an improved runtime, for example, via the Markov chain polymer approach of Ref. <cit.>.
§ ACKNOWLEDGEMENTS
We thank Tyler Helmuth for helpful discussions. RLM was supported by the QuantERA ERA-NET Cofund in Quantum Technologies implemented within the European Union's Horizon 2020 Programme (QuantAlgo project), EPSRC grants EP/L021005/1, EP/R043957/1, and EP/T001062/1, and the ARC Centre of Excellence for Quantum Computation and Communication Technology (CQC2T), project number CE170100012. RMM was supported by the Additional Funding Programme for Mathematical Sciences, delivered by EPSRC (EP/V521917/1) and the Heilbronn Institute for Mathematical Research. No new data were created during this study.
entry_id: http://arxiv.org/abs/2306.02764v1
published: 20230605104053
title: Optimal Market Making in the Chinese Stock Market: A Stochastic Control and Scenario Analysis
authors: Shiqi Gong, Shuaiqiang Liu, Danny D. Sun
primary_category: q-fin.PM
categories: q-fin.PM
Shiqi Gong (Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Zhongguancun East Road, Beijing 100190, China; University of Chinese Academy of Sciences, No.19 Yuquan Road, Beijing 100049, China)
Shuaiqiang Liu (Delft Institute of Applied Mathematics, Delft University of Technology, Mekelweg 4, 2628 CD Delft, The Netherlands)
Danny D. Sun, corresponding author (Pengcheng Laboratory, No.2 Xingke 1st Street, Nanshan District, Shen Zhen 518055, Guangdong, China)
Market making plays a crucial role in providing liquidity and maintaining stability in financial markets, making it an essential component of well-functioning capital markets. Despite its importance, there is limited research on market making in the Chinese stock market, which is one of the largest and most rapidly growing markets globally. To address this gap, we employ an optimal market making framework with an exponential CARA-type (Constant Absolute Risk Aversion) utility function that accounts for various market conditions, such as price drift, volatility, and stamp duty, and is capable of describing 3 major risks (i.e., inventory, execution and adverse selection risks) in market making practice, and provide an in-depth quantitative and scenario analysis of market making in the Chinese stock market. Our numerical experiments explore the impact of volatility on the market maker's inventory. Furthermore, we find that the stamp duty rate is a critical factor in market making, with a negative impact on both the profit of the market maker and the liquidity of the market. Additionally, our analysis emphasizes the significance of accurately estimating stock drift for managing inventory and adverse selection risks effectively and enhancing profit for the market maker. These findings offer valuable insights for both market makers and policymakers in the Chinese stock market and provide directions for further research in designing effective market making strategies and policies.
Keywords: Market Making, Stochastic Control, Inventory Risk, Adverse Selection Risk, Execution Risk, Stamp Tax.
§ INTRODUCTION
In a well-developed stock trading market, the market participants are often categorized into three different types according to the information base and trading purpose <cit.>. The first type is the information traders, who make the trading decision based on the fundamental analysis or private information about the expected future price movement and company growth prospects or to optimally re-balance his or her portfolio to achieve desired risk-return. His or her target price may be different from the current market price and thus may often trade in a directional and aggressive manner. The second type is the noise traders, who lack the information and thus the knowledge of the informed value of the stocks, exhibit more speculative nature and use the noises as the basis for trade. Pure index followers are also noise traders who adjust their positions passively at the index rebalance time. The noise traders as a whole are liquidity traders in the sense that they trade frequently, causing price oscillations and allowing price to be observed. And the third type, the market makers, provides liquidity by submitting simultaneously to the stock exchanges the limit-price buy (bid) and sell (ask) orders, which are queued, according to certain price-time priority rules set by the exchanges, in the limit-order book (LOB) and wait passively for the active counter-party market orders or marketable limit orders to fulfill the trade.
The market makers make a profit from the bid-ask spread while managing the risks by adapting their quotes dynamically. The first common risk a market maker faces and manages is the inventory risk. The inventory risk is determined by the amount of stocks a trader holds or shorts, which are exposed to the price uncertainty due to the market volatility. Averse to the market price volatility, a market maker with a net long inventory will adjust his or her quoting spread by bidding conservatively and asking aggressively in order to reduce his or her probability to buy and increase his or her probability to sell, and vice versa. The second common risk is the execution risk. A market maker will adjust his or her quotes adeptly in order to gain the priority for his or her limit orders to be fulfilled with counter-parties. And the third common risk is the adverse selection risk. The counter-parties with the directional price drift knowledge from news or other information pick up the passive limit orders of the market maker in the direction against the maker. For example, in the case of the mid-quote falling, the limit bid order of the market maker are kept fulfilled by the counter-party, and the re-submissions of the limit bid and ask orders from the maker simply following the price trend passively will keep losing money. In other words, to avoid the adverse selection risk, a market maker may need to identify and forecast the directional drift of the price and set his or her bid and ask prices proactively.
According to WIND data, as of December 2022, with a total value of 73 trillion RMB (10.5 trillion US dollars), the Chinese stock market is currently one of the largest stock markets in the world, second only to the US market in size. However, the Chinese market bears distinctive market features and trading regulations compared to well-developed markets such as the US one. Currently there are three centralized stock exchanges in China, i.e., the Shanghai, Shenzhen and Beijing Exchanges. Each stock is exclusively listed and traded on one of the three exchanges; therefore, no rule such as the National Best Bid and Offer (NBBO) exists in China as seen in the US. There are no dark pools as trading venues, and short selling is difficult to carry out <cit.>.[Bessler and Vendrasco, by comparing stock markets with and without short-selling in the European market, found that a ban on short-selling could lower trading activity (including liquidity and volumes) and widen the bid-ask spread <cit.>.] Retail investors dominate the Chinese stock market in terms of trading volume, unlike developed markets where institutional investors dominate <cit.>. Automated trading, despite rapid growth, still plays a limited role <cit.>. For example, Liu <cit.> found that the stocks with higher participation of high-frequency trading (HFT) in China manifested higher profitability and that regulatory tightening negatively impacted HFT activities.[The profit advantage of HFT declined gradually after 2019, which may be due to higher competition among the growing number of high-frequency traders.]
Market regulation, advanced technology and distinct market features shape the trading activities in China, attracting strong research interest in academia and practice alike.
In recent years, market making practices have been introduced into the secondary capital market in China, and in 2022 the newly formed Beijing Exchange adopted stock market making with designated securities firms as the market makers. An effective market making operation usually requires a high-frequency algorithm to submit orders in a timely and optimal manner, so that the market maker can exploit frequent transactions to profit from the bid-ask spread and/or obtain commission rebates from the exchange, while managing the risks at the same time. In the Chinese stock markets, certain rules and regulations, such as the T+1 turnover rule and the stamp tax on stock selling, may hinder market makers from implementing their market making strategies. Thus, facing the growth of Chinese market making practices, in this paper we investigate theoretically how market conditions such as price drift and volatility in normal and stressed conditions, and market rules and regulations such as the commission rate and stamp tax, affect the market maker's strategy, e.g., the profit taking and the simultaneous management of the inventory, execution and adverse selection risks, so as to realize effective market making and liquidity provision.
Various market making models have been proposed and numerical techniques explored in the literature.
Avellaneda and Stoikov <cit.> set the optimal market making on the stochastic control base, and with the market price simplified as an exogenous stochastic process, a multi-dimensional Hamilton-Jacobi-Bellman (HJB) equation was established for solving the limit order quotes from the market maker as the controls of his or her cash amount and inventory.
Using a similar setup to <cit.> and a change-of-variable technique, Guéant et al. <cit.> simplified the high-dimensional HJB equation into a linear ordinary differential equation system and solved the market making strategies under inventory constraints.
Its closed-form approximation was applied in practice under certain conditions according to <cit.>.
Cartea et al. <cit.> proposed a model accounting for the arrival of market orders
and its effects on the subsequent arrivals of the external market order flows, the limit order fill rate (thus the LOB shape) and the short-term drift of the mid-quote. The authors concluded that to avoid being adversely selected by informed traders a maker should develop short-term-alpha predictors in his or her market making strategies.
More advanced models, such as stochastic volatility <cit.>, interaction between the limit and market orders <cit.>, were also developed.
Since closed-form solutions to the HJB equation are hardly available in most cases, discretization methods are required to approximate numerically the viscosity solutions (classical solution may not exist). Labahn and Forsyth <cit.> developed a finite difference scheme to numerically solve HJB equations. As a follow-up, Forsyth <cit.> employed the above scheme to obtain a numerical solution to an HJB equation for the optimal trade execution. Other numerical techniques to solve HJB, especially for high dimensional problems, including Monte Carlo simulations <cit.> and deep neural networks <cit.>, were also explored.
The contributions to the literature in our current investigation are in the following four aspects. First, we adapt the theoretical framework for optimal market making based on the stochastic optimal control proposed by <cit.>. [The framework in its general form assumes an exogenous Levy process for mid-quote and an exogenous Markov chain for bid-ask spread, and the bid-ask spread takes multiples of the ticks, capable of describing the discrete-valued nature in real market practice. This framework allows one to seek the optimal market making strategies using the limit and market orders to maximize the terminal cash amount profiting from the bid-ask spread while penalizing the inventory during the course. Numerical experiments with price drift were not carried out in <cit.>.] In our current work, we explicitly study the scenario of the non-zero mid-quote price drift, which is the short-term price movement predicted by the maker, to accommodate the adverse selection risk, in addition to the inventory risk and execution risk. Being aware of the limitation of the linear utility function in terms of price drift accommodation[Two types of utility functions were proposed in <cit.>. The first one was the linear utility function, which only accepted the martingale mid-quote price process, and the numerical solutions were presented. The second one was the CARA utility function, which allowed for a more general drifted diffusion process for the mid-quote.], we take the CARA form of the utility function accordingly, and use a numerical technique to seek the optimal market making strategy solutions in a variety of market and regulation conditions. Second, the market maker is allowed to bid and ask inside and outside the best bid and ask levels by one tick to accommodate the price drift in his or her optimal strategies. Third, based on the parameters calibrated from the Chinese stock market, we solve the optimal quotes a market maker should submit. Fourth, we investigate under the regulation in the Chinese market the effect of the market price parameters such as price drift and volatility, and the effect of regulation rule such as the stamp tax rate, on the market makers profits and the market liquidity provision. This helps simulate the behaviors of the market maker, who would act or not act as an effective market liquidity provider based on his or her risk-return balance.
Our numerical experiments cover three key aspects of market making: the impact of volatility, the role of the stamp duty rate, and the effect of stock drift. The results reveal that volatility exerts an adverse effect on the absolute inventory of the optimal market making strategy, which aims to mitigate inventory risk. Moreover, the stamp duty rate negatively impacts both the profit of the market maker and the liquidity of the market. Also, the total tax paid by the market maker tends to be a concave function of the stamp duty rate: a high stamp duty may decrease the collected stamp tax due to its hindrance of the maker's trading volume. Additionally, an accurate estimation of stock drift is crucial for the market maker to manage inventory and adverse selection risks. These findings can facilitate market makers in devising their tactics and provide policymakers with insights into modifying market regulations, such as the stamp duty rate.
The remainder of this paper is organized as follows. In Section <ref>, the mathematical model of a market making problem considering the limit and market orders is briefly introduced. More details are listed in Appendix. In Section <ref>, the numerical method to solve the stochastic optimal control problem is described. In Section <ref>, numerical results are presented. In Section <ref> we discuss market makers in China and conclude.
§ MARKET DATA AND MODEL FRAMEWORK
§.§ Market Rules and Stock Data
Currently, there exist three centralized stock exchanges in China, namely, Shanghai, Shenzhen, and Beijing Exchanges. In this paper, we mainly focus on the Shenzhen Stock Exchange due to its high liquidity.
The Shenzhen Stock Exchange (SZSE) operates an electronic order book system under a set of key trading rules that foster a transparent and well-regulated market environment. Trading hours are divided into two sessions: the morning session from 9:30 AM to 11:30 AM and the afternoon session from 1:00 PM to 3:00 PM (China Standard Time). To ensure smooth trading, the minimum tick size δ is fixed at 0.01 Yuan, representing the smallest price increment for securities. Investors have the flexibility to place various order types, such as limit and market orders, while short-selling is allowed but subject to specific regulations, including the obligation to borrow shares prior to executing a short sale. To maintain market stability, daily price change limits are enforced, typically ± 10% for most stocks and ± 20% for ChiNext Board stocks. Lastly, the SZSE follows a T+1 settlement cycle, in which securities can be sold one business day after the trade date, while funds from selling are instantly available for purchasing other stocks.
Within the SZSE, the transaction fee comprises two components: stamp duty and commission fee. The stamp duty, represented by a percentage ρ, is collected by the government and solely applied when selling stocks. The commission fee, denoted by a percentage ε, is levied by the brokerage and stock exchange during both buying and selling of stocks. In the current SZSE, the stamp duty rate ρ is fixed at 1‰, while the commission fee ε varies by brokerage, with a maximum limit not exceeding 1‰. Typically, for a market maker, the commission fee ranges between 0.1‰ and 0.3‰.
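A back-of-the-envelope calculation may help fix orders of magnitude. The sketch below computes the cost of a round trip (buy at the bid, sell one tick higher) under the fee structure just described; the per-share prices, the order size, and the 0.2‰ commission are illustrative assumptions, and this simple break-even reading ignores queueing, inventory, and price-impact effects.

```python
# Back-of-the-envelope round-trip cost under the SZSE fee structure described
# above; prices, volume, and the commission rate are illustrative choices.
rho = 1e-3        # stamp duty, charged on the sell side only
eps = 2e-4        # commission, charged on both sides (assumed 0.2 permille)
tick = 0.01       # minimum tick size (Yuan)

bid, ask, volume = 10.00, 10.01, 100          # buy at the bid, sell at the ask
cash_out = bid * volume * (1 + eps)           # cash paid when the buy limit order fills
cash_in = ask * volume * (1 - eps - rho)      # cash received when the sell order fills

print(f"gross spread P&L  : {(ask - bid) * volume:.2f} Yuan")
print(f"fees and stamp tax: {bid * volume * eps + ask * volume * (eps + rho):.2f} Yuan")
print(f"net round-trip P&L: {cash_in - cash_out:.2f} Yuan")
# With rho = 1 permille, a one-tick spread on a 10-Yuan stock does not even cover
# the stamp duty alone, so the maker needs a wider spread or a favourable drift.
```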
Regarding the data provided by the SZSE, three types of data exist for each stock: snapshot data, order data, and tick data. Snapshot data encompasses the top ten bid/ask price levels along with corresponding volumes at a specified snapshot moment, occurring at a frequency of three seconds. Order data comprises the submission time, type, direction (buy/sell), target volume, and price for each submitted order. Tick data incorporates the trade price, volume, direction (buyer/seller-initiated), type (trade/cancel) of every trade, as well as the two orders culminating in the trade. As high-frequency stock data is required for market making, we reconstruct the order book using order data and tick data and mainly use snapshot data generated at a frequency of 10ms for our study.
§.§ Optimal Market Making Strategy
A limit order is an order to buy (respectively sell) at a specific price or below (respectively above). When a counter-party market order arrives, the exchange will match the market order with the best available price of the limit orders in the LOB and, if the market order is not fully fulfilled, with the sub-optimal prices in turn. In our study, a market maker is assumed to submit limit orders to provide liquidity, or to post market orders that consume market liquidity for immediate execution. We investigate the effect of the trade-off between execution priority and quote price under different market conditions for the market maker. Therefore, in our current setup, a market maker has three price options to select when placing a limit order: posting at the current best bid (ask) price, posting a one-tick higher bid price (one-tick lower ask price), or posting a one-tick lower bid price (one-tick higher ask price), see Figure <ref>. Limit orders with higher bid prices (lower ask prices) have higher priority in order execution for the maker, but come with a worse quote compared to the current best bid (ask) price. On the other hand, limit orders with lower bid (higher ask) prices have less risk of adverse selection for the maker, but have lower priority in order execution. In our current setup, the fulfillment of the maker's limit orders is described by the filling intensities of those orders; the queueing time of the limit orders in the LOB is not explicitly modeled. In parallel, the market orders from the market maker are assumed to be sufficiently small in volume and immediately fulfilled at the best counter-party price in the LOB. No price impact of the market order execution on the market movement is explicitly considered herein.
The optimal market making is formulated as a stochastic optimal control problem in our study. We define a filtered probability space (Ω, ℱ,𝔽, ℙ) for stochastic processes, where the filtration 𝔽 = (ℱ_t), t ≥ 0 satisfies the usual conditions. Here trading occurs within a finite time horizon 0<T<∞ (e.g., a trading day).
In this section, we briefly explain our mathematical model setup adapted from <cit.>. More details of the model setup can be found in Appendix <ref>.
§.§.§ State Variables
Four state variables are involved in this approach: the LOB mid-quote, the LOB bid-ask spread, the amount of cash, and the stock inventory of the maker. Table <ref> lists related variables, and we shall provide a concise overview of each, more detailed exhibitions are referred to the Appendix.
* The Exogenous LOB Mid-Quote P_t
The mid-quote P_t of the stock price is assumed to follow a stochastic diffusion process,
P_t = μt + σW_t, 0 ≤ t ≤ T,
where W_t is a standard Brownian motion, μ and σ are constants representing the drift and volatility, respectively.
* The Exogenous LOB Bid-Ask Spread S_t
A continuous-time finite state process S_t ∈𝕊 is proposed to describe the dynamics of the bid-ask spread, where 𝕊=δ𝕀_m, 𝕀_m = {1,⋯,m}, and m∈ℕ^+ is a constant. Two independent processes N_t and Ŝ_n are introduced to model the jump transition of S_t, where N_t is a Poisson process for the cumulative count of random bid-ask spread jumps by time t, and Ŝ_n is a discrete-time Markov chain for the description of spread value after nth jumps. The spread process S_t is characterized by S_t = Ŝ_N_t, t ≥ 0.
* Cash X_t and Inventory Y_t of the Maker
The cash X_t represents the net monetary value of the market maker’s transactions, while the inventory Y_t denotes the net amount of the traded asset that the market maker holds or owes. The dynamics of these state variables are influenced by the execution of limit and market orders fulfilled.
For a limit order as described in Equation (<ref>), upon being filled, the cash X_t and inventory Y_t will be changed correspondingly as
dX_t = -π^b(Q^b_t, P_t^-, S_t^-) L^b_t dN^b_t + π^a(Q^a_t, P_t^-, S_t^-) L^a_t dN^a_t
dY_t = L^b_t dN^b_t - L^a_t dN^a_t ,
where π^b(Q^b_t, P_t^-, S_t^-) and π^a(Q^a_t, P_t^-, S_t^-), defined in Equations (<ref>) and (<ref>), represent the bid and ask prices for the limit orders, taking the commission and stamp tax into account, respectively, and L^b_t and L^a_t denote the sizes of the limit buy and sell orders, respectively. The cash X_t and inventory Y_t are updated as the limit orders are filled, as indicated by the independent Cox processes N^b_t and N^a_t, representing the cumulative execution count of limit buy and sell orders from the market maker by time t. The intensities of these processes depend on the bid/ask quotes Q^b_t and Q^a_t, as well as the spread S_t, and are expressed as λ^b(Q^b_t,S_t) and λ^a(Q^a_t,S_t), respectively (a minimal simulation sketch of these state dynamics is given at the end of this subsection).
For market orders as described in Equation (<ref>), we assume that they are filled immediately upon submission at the best available quote in the LOB. Consequently, the cash X_t and inventory Y_t will be changed according to the market order executions as
Y_τ_n = Y_τ^-_n + ζ_n,
X_τ_n = X_τ^-_n - c(ζ_n, P_τ_n, S_τ_n),
where c(ζ_n, P_τ_n, S_τ_n), as defined in Equation (<ref>), represents the amount of cash corresponding to this market order.
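The promised simulation sketch of these state dynamics follows. It makes several simplifying assumptions that are not part of the model above: constant fill intensities in place of λ^b(Q^b_t,S_t) and λ^a(Q^a_t,S_t), a two-state spread chain, unit limit-order sizes, quotes pegged at the best bid and ask, and no fees; all parameter values are illustrative.

```python
# Minimal simulation of the state variables (P, S, X, Y) under placeholder
# dynamics; intensities, spread chain, and parameters are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
T, dt = 1.0, 1e-3                  # horizon (one trading day) and simulation step
mu, sigma, delta = 0.0, 0.02, 0.01 # mid-quote drift, volatility, tick size
lam_fill = 50.0                    # fill intensity of each quote (placeholder)
lam_jump = 200.0                   # spread jump intensity (placeholder)

p, s, x, y = 10.0, delta, 0.0, 0   # mid-quote, spread, cash, inventory
for _ in range(int(T / dt)):
    p += mu * dt + sigma * np.sqrt(dt) * rng.standard_normal()   # mid-quote diffusion
    if rng.random() < lam_jump * dt:                             # spread Markov-chain jump
        s = delta if s == 2 * delta else 2 * delta
    if rng.random() < lam_fill * dt:                             # buy limit order filled
        x -= (p - s / 2); y += 1
    if rng.random() < lam_fill * dt:                             # sell limit order filled
        x += (p + s / 2); y -= 1

print(f"terminal cash {x:.2f}, inventory {y}, mark-to-market {x + y * p:.2f}")
```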
§.§.§ Control Variables
The market maker controls the placement of limit and market orders. We now briefly introduce the control variables listed in Table <ref>.
* The Limit Order Strategy α^make
The limit order strategy is modeled as a continuous control process,
α_t^make=(Q^b_t,Q^a_t,L^b_t,L^a_t), 0 ≤ t ≤ T,
where L^b_t ∈ [0, l̅] and L_t^a ∈ [0, l̅] represent the size of the buy and sell limit orders with maximum limit order size l̅, and Q^b_t ∈ℚ^b and Q^a_t ∈ℚ^a represent the corresponding bid quote and ask quote, respectively. For example, Figure <ref> illustrates three-level bid quotes and ask quotes, in which ℚ^b = {Bb-, Bb,Bb+} and ℚ^a = {Ba+, Ba,Ba-}. Here, Ba and Bb signify the best ask quote and best bid quote, respectively, while δ denotes one tick size.
* The Market Order Strategy α^take
The market order strategy is modeled as an impulse control,
α^take=(τ_n,ζ_n)_n≥ 0,
where τ_n ∈ [0,T] represents the time at which the market maker places the n^th market order, and ζ_n ∈ [-e̅, e̅] is a random variable representing the size of the n^th market order. Here, e̅ represents the maximum market order size.
§.§.§ Optimal Control Problem
Upon identifying the state and control variables, the optimal trading strategy can be formulated by solving a stochastic optimal control problem. In this section, we present the formulation of this problem and its corresponding solution.
* Problem Setup for the Trading Strategy
The optimal trading strategy aims to maximize the expectation of the terminal wealth while considering risk aversion towards inventory over the trading horizon. The optimal trading strategy, denoted by α = (α^make,α^take), is then determined by solving an optimization problem, given by
max_α𝔼[U(L(X_T, Y_T, P_T, S_T))-γ∫_0^T g(Y_t) d t],
where U is a monotonically increasing reward function, g is a non-negative convex function serving as the inventory penalty, γ is a non-negative penalty constant for the inventory risk aversion, and L(x, y, p, s) represents a liquidation function (i.e., the total amount of cash the trader would obtain by immediately liquidating the entire position at the current market price).
* Value Function
According to Equation (<ref>), the value function for this optimal control problem is given by
v(t,x,y,p,s)=max_α𝔼_t, x,y,p,s[U(L(X_T,Y_T,P_T, S_T))-γ∫_t^T g(Y_u) d u]
where 𝔼_t, x,y,p, s denotes the expected value based on the underlying processes (X,Y,P,S) with initial values (X_t^-,Y_t^-,P_t^-,S_t^-)=(x,y,p,s).
Since the spread s is discrete and takes values in 𝕊 = δ𝕀_m, the value function for s=iδ can be expressed in a more convenient form as v_i(t,x,y,p)=v(t,x,y,p,iδ) for i ∈𝕀_m.
* Hamilton-Jacobi-Bellman (HJB) Equation
By incorporating limit and market order strategies as controls and employing Itô's lemma, the dynamic programming equation of Equation (<ref>) can be expressed as
min[-∂ v_i/∂ t-max_(q, ℓ) ∈ℚ^b×ℚ^a ×[0, ℓ̅]^2ℒ^q, ℓ v+γ g, v-ℳ v]_i ∈𝕀_m=0,
with the terminal condition v(T, x, y, p, s)=U(L(x, y, p, s)). Here, ℒ^q,l and ℳ denote the infinitesimal generators for limit and market order controls, respectively.
For limit order control, given α_t^make=(q,l) = (q^b,q^a, l^b,l^a),
ℒ^q, ℓ v(t, x, y, p, s) =ℒ_P v(t, x, y, p, s) +R_S(t) v(t, x, y, p, s)
+A^b v(t,x,y,p,s) +A^a v(t,x,y,p,s),
where ℒ_P represents the infinitesimal generator of the mid-quote process P, R_S(t) represents the generator of the continuous-time Markov chain price process S, and A^b, A^a represent the infinitesimal generators of the jump processes caused by the changes in cash and inventory when this limit order occurs. For the explicit expressions of these infinitesimal generators, please refer to Appendix <ref>.
For market order control, the following impulse operator is considered
ℳ v(t, x, y, p, s)=max_e ∈[-e̅, e̅] v(t, x-c(e, p, s), y+e, p, s) .
§ NUMERICAL SCHEME
The optimal limit/market order controls of the market maker are obtained by solving the HJB equation numerically. Unlike the classical finite difference method <cit.>, which directly discretizes the PDE, our numerical scheme is based on the definition of the infinitesimal generators of the underlying stochastic processes and their expectation forms. In this section we sketch the numerical scheme. Although it leads to the same numerical scheme as <cit.>, the explicit derivation given here is generic and can be applied to other, similar problems.
§.§ Derivation of the Numerical Scheme
With an equally spaced partition of the time interval [0, T], the time grid points are 𝕋_n={t_k=k h, k=0, …, n}, where the time step size is h=T/n. Consider the real-valued function v_i(t,x,y,p) = v(t,x,y,p,s) for t∈[0,T], x ∈ℝ, y ∈ℝ, p ∈ℝ^+, s = iδ∈𝕊.
The derivative of the value function v_i with respect to time is approximated by the Euler scheme with the equally spaced time step h=T/n,
∂ v_i/∂ t = [v_i(t+h, x, y, p)- v_i(t, x, y, p)]/h + o(h)
≈ [v_i(t+h, x, y, p)-v_i(t, x, y, p)]/h
By the definition of the infinitesimal generator, ℒ_p in Equation (<ref>) can be approximated by
ℒ_p v_i(t+h, x, y, p) = lim_ĥ→ 0^+{𝔼[v_i(t+h, x, y, P_t+ĥ^t, p)]-v_i(t+h, x, y, p)}/ĥ
= {𝔼[v_i(t+h, x, y, P_t+ĥ^t, p)]-v_i(t+h, x, y, p)}/ĥ + o(ĥ)
where ĥ stands for a small time interval approaching zero. Note that ĥ may differ from the time step size h used for the time grid. Choosing ĥ = 4h, Equation (<ref>) becomes
ℒ_p v_i(t+h, x, y, p)
≈{𝔼[v_i(t+h, x, y, P_t+4h^t, p)]-v_i(t+h, x, y, p)}/(4h),
where the expectation is taken on random variable P_t+4h^t, p.
Similarly, we can obtain the approximation of the other three generators as follows
R_S(t) v_i(t+h, x, y, p)
≈{𝔼[v(t+h, x, y, p, S_t+4 h^t, i δ)]-v_i(t+h, x, y, p)}/(4h),
A^b v_i(t+h, x, y, p)
≈ {𝔼[v _ i (t+h, x-π^b(q^b, p, iδ) ℓ^bΔ N_4 h^i, q^b, y+ℓ^bΔ N_4 h^i, q^b, p)] .
-v_i(t+h, x, y, p) }/(4h),
A^a v_i(t+h, x, y, p)
≈ {𝔼[v _ i (t+h, x+π^a(q^a, p,iδ) ℓ^aΔ N_4 h^i, q^a, y-ℓ^aΔ N_4 h^i, q^a, p)]
-v_i(t+h, x, y, p) } / (4h),
where the functions π^b, π^a and c are defined in Equations (<ref>), (<ref>) and (<ref>), respectively.
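To make the expectation form of these approximations concrete, the following Python sketch (our own illustration, not the implementation used for the results below) checks the Monte Carlo approximation of ℒ_P for a test function v(p)=p^2, whose exact generator is 2μp + σ^2; a small positive drift is assumed so that the estimate is not dominated by Monte Carlo noise.

```python
import numpy as np

# Minimal sketch (our illustration, not the implementation used for the
# results below): check the expectation-form approximation of the mid-quote
# generator L_P for a test function v(p) = p^2, whose exact generator is
# 2*mu*p + sigma^2. A small positive drift is assumed so that the estimate
# is not dominated by Monte Carlo noise.
rng = np.random.default_rng(0)

mu, sigma = 0.001, 0.005     # assumed drift and volatility per second
p0 = 14.0                    # initial mid-quote (Yuan)
h = 0.3                      # time step of the scheme (seconds)
n_mc = 2_000_000             # Monte Carlo samples for the expectation

def v(p):                    # test function with a known generator
    return p ** 2

exact = 2.0 * mu * p0 + sigma ** 2   # L_P v = mu*dv/dp + 0.5*sigma^2*d2v/dp2

# simulate P_{t+4h} starting from p0 and evaluate the expectation form
p_4h = p0 + mu * 4 * h + sigma * np.sqrt(4 * h) * rng.standard_normal(n_mc)
approx = (v(p_4h).mean() - v(p0)) / (4 * h)

print(f"exact L_P v : {exact:.6f}")
print(f"MC estimate : {approx:.6f}")
```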
By discretizing Equation (<ref>) and putting together (<ref>) to (<ref>), we arrive at the operators
𝒟_i^h(t, x, y, p, v)=max[𝒯_i^h(t, x, y, p, v), ℳ_i^h(t, x, y, p, v)],
where
𝒯_i^h(t, x, y, p, v)
= -h γ g(y)+1/4{𝔼[v_i(t+h, x, y, P_t+4 h^t, p)]+𝔼[v(t+h, x, y, p, S_t+4 h^t, i δ)].
+max_(q^b, ℓ^b) ∈ℚ^b ×[0, ℓ̅]𝔼[v _ i (t+h, x-π^b(q^b, p, iδ) ℓ^bΔ N_4 h^i, q^b, y+ℓ^bΔ N_4 h^i, q^b, p)]
+max_(q^a, ℓ^a) ∈ℚ^a ×[0, ℓ̅]𝔼[v _ i (t+h, x+π^a(q^a, p,iδ) ℓ^aΔ N_4 h^i, q^a ,
y.-ℓ^aΔ N_4 h^i, q^a, p)]} ,
and
ℳ_i^h(t, x, y, p, v)=max_e ∈[-e̅, e̅] v_i(t +h, x-c(e, p, iδ), y+e, p).
In this formula, P^t,p represents the mid-quote Markov process that starts at time t with price p, while S^t,iδ denotes the spread process that begins at time t with a spread of iδ. Furthermore, Δ N_4 h^i, q^b denotes the increment of a Poisson process with rate λ^b(q^b, iδ) over the interval [t, t + 4h], and Δ N_4 h^i, q^a is defined analogously with rate λ^a(q^a, iδ).
Subsequently, the Euler-scheme discretized solution with step size h, represented by v^h = (v_i^h)_i ∈𝕀_m, can be determined by solving backward in time using the following formulas
v_i^h(t_n, x, y, p) =U(L_i(x, y, p)),
v_i^h(t_k, x, y, p) =𝒟_i^h(t_k, x, y, p, v^h), k=n-1, ⋯, 0 .
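To illustrate the backward-in-time structure of this recursion, the following Python sketch implements a drastically reduced toy version of the dynamic program (our own illustration, not the scheme used for the results below): linear utility, zero drift, zero fees, a fixed one-tick spread, quotes fixed at the touch, limit order sizes restricted to {0, l̅}, and no market orders. Under these assumptions the value function net of mark-to-market wealth depends only on time and inventory, which keeps the backward sweep small.

```python
import numpy as np

# Toy backward recursion (our illustration, not the full scheme): assumes
# linear utility, zero drift, zero fees, a fixed spread s, quotes fixed at the
# best bid/ask, limit order sizes in {0, l_bar}, and no market orders. The
# value net of mark-to-market wealth then depends only on (t, y).
T, h = 300.0, 0.3
n = int(T / h)
s = 0.02                                   # fixed spread (assumed, price units)
l_bar = 100                                # maximum limit order size (shares)
lam_b = lam_a = 0.1                        # assumed fill intensities per second
gamma = 0.001                              # inventory penalty weight
g = lambda y: (y / l_bar) ** 2             # inventory penalty function
y_grid = np.arange(-500, 501, l_bar)       # inventory grid (shares)

p_fill_b = 1.0 - np.exp(-lam_b * h)        # prob. the buy quote is hit in (t, t+h]
p_fill_a = 1.0 - np.exp(-lam_a * h)        # prob. the sell quote is lifted

def idx(y):
    # map an inventory level to its grid index, clipping at the boundaries
    # (a crude way to enforce the inventory constraint [-500, 500])
    return int(np.clip((y - y_grid[0]) // l_bar, 0, len(y_grid) - 1))

w = -np.abs(y_grid) * s / 2                # terminal cost of unwinding at the touch
for k in range(n):                         # backward sweep over time
    w_new = np.empty_like(w)
    for i, y in enumerate(y_grid):
        best = -np.inf
        for lb in (0, l_bar):              # post a buy limit order or not
            for la in (0, l_bar):          # post a sell limit order or not
                val = 0.0
                for fb, pb in ((0, 1 - p_fill_b), (1, p_fill_b)):
                    for fa, pa in ((0, 1 - p_fill_a), (1, p_fill_a)):
                        y_next = y + lb * fb - la * fa
                        spread_pnl = (lb * fb + la * fa) * s / 2
                        val += pb * pa * (spread_pnl + w[idx(y_next)])
                best = max(best, val - gamma * g(y) * h)
        w_new[i] = best
    w = w_new
print(dict(zip(y_grid.tolist(), np.round(w, 3).tolist())))
```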
§.§ Convergence of the Numerical Scheme to HJB Equation
We prove the convergence of the scheme below. For 𝒯_i^h(t, x, y, p, v), from the definition of the infinitesimal generator and <cit.>, we have
𝔼[v_i(t+h, x, y, P_t+4 h^t, p)] =v_i(t+h, x, y, p)+4 h ℒ_p v_i+o(h)
𝔼[v(t+h, x, y, p, S_t+4 h^t, i δ)] =v_i(t+h, x, y, p)+4 h R_S(t) v_i+o(h)
𝔼[v_i(t+h, x-π_i^b l^bΔ N_4 h^i, q^b, y+l^bΔ N_4 h^i, q^b, p)] =v_i(t+h, x, y, p)+4 h A^b v_i+o(h)
𝔼[v_i(t+h, x+π_i^a l^aΔ N_4 h^i, q^a, y-l^aΔ N_4 h^i, q^a, p)] =v_i(t+h, x, y, p)+4 h A^a v_i+o(h).
From Equation (<ref>), (<ref>), (<ref>), we have
lim _h → 0v_i(t, x, y, p)-𝒯_i^h(t, x, y, p, v)/h =-∂ v_i/∂ t-max_(q, ℓ) ∈ℚ^b ×ℚ^a ×[0, ℓ̅]^2ℒ^q, ℓ v_i+γ g.
For ℳ_i^h(t, x, y, p, v), it is the direct finite difference form of ℳ_i, and thus
lim _h → 0 [v_i(t, x, y, p) -ℳ_i^h(t, x, y, p, v) ]= v_i-ℳ v_i.
Combining these two equations above, we have
lim _h → 0min[v_i(t, x, y, p)-𝒯_i^h(t, x, y, p, v)/h, v_i(t, x, y, p) -ℳ_i^h(t, x, y, p, v)]
=min[-∂ v_i/∂ t-max_(q, ℓ) ∈ℚ^b ×ℚ^a ×[0, ℓ̅]^2ℒ^q, ℓ v_i+γ g, v_i-ℳ v_i],
which indicates that when h → 0, Equation (<ref>) converges to the solution of the HJB equation.
§ NUMERICAL EXPERIMENTS
In this section, we present the results of our numerical experiments on market making in the Chinese stock market. We begin by discussing the parameter estimation process used to calibrate our model to actual market data. Subsequently, we compare the performance of our baseline strategy under various market conditions, including the impact of volatility, stamp duty, and price drift. Through our analysis and discussions, we provide valuable insights into market making strategies in the Chinese stock market and their implications for market makers and policymakers.
§.§ Parameter Estimation
In this section, we estimate the key model parameters using stock data from a historical period [0, T_p] and present some results of the analysis.
§.§.§ Estimation of Spread and Mid-quote Processes
The spread process S_t = Ŝ_N_t contains two components as discussed in Section <ref>: the discrete-time Markov chain Ŝ_n and the jump process N_t. We estimate the transition matrix P=(ρ_ij) and intensity λ associated with these two components from stock data.
Since the actual value of spread process S_t can be observed from the historical period [0, T_p], we can determine the jump times of the spread process as
θ_0=0, θ_n+1 = inf{t>θ_n : S_t ≠ S_t-}, ∀ n ≥ 0,
where S_t^- = lim_u → t^- S_u. Consequently, we can deduce the actual values of N_t and Ŝ_n from their definitions by
N_t = #{j: 0<θ_j ≤ t}, t ≥ 0,
Ŝ_n = S_θ_n, n ≥ 0,
where #{·} denotes the size of the set. Knowing the actual values of these two component processes, we can then estimate their parameters using maximum likelihood estimation (MLE).
For the transition matrix (ρ_ij)_1≤ i,j ≤ m of Ŝ_n, a consistent estimator of its element ρ_ij can be derived by
ρ̂_i j=∑_n=1^N_T_p 1_{(Ŝ_n, Ŝ_n-1)=(j δ, i δ)}/∑_n=1^N_T_p 1_{Ŝ_n-1=i δ}.
Meanwhile, a consistent estimator for the intensity λ of the jump process N_t is
λ̂=N_T_p/T_p.
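As an illustration of these estimators, the following Python sketch (our own, with placeholder data rather than real tick data) computes ρ̂_ij and λ̂ from a spread path sampled on a fine time grid.

```python
import numpy as np

# Minimal sketch (our illustration, with placeholder data rather than real
# tick data): estimate the embedded-chain transition matrix and the jump
# intensity of the spread from a path sampled on a fine time grid. The spread
# is encoded in ticks, i.e. values in {1, ..., m}.
def estimate_spread_params(t_grid, spread, m):
    jump_idx = np.flatnonzero(np.diff(spread) != 0) + 1       # indices of jumps
    s_hat = np.concatenate(([spread[0]], spread[jump_idx]))   # embedded chain
    lam_hat = len(jump_idx) / (t_grid[-1] - t_grid[0])        # jump intensity

    counts = np.zeros((m, m))
    for prev, nxt in zip(s_hat[:-1], s_hat[1:]):
        counts[prev - 1, nxt - 1] += 1                        # transition counts
    row_sums = counts.sum(axis=1, keepdims=True)
    rho_hat = np.divide(counts, row_sums, out=np.zeros_like(counts),
                        where=row_sums > 0)
    return rho_hat, lam_hat

# toy usage with a synthetic (non-realistic) two-state spread path
rng = np.random.default_rng(1)
t_grid = np.linspace(0.0, 300.0, 3001)
spread = rng.choice([1, 2], size=t_grid.size, p=[0.7, 0.3])
print(estimate_spread_params(t_grid, spread, m=2))
```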
For the estimation of the mid-quote process, assuming the mid-quote P_t is observed at the R+1 snapshot times {t_i = iΔ t : i=0, ⋯, R}, with Δ t = T/R, consistent estimators for μ and σ^2 are given by
μ̂ = P_T -P_0/T,
σ̂^2 = ∑_n=1^R (P_t_n-P_t_n-1- μ̂Δ t)^2/T.
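The following Python sketch (our own illustration) computes μ̂ and σ̂^2 from equally spaced snapshots of the mid-quote; a synthetic path with known parameters is used to check that the estimators recover them.

```python
import numpy as np

# Minimal sketch (our illustration): estimators of the mid-quote drift and
# variance from R+1 equally spaced snapshots, mu_hat = (P_T - P_0)/T and
# sigma2_hat = sum_n (P_{t_n} - P_{t_{n-1}} - mu_hat * dt)^2 / T.
def estimate_midquote_params(prices, T):
    R = len(prices) - 1
    dt = T / R
    mu_hat = (prices[-1] - prices[0]) / T
    increments = np.diff(prices)
    sigma2_hat = np.sum((increments - mu_hat * dt) ** 2) / T
    return mu_hat, sigma2_hat

# check on a synthetic path with known parameters (mu=0.001, sigma^2=2.5e-5);
# mu_hat is noisy on a single 300 s path, sigma2_hat is close to the truth
rng = np.random.default_rng(2)
T, R = 300.0, 3000
dt = T / R
increments = 0.001 * dt + 0.005 * np.sqrt(dt) * rng.standard_normal(R)
prices = 14.0 + np.concatenate(([0.0], np.cumsum(increments)))
print(estimate_midquote_params(prices, T))
```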
§.§.§ Estimation of Limit Order Execution Processes
Whether the limit orders placed by our strategy get fully executed is modeled by two independent Cox processes N_t^b and N_t^a, with intensities λ^b(Q^b_t,S_t) and λ^a(Q^a_t,S_t).
Assuming the trader can observe the execution processes N_t^b and N_t^a in real time, which count the limit orders at bid quote Q^b_t and ask quote Q^a_t that have been fully executed, respectively, the observed trade data can be described as a five-tuple:
(N_t^a, N_t^b, Q_t^a, Q_t^b, S_t) ∈ℝ^+×ℝ^+×ℚ^a ×ℚ^b ×𝕊, t ∈[0, T].
Since N_t^a and N_t^b are independent and their intensities are estimated by the same procedure, we describe only the estimation of λ^b. For λ^b(q^b,s), q^b ∈ℚ^b, s∈𝕊, a consistent estimator is the ratio of the number of limit buy orders executed while the system is in state (q^b,s) to the total time spent in that state.
The following point process is defined to describe the part of process N_t^b belonging to (q^b,s), that is, the part where the trade quote of limit buy orders is q^b and the spread is s:
N_t^b, q^b, s=∫_0^t1_{Q_u^b=q^b, S_u^-=s}d N_u^b, t ≥ 0.
Similarly, the time that the system state stays in (q^b,s) can be defined as:
𝒯_t^b, q^b, s=∫_0^t1_{Q_u^b=q^b, S_u^-=s}d u.
Therefore, a consistent estimator for λ^b(q^b,s) is:
λ̂^b(q^b,s)=N_T^b, q^b, s/𝒯_T^b, q^b, s.
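In code, the estimator amounts to a conditional count over sampled observations of (N^b_t, Q^b_t, S_t); a minimal Python sketch (our own illustration) is given below.

```python
import numpy as np

# Minimal sketch (our illustration): estimate lambda^b(q, s) as the number of
# executions observed while the system is in state (q, s) divided by the total
# time spent in that state, from regularly sampled observations of
# (N^b_t, Q^b_t, S_t).
def estimate_exec_intensity(t_grid, n_exec, quote, spread,
                            quote_levels, spread_levels):
    dt = np.diff(t_grid)
    d_exec = np.diff(n_exec)                       # executions per interval
    lam_hat = {}
    for q in quote_levels:
        for s in spread_levels:
            in_state = (quote[:-1] == q) & (spread[:-1] == s)
            time_in_state = dt[in_state].sum()
            lam_hat[(q, s)] = (d_exec[in_state].sum() / time_in_state
                               if time_in_state > 0 else np.nan)
    return lam_hat
```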
§.§ Baseline strategy
To comprehensively evaluate the effectiveness of the strategy under varying market regulations and parameters within the Chinese stock market, we initially present a baseline strategy α^baseline by considering an ideal market. For a liquidity provider, it is ideal to have both commission fee rate ε and stamp duty fee rate ρ set to 0%. We also assume that the designated market makers have pre-arrangements in place that make short selling of stocks feasible during market making.
We calibrate the parameters in the model according to Section <ref> using stock Ping An Bank Co. Ltd. (000001.SZ) on August 29, 2019. For the mid-quote process P_t in Equation (<ref>), we assume μ=0 to consider that traders lack information about the stock price trend in the worst-case scenario. From the target stock data, we set P_0 = 14 Yuan; the volatility in seconds σ=0.005; the calibrated transition matrix ρ_ij is presented in Table <ref> and the intensity λ=1 for the spread process S_n; and the calibrated execution intensities for the execution processes N_t^a and N_t^b are illustrated in Figure <ref>.
We solve for the strategy over a trading period of duration T=300 seconds. The maximum allowable sizes for both limit and market orders are set to l̅ = e̅ = 100, while the inventory is constrained to the interval [y_min, y_max] = [-500, 500]. To optimize Equation (<ref>), we adopt the CARA utility function U(x,y,p,s) = -exp(-η (x-c(-y, p, s))) with η = 0.5 and c(y,p,s) as in Equation (<ref>), and set γ = 0 in the objective. This utility function is equivalent to that of <cit.>, where the inventory risk is managed in an implicit way. Since the variables and functions are all determined in our optimal control problem, our control strategy α^baseline can be derived by solving the discrete numerical scheme in Equation (<ref>), where we adopt h = 0.3 seconds as the discrete time step size.
In the backtesting process, we employ the Monte Carlo method to simulate the stock price movement and apply the corresponding strategy. We generate a total of 100,000 paths to represent various scenarios for the stock. For every individual path, we initialize both cash and inventory to be zero at t=0, and then proceed to simulate and monitor their evolution in accordance with the strategy implementation.
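A single path of the simulated market can be generated as in the following Python sketch (our own illustration with an assumed tick size and a two-state spread chain); the actual backtest applies the calibrated m-state transition matrix and the optimal controls on top of such simulated paths.

```python
import numpy as np

# Minimal sketch (our illustration, with an assumed tick size and a two-state
# spread chain): simulate one backtest path of the mid-quote (arithmetic
# Brownian motion) and of the spread S_t = S_hat_{N_t} on a 0.3 s grid.
rng = np.random.default_rng(3)

T, h = 300.0, 0.3
n_steps = int(T / h)
mu, sigma, p0 = 0.0, 0.005, 14.0
delta = 0.01                        # tick size (assumed)
lam = 1.0                           # spread jump intensity (per second)
rho = np.array([[0.0, 1.0],         # assumed embedded chain: each jump flips
                [1.0, 0.0]])        # the spread between one and two ticks

# mid-quote path
dP = mu * h + sigma * np.sqrt(h) * rng.standard_normal(n_steps)
P = p0 + np.concatenate(([0.0], np.cumsum(dP)))

# spread path: jump in (t, t+h] with probability 1 - exp(-lam*h), then the
# embedded Markov chain decides the new spread level
S_idx = np.empty(n_steps + 1, dtype=int)
S_idx[0] = 0
for k in range(n_steps):
    if rng.random() < 1.0 - np.exp(-lam * h):
        S_idx[k + 1] = rng.choice(2, p=rho[S_idx[k]])
    else:
        S_idx[k + 1] = S_idx[k]
S = (S_idx + 1) * delta             # spread in price units

print(P[:5], S[:5])
```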
In Figure <ref>, we illustrate the evolution of key variables along one path during the backtest, including the mid-quote process P_t, the spread process S_t, the absolute cumulative trading stocks Q_t, and cumulative wealth U_t. Specifically, the absolute cumulative trading stocks Q_t represent the absolute number of shares that the trader has successfully bought or sold through limit or market orders within the time interval [0,t]. The cumulative wealth U_t accounts for the total wealth at time t, encompassing both cash from spread profit and stock value, as expressed by U_t = X_t - c(Y_t,P_t,S_t) according to Equation (<ref>). This demonstrates that, as time progresses, the market maker profits from the bid-ask spread by following the optimal strategy.
To establish a benchmark for the performance of the Baseline strategy α^baseline, we introduce the Constant strategy α^constant (e.g., a zero-intelligence agent) for comparison purposes. The Constant strategy involves placing buy and sell limit orders at the best bid and ask prices, respectively, with a maximum limit order size of l̅ at each time step. No market orders are placed except at the final time T. Specifically, the limit order part of this strategy is defined as α^constant, make_t = (Bb, Ba, l̅, l̅), in contrast to Equation (<ref>).
We compare the Baseline and Constant strategies using multiple metrics, as shown in Table <ref>. The Baseline strategy demonstrates two primary advantages over the Constant strategy: higher profit per trade and lower risk per trade. The mean profit, profit per trade, and skewness of the Baseline strategy are notably higher than those of the Constant strategy, indicating that the Baseline strategy generates higher returns on average and exhibits a more positively-skewed distribution of profits. Conversely, the standard deviation, risk per trade, and kurtosis of the Baseline strategy are significantly lower than those of the Constant strategy, suggesting that the Baseline strategy incurs lower volatility in its performance. The lower kurtosis also indicates a more normally distributed profit for the Baseline strategy, implying fewer extreme outcomes compared to the Constant strategy.
Overall, these advantages lead to a substantial improvement in the information ratio, increasing from 0.456 to 3.850, emphasizing the superior risk-adjusted performance of the Baseline strategy. Moreover, Figure <ref>(a) displays the empirical distributions of the profit for both strategies, further illustrating the enhanced performance of the Baseline strategy. The distribution of the Baseline strategy is more concentrated with a higher peak, suggesting a greater probability of achieving profits within a specific range and a more consistent performance compared to the Constant strategy. Figure <ref>(b) shows the empirical distributions of path-wise mean absolute inventory Mean(|Y_t|), computed as the average of the absolute inventory |Y_t| for each path. By leveraging market orders, the Baseline strategy maintains a lower absolute inventory, as seen in Figure <ref>(b), while preserving a similar level of limit orders as the Constant strategy, as indicated in Table <ref>. This ensures consistent profits from limit orders while more effectively managing inventory risk.
§.§ Volatility Impact
Volatility measures the uncertainty of an asset price, and the efficacy of the strategy may be affected by its level. It is therefore crucial to understand the impact of volatility on the performance of the strategy. In this section, we investigate the sensitivity of the performance with respect to the volatility and assess how the level of volatility affects the strategy's inventory.
§.§.§ The Impact of Volatility on Inventory
It is advisable for the strategy to adapt its inventory to varying levels of volatility to minimize inventory risk. To explore the impact of volatility on inventory, we replace the baseline strategy's volatility parameter, σ, with five distinct levels and calculate the corresponding strategy solutions. We then backtest these solutions and plot the curves of the mean and standard variance of the absolute inventory, |Y_t|, against time in Figure <ref>.
The results reveal a negative correlation between absolute inventory and volatility. Specifically, as volatility rises, the probability of significant price fluctuations increases considerably. Consequently, to address the heightened price uncertainty, the strategy adjusts its inventory more frequently to mitigate the risks associated with such movements, resulting in a relatively lower absolute inventory. In contrast, when volatility is low, the strategy can afford to maintain its inventory for extended periods of time since price movements are expected to be minor. In this scenario, the absolute inventory tends to be comparatively higher.
§.§.§ Sensitivity w.r.t. Volatility
To assess the strategy's robustness, we subject the baseline strategy (derived for σ=0.005) to backtesting under various levels of perturbed volatility and evaluate its performance. Table <ref> reveals that the mean profit remains relatively stable across a range of volatility levels. Conversely, the standard deviation of profit increases with higher values of σ, leading to a decline in the information ratio. This outcome suggests that the strategy can maintain its performance when volatility is overestimated.
However, underestimating volatility negatively impacts the information ratio. A potential reason for the minor influence of volatility changes on the mean profit is that the model does not account for the correlation between volatility and intensity of execution processes. It is reasonable to assume a relationship between these factors: higher volatility levels might result in more intense trading activities, while lower levels could lead to reduced trading.
§.§ The Impact of Stamp Duty
Stamp duty, a tax on financial transactions such as selling securities, can impact both market makers and government revenue. While it serves as a revenue source for governments, it can also affect market liquidity. In this section, we examine the impact of stamp duty on the Chinese market's liquidity, considering the implications for both market makers and government revenue.
We analyze the strategy under varying stamp duty rates, from 0‰ to 2‰, and different levels of volatility, from 0.003 to 0.007, for each stamp duty rate level. We backtest these strategies and keep track of the mean and standard deviation of profit, total executed volume, and total tax paid. The results are depicted in Figure <ref>.
Figures <ref>(a) and (b) illustrate that the market maker's profits decline almost exponentially as the stamp duty rate increases, at all volatility levels. This implies that stamp duty is a crucial determinant of the profits of high-frequency market makers.
Figures <ref>(c) and (d) reveal a significant reduction in the total executed volume at higher stamp duty rates, suggesting that market liquidity is adversely impacted. Regarding government revenue, Figures <ref>(e) and (f) reveal that the total tax paid by the strategy (in other words, collected by the government) initially increases and subsequently decreases as the stamp duty rate rises. Specifically, in our experiment, when the stamp duty rate falls within the 0.5‰ to 1‰ range, the total tax paid barely fluctuates. These results show that as the stamp tax rate increases, both the market maker's profit and market liquidity (indicated by the total execution volume of the market making strategy) decrease rapidly, while in the high tax rate range the collected tax amount also decreases with the tax rate. A proper stamp duty rate should therefore weigh the trade-off between tax revenue, market liquidity, and market maker profits, striking a balance that serves both market participants and government objectives.
§.§ Drift Impact
In this section, we explore the impact of drift on market making in the Chinese stock market. Drift, denoted by μ, refers to the expected change in stock price over time and is defined in Equation (<ref>). Accurate forecasting of the price drift is essential for the success of market making strategies, as it allows market makers to adjust their order strategies effectively and manage inventory and adverse selection risks. Therefore, within the capacity of the current model framework, we aim to investigate the effect of variations in drift on the strategy and its resulting performance.
To achieve this, we modify the drift of our baseline strategy while maintaining all other parameters unchanged. We consider three distinct drift levels: μ=-0.001, 0, and +0.001. We then recompute the optimal strategies for each drift level and assess the impact of these drift variations on the strategy. For better understanding, we illustrate the optimal order strategies at time t=0 in Figure <ref>. Following this, we backtest these strategies and present the performance results in Table <ref>.
Figure <ref> demonstrates that when the drift is neutral (μ=0), the order strategy at time t = 0 is well-balanced and symmetric.
With a small initial inventory and a low bid-ask spread, if the maker is not restricted to posting limit orders at the best bid and ask levels, he or she may choose to post outside the best levels in order to capture a larger profit from the spread. As the inventory grows (say, net long), if the spread is large and thus offers a good opportunity for spread profit, the maker takes this opportunity while lowering both the bid and ask quotes by one tick to increase the chance of selling and reduce the chance of buying. If the (net long) inventory is large enough and/or the spread profit cannot compensate for the inventory risk, the maker uses a market order to unwind the inventory immediately.
In the case of positive drift (μ=0.001), where the market maker anticipates an upward movement in price, the strategy tends to move the bid and ask quotes up and favors a long position to capitalize on the expected increase in price. Conversely, with negative drift (μ=-0.001), the strategy leans towards taking a short position, as the market maker predicts a decrease in the stock price.
Table <ref> presents the performance at the three drift levels. In the neutral drift scenario (μ=0), the mean profit and information ratio are both lower than in the non-neutral drift scenarios (μ=0.001, -0.001). This suggests that when the drift estimate is accurate and aligned with the actual drift in the market, the strategy can generate higher profits and achieve better risk-adjusted performance. As is well known to practitioners and pointed out by various researchers <cit.>, informed traders have clear target prices and usually trade in a directional manner (i.e., they induce a price drift), which imposes adverse selection risk on market makers. If the market maker is unaware of the price drift, significant losses can result. Conversely, an accurate forecast of the price drift, combined with posting orders in advance along the predicted direction, effectively enhances the maker's ability to manage adverse selection risk and earn more profit.
§ DISCUSSIONS AND CONCLUSIONS
This study investigated market making in the Chinese stock market. By simulating the market maker's behavior under various market conditions, our numerical experiments shed light on several important aspects of market making in this market.
First, our results demonstrate the impact of volatility. Specifically, volatility has direct effects on the market maker's inventory. Higher volatility leads to increased inventory risk, making it necessary for the market maker to maintain lower inventory levels in order to mitigate the risk of holding large positions.
Second, our findings indicate that the stamp duty rate is a critical factor in market making. The stamp duty rate has a negative impact on both the profit of the market maker and the liquidity of the market. Moreover, our analysis reveals that the total tax revenue amount is almost a concave function of the stamp tax rate, suggesting that policymakers can use this relationship to set stamp duty rates in a way that maximizes tax revenue without significantly harming market liquidity or the profitability of market makers.
Third, our study emphasizes the significance of considering the impact of price drift on market making strategies. Accurately estimating the drift is crucial for the market maker to optimize their order strategies and manage inventory effectively. In other words, accurate drift estimation leads to increased profits for the market maker.
While the results of this study have important implications for market makers and policymakers in the Chinese stock market, there are some limitations to our research. For example, we assume the mid-quote P_t follows a continuous-state diffusion process, which may not be an ideal fit for the short-term evolution of the mid-quote due to its discreteness resulting from the minimum tick size. Additionally, we relied on certain assumptions in our model and data, such as the independence between the mid-quote P_t and spread S_t. Despite these limitations, our findings contribute to the current understanding of market making in the Chinese stock market and offer directions for further research in designing effective market making strategies.
§ ACKNOWLEDGEMENTS
The authors wish to thank Pengcheng Laboratory for the support of the quantitative finance research project. Thanks to Jimin Han for his help with data cleaning and the reconstruction of the limit order book.
§ MATHEMATICAL MODELLING OF A MARKET MAKER
§.§ The Limit Order Book (LOB)
As a simplification, the prices of a stock in the LOB are modeled by the mid-quote and the bid-ask spread.
The mid-quote P_t of the stock price is assumed to follow an exogenous drifted diffusion process:
dP_t = μ dt + σ d W_t,
where W_t is a standard Brownian motion, μ is the drift and σ is the volatility of the stock price, respectively.
Following <cit.>, we use an exogenous finite-state continuous process S_t to denote the bid-ask spread of the risk asset at time t. With a minimum tick size δ, the spread S_t takes values in 𝕊=δ𝕀_m and jumps at random times, where 𝕀_m = {1,⋯,m} and m∈ℕ^+ is a constant. To model the jump transitions of S_t, two independent processes N_t and Ŝ_n are introduced. N_t is a Poisson process with a deterministic intensity λ(t) to represent the cumulative count of random bid-ask spread jumps by time t. Ŝ_n is a discrete-time Markov chain valued in 𝕊 with a probability transition matrix P[Ŝ_n+1 = jδ | Ŝ_n= iδ] = ρ_ij, 1≤ i,j ≤ m, to represent the transition of spread values. Hence, the spread process S_t is characterized by S_t = Ŝ_N_t, t ≥ 0, which is a continuous-time Markov chain with transition matrix R_S(t) =(r_ij(t))_1≤ i,j ≤ m, where r_ij(t) = λ(t) ρ_ij for i≠ j, and r_ii(t) = - ∑_j≠ i r_ij(t).
The best-bid and best-ask prices are defined as P_t^b = P_t - S_t/2 and P_t^a = P_t + S_t/2, and S_t and P_t are assumed to be independent.
§.§ The Limit Order Strategies of the Market Maker
The limit order strategy of the market maker is modeled as a continuous-time predictable control process:
α_t^make=(Q^b_t,Q^a_t,L^b_t,L^a_t), t≥ 0,
where L^b_t ∈ [0,l̅] and L_t^a ∈ [0,l̅], l̅>0 represent the size of the buy and sell limit orders, and Q^b_t ∈ℚ^b = {Bb_-, Bb, Bb_+} and Q^a_t ∈ℚ^a = {Ba_+, Ba, Ba_-} represent the corresponding bid quote and ask quote, respectively. Here we consider three quote levels for buy and sell orders as shown in Figure <ref>. For the bid quote Q_t^b, Bb denotes the best bid quote P^b_t, Bb_+ denotes the best bid quote plus one tick at P^b_t + δ, and Bb_- denotes the best bid quote minus one tick at P^b_t - δ. For the ask quote Q_t^a, similarly, Ba denotes the best ask quote P^a_t, Ba_- denotes the best ask quote minus one tick at P^a_t - δ, and Ba_+ denotes the best ask quote plus one tick at P^a_t + δ.
We use π^b(Q_t^b, P_t, S_t) and π^a(Q_t^a, P_t, S_t) to represent the limit bid and ask order prices at time t taking commission and stamp tax into consideration, which are written as
π^b(q^b,p,s) =
(p - s/2 - δ)(1+ε) if q^b =Bb_-
(p - s/2 )(1+ε) if q^b =Bb
(p - s/2 + δ)(1+ε) if q^b =Bb_+,
π^a(q^a,p,s) =
(p + s/2 - δ)(1-ε-ρ) if q^a =Ba_-
(p + s/2)(1-ε-ρ) if q^a =Ba
(p + s/2 + δ)(1-ε-ρ) if q^a =Ba_+,
where ε and ρ represent the commission rate and stamp tax rate, respectively.
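A direct transcription of these fee-adjusted quote prices into Python reads as follows (our own illustrative sketch; the quote labels are passed as strings).

```python
# Minimal sketch (our illustration): the fee-adjusted limit order prices pi^b
# and pi^a for the three bid and ask quote levels; eps is the commission rate
# and rho_tax the stamp duty rate (charged on the sell side only).
def pi_bid(q_b, p, s, delta, eps):
    level = {"Bb-": -delta, "Bb": 0.0, "Bb+": +delta}[q_b]
    return (p - s / 2 + level) * (1 + eps)

def pi_ask(q_a, p, s, delta, eps, rho_tax):
    level = {"Ba-": -delta, "Ba": 0.0, "Ba+": +delta}[q_a]
    return (p + s / 2 + level) * (1 - eps - rho_tax)

# example: mid-quote 14.00, spread 0.02, tick 0.01
print(pi_bid("Bb", 14.0, 0.02, 0.01, eps=0.0))                   # 13.99
print(pi_ask("Ba+", 14.0, 0.02, 0.01, eps=0.0, rho_tax=0.001))   # ~14.006
```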
We assume that the limit buy and sell orders of the market maker in question are small orders that are filled by incoming market orders from the counter-parties. We further assume that the cumulative execution counts of limit buy and sell orders by time t follow independent Cox processes N^b_t and N^a_t, whose intensities depend only on the quote and the spread and are written as λ^b(Q^b_t,S_t) and λ^a(Q^a_t,S_t), respectively. According to market characteristics, the following inequalities should hold for all s ∈𝕊: λ^b(Bb_-,s) < λ^b(Bb,s) < λ^b(Bb_+,s) and λ^a(Ba_+,s) < λ^a(Ba,s) < λ^a(Ba_-,s).
Thus, for a limit order strategy α_t^make, the cash amount X and the number of shares Y held by the market maker follow the equations:
dY_t = L^b_t dN^b_t - L^a_t dN^a_t
dX_t = -π^b(Q^b_t, P_t^-, S_t^-) L^b_t dN^b_t + π^a(Q^a_t, P_t^-, S_t^-) L^a_t dN^a_t.
§.§ The Market Order Strategies of the Market Maker
Market orders are used when the trader wishes to execute the trade immediately without waiting. This avoids the non-execution risk of the order, at the cost that the transaction may not be filled at a designated or more favorable price level. In our study, as a simplification, market orders are assumed to be sufficiently small in volume and to be executed immediately at the best price in the LOB without price impact. Thus, the market order strategy is modeled simply as an impulse control:
α^take=(τ_n,ζ_n)_n≥ 0,
where τ_n is a stopping time that represents the time at which the market maker's n^th market order is placed, and ζ_n is a random variable that represents the size of the n^th market order, taking values in [-e̅,e̅], where e̅ > 0. If ζ_n ≥ 0, then the market order buys ζ_n shares at the current best ask price. If ζ_n < 0, then the market order sells -ζ_n shares at the current best bid price.
The changes in cash and inventory are thus jump processes, and the changes at time τ_n can be described by the following equations:
Y_τ_n = Y_τ^-_n + ζ_n,
X_τ_n = X_τ^-_n - c(ζ_n, P_τ_n, S_τ_n),
where
c(e, p, s) = (e+ ε |e| + ρ |e| · 1_{e<0})p + (|e|+ ε e + ρ e · 1_{e<0})s/2
represents the amount of cash corresponding to an order volume of e, a stock mid-quote of p, and a spread of s, and ε and ρ represent the commission rate and stamp tax rate, respectively.
§.§ Optimal Order Strategies
Following the framework of <cit.>, over a finite time horizon T < ∞, such as a single trading day, the market maker aims to maximize the profit from trading across the bid-ask spread, to penalize the net inventory along the way, and to liquidate the net position at the terminal time T (no overnight position is kept, in order to avoid inventory risk and overnight capital consumption). The optimal control problem can therefore be formulated as
max_α=(α^take, α^make)𝔼[U(X_T) -γ∫_0^T g(Y_t)dt],
where the control α must satisfy Y_T=0. U is a monotonically increasing reward function, g is a non-negative, convex function, and γ is a non-negative penalty constant which expresses the view of the market maker towards the inventory risk aversion.
§.§ Value function
The value function v(t,x,y,p,s) represents the maximum expected utility that an investor can obtain by taking a particular control action in a given state at time t. This maximum is taken over all possible control actions α∈𝒜, where 𝒜 denotes the set of all the limit and market order strategies α=(α^take, α^make). In the previous section, it was required that the position be liquidated at the terminal time T, i.e. Y_T=0. In order to remove this requirement, a liquidation function L(x, y, p, s) is introduced, and for a given state characterized by the variables (x,y,p,s), it is defined as:
L(x, y, p, s) =x-c(-y, p, s)
=x+ (y-ε |y| - ρ |y| · 1_{y<0})p-(|y|-ε y - ρ y · 1_{y<0}) s/2 ,
which represents the total amount of cash that an investor would have if they immediately liquidate their entire position at market price.
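For concreteness, the cash-flow function c and the liquidation value L can be transcribed as in the following Python sketch (our own illustration).

```python
# Minimal sketch (our illustration): the cash-flow function c(e, p, s) of a
# market order with signed size e, and the liquidation value L(x, y, p, s)
# obtained by immediately unwinding the inventory y at the touch.
def cash_flow(e, p, s, eps=0.0, rho_tax=0.0):
    # buys (e > 0) pay the best ask plus commission; sells (e < 0) receive the
    # best bid net of commission and stamp duty (stamp duty on sells only)
    return ((e + eps * abs(e) + rho_tax * abs(e) * (e < 0)) * p
            + (abs(e) + eps * e + rho_tax * e * (e < 0)) * s / 2)

def liquidation_value(x, y, p, s, eps=0.0, rho_tax=0.0):
    return x - cash_flow(-y, p, s, eps, rho_tax)

# e.g. liquidating a long position of 100 shares at mid 14.00, spread 0.02,
# with a 1 permille stamp duty: 100 * 13.99 * 0.999 = 1397.601
print(liquidation_value(x=0.0, y=100, p=14.0, s=0.02, rho_tax=0.001))
```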
With the introduction of the liquidation function, the control problem from the previous section can be rewritten as:
max_α=(α^take, α^make)𝔼[U(L(X_T, Y_T, P_T, S_T))-γ∫_0^T g(Y_t) d t]
The value function for this problem can then be defined as:
v(t,x,y,p,s)=max_α∈𝒜𝔼_t, x,y,p,s[U(L(X_T,Y_T,P_T, S_T))-γ∫_t^T g(Y_u) d u]
where 𝔼_t, x,y,p, s denotes the expected value of the process (X,Y,P,S) with initial values (X_t^-,Y_t^-,P_t^-,S_t^-)=(x,y,p,s). Since s is discrete, the value function can be expressed as v_i(t,x,y,p)=v(t,x,y,p,iδ). This control problem is a mixed regular/impulse control problem, and can be solved using the dynamic programming method.
For limit order control, given any q=(q^b,q^a) and l=(l^b,l^a), consider the operator
ℒ^q, ℓ v(t, x, y, p, s) =ℒ_P v(t, x, y, p, s) +R_S(t) v(t, x, y, p, s)
+A^bv(t,x,y,p,s) +A^av(t,x,y,p,s)
where
ℒ_Pv(t,x,y,p,s) = μ∂ v/∂ p(t,x,y,p,s) + σ^2/2∂^2 v/∂ p^2(t,x,y,p,s),
R_S(t) v(t, x, y, p, s) = ∑_j=1^m r_i j(t)[v(t, x, y, p, j δ)-v(t, x, y, p, i δ)],
A^bv(t,x,y,p,s) =λ^b(q^b,s)[v(t,x-π^b(q^b, p, s) ℓ^b, y+ℓ^b,p,s)-v(t,x,y,p,s)],
A^av(t,x,y,p,s) =λ^a(q^a,s)[v(t,x+π^a(q^a, p, s) ℓ^a, y-ℓ^a,p,s)-v(t,x,y,p,s)].
The first term in ℒ^q,l represents the infinitesimal generator of the mid-quote process P, the second term represents the generator of the continuous-time Markov chain price process S, and the last two terms represent the infinitesimal generators of the jump processes caused by the changes in cash and inventory when the limit order (Q_t,L_t)=(q,l) occurs.
For market order control, consider the impulse operator
ℳ v(t, x, y, p, s)=max_e ∈[-e̅, e̅] v(t, x-c(e, p, s), y+e, p, s) .
When combining limit order and market order controls, the dynamic programming equation for this control problem is
min[-∂ v_i/∂ t-max_(q, ℓ) ∈ℚ^b ×ℚ^a ×[0, ℓ̅]^2ℒ^q, ℓ v+γ g, v-ℳ v]=0,
where the terminal condition is
v(T, x, y, p, s)=U(L(x, y, p, s)).
Furthermore, the HJB equation for v_i (1≤ i ≤ m) reads,
min[ -∂ v_i/∂ t-μ∂ v_i/∂ p + σ^2/2∂^2 v_i/∂ p^2-∑_j=1^m r_i j(t)[v_j(t, x, y, p)-v_i(t, x, y, p)]
-max_(q^b, ℓ^b) ∈𝒬_i^b×[0, ℓ̅]λ^b(q^b,iδ)[v_i(t, x-π^b(q^b, p, iδ) ℓ^b, y+ℓ^b, p) -v_i(t, x, y, p)]
-max_(q^a, ℓ^a) ∈𝒬_i^a×[0, ℓ̅]λ^a(q^a,iδ)[v_i(t, x+π^a(q^a, p, iδ) ℓ^a, y-ℓ^a, p)-v_i(t, x, y, p)]
+γ g(y), .v_i(t, x, y, p)-max_e ∈[-e̅, e̅] v_i(t, x-c(e, p, iδ), y+e, p)]=0,
where the functions π^b, π^a and c are defined in Equations (<ref>), (<ref>) and (<ref>), respectively.
|
http://arxiv.org/abs/2306.12337v1
|
20230621153609
|
Room temperature optically detected magnetic resonance of single spins in GaN
|
[
"Jialun Luo",
"Yifei Geng",
"Farhan Rana",
"Gregory D. Fuchs"
] |
cond-mat.mtrl-sci
|
[
"cond-mat.mtrl-sci",
"quant-ph"
] |
super
Department of Physics, Cornell University
School of Electrical and Computer Engineering, Cornell University
[email protected]
School of Applied and Engineering Physics, Cornell University
Room temperature optically detected magnetic resonance of single spins in GaN
Gregory D. Fuchs
2023 June 20th
=============================================================================
Optically detected magnetic resonance<cit.> (ODMR) is an efficient mechanism to read out the spin of solid-state color centers at room temperature, thus enabling spin-based quantum sensors of magnetic field,<cit.> electric field,<cit.> and temperature<cit.> with high sensitivity and broad commercial applicability. The mechanism of room temperature ODMR is based on spin-dependent relaxation from the optically excited states to the ground states, and thus it is an intrinsic property of a defect center. While the diamond nitrogen-vacancy (NV) center is the most prominent example,<cit.> room temperature ODMR has also been discovered in silicon vacancy centers <cit.>
and divacancy centers<cit.>
in SiC, and recently in boron vacancy center ensembles<cit.> and unidentified single defects<cit.> in hexagonal boron nitride (hBN).
Of these material systems, diamond NV centers are the most technologically important owing to their large (20–30%) ODMR contrast, long spin coherence, high quantum efficiency, and high brightness.<cit.> Unfortunately, diamond as a substrate is far from being technologically mature.
For example, diamond is unavailable with high crystalline quality in large-scale wafers and lacks hetero-epitaxial integration with semiconductors for integrated sensor technologies.
Likewise, boron vacancy centers in hBN have large contrast (up to 20%),<cit.> however, they are available only as small flakes, have low quantum efficiency,<cit.> and lack a visible zero-phonon line at room temperature.<cit.>
Silicon carbide is a technologically mature substrate with recent advances in scalable monolithic integration of color-center-based quantum light sources.<cit.> However, the room temperature ODMR of its defects discovered so far have low contrast (under 1%).<cit.>
In this work we demonstrate bright single defects in GaN that display large ODMR contrast (up to ∼30%). Because GaN is a mature semiconductor with well-developed electronic technologies, this defect platform is promising for integrated quantum sensing applications.
GaN has emerged as a semiconductor of choice for power electronics owing to its wide direct bandgap and high breakdown field.<cit.>
Recently, it has also been found to host bright single photon emitters with spectrally narrow photoluminescence (PL) in the visible spectrum.<cit.> These defect centers have zero phonon linewidths of a few meV at room temperature and less than 1 meV at cryogenic temperature.<cit.>
These excellent optical properties, combined with the engineerability of GaN make these single-photon emitting defects attractive for on-chip photonics and quantum technologies that require single-photon sources. The atomic structure of these defects has not yet been identified.
In this work, we report that GaN single photon emitters possess spin S≥1 and exhibit high-contrast magnetic field dependent PL and optically detected magnetic resonance (ODMR) at room temperature. Our study reveals at least two distinct groups of defects, each with a distinct ODMR spectrum as well as sign of ODMR contrast.
This is promising for sensing applications owing to the high ODMR contrast hosted by a mature semiconductor platform, and it is also promising for unraveling the atomic structure of these defect types by providing critical information about the defect orientation within the crystal, spin multiplicity, and sign of the ODMR response.
Figure <ref>(a-c) detail the typical room temperature optical properties of an isolated GaN defect used in our study. The defects are optically separated on the scale of a few micrometers, enabling photon correlation measurements to ensure we examine single defects. A solid-immersion lens aids in photon collection, with a typical rate of 80 kCounts/s into a 0.9 NA microscope objective when excited with a 532 nm laser with 20 μW power. Defect #2 emits most of its PL into a narrow linewidth centered near 667 nm. As noted previously,<cit.> not all GaN defects share the same emission energy. Additionally, while these defects are mainly photo-stable, like most solid-state single photon emitters, these defects suffer some instabilities, including occasional photo-bleaching. Additional details of the optical properties and photo-stability are discussed in the supplementary information.
A simple method of screening a particular defect for spin-dependent optical properties is measuring its magnetic field dependent PL (magneto-PL).<cit.> Although the specific magneto-PL response depends on the angle of the magnetic field with respect to the defect spin quantization axis, we select the GaN c-axis as a potential direction of high symmetry. The result for five individual defects is shown in Fig. <ref>(d). We immediately notice that the defects fall into two groups of behavior. In the first group, there is a ∼7% dip in PL at low magnetic fields, followed by an increase of PL to saturation (#1 and #5, which we label group I). In the second group, the PL falls monotonically with magnetic field, showing up to a 30% change in PL (#2 – #4, which we label group II).
We proceed under the initial assumption that these GaN defect groups have a spin-dependent PL mechanism similar to that of the diamond nitrogen-vacancy center, in which a spin-state-dependent intersystem crossing can occur from the excited states to a metastable state (Fig. <ref>(e)), which ultimately creates spin-dependent PL contrast. <cit.> It is also possible to obtain spin-dependent PL even if the ground and excited states are spin singlets or doublets if there is spin-dependent relaxation from a S ≥ 1 metastable states (Fig. <ref>(f)).<cit.> In both cases, the precise spin contrast results from a competition between radiative and non-radiative relaxation rates, branching ratios, and optical pumping rates. While the full characterization of the GaN defect optical cycle is beyond the scope of this work, we re-examine some details of the spin-dependent optical cycle below.
Regardless of the specific mechanism of spin-dependent PL contrast, magneto-PL originates from Zeeman-induced spin state mixing between spin states with different average PL rates.
This mechanism is relevant to systems with electronic spin S≥1.
The ground state Hamiltonian of a spin system with S≥1 in a magnetic field B is given by
ℋ = D (S_z^2 - 1/3 S(S+1)) + E (S_x^2-S_y^2) + g μ_B S·B,
where S is the electronic spin operator, g the electronic g-factor, μ_B the Bohr magneton, D and E together the zero-field interaction parameters.
An angle between the external field B and the spin quantization axis introduces large off-diagonal matrix elements between the spin eigenstates, mixing them.
The spin eigenstates can also mix at low fields if E ≠ 0.
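As an illustration of how Eqn. <ref> determines the allowed transition frequencies, the following Python sketch (our own, not the fitting code used for the data) diagonalizes the S=1 Hamiltonian for a field applied along the z-axis, using D ≈ E ≈ 389 MHz, the values fitted to defect #1 later in the text, and g = 2.

```python
import numpy as np

# Minimal sketch (our illustration): diagonalize the S = 1 Hamiltonian of
# Eqn. (1), H = D(Sz^2 - S(S+1)/3) + E(Sx^2 - Sy^2) + g*muB*B.S, for a field
# along z, and print the two transition frequencies out of the lowest level.
# D = E = 0.389 GHz are the values fitted to defect #1; g = 2 is assumed.
g_muB = 2.8025e-3    # electron gyromagnetic ratio, approx. 2.8 MHz/G, in GHz/G
D, E = 0.389, 0.389  # zero-field parameters in GHz

# spin-1 operators in the |+1>, |0>, |-1> basis
Sx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]) / np.sqrt(2)
Sy = np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]]) / np.sqrt(2)
Sz = np.diag([1.0, 0.0, -1.0])

def transition_freqs(B_gauss):
    H = (D * (Sz @ Sz - (2 / 3) * np.eye(3))
         + E * (Sx @ Sx - Sy @ Sy)
         + g_muB * B_gauss * Sz)
    ev = np.sort(np.linalg.eigvalsh(H))
    return ev[1] - ev[0], ev[2] - ev[0]          # in GHz

for B in (0, 250, 500, 1000, 1500):
    f1, f2 = transition_freqs(B)
    print(f"B = {B:5d} G :  {f1:.3f} GHz, {f2:.3f} GHz")
```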
Returning to the magneto-PL measurements of Fig. <ref>(d), the group-I magneto-PL response suggests a Zeeman-induced spin degeneracy at low magnetic fields, suggesting S≥ 1 with a value of D of only a few hundred megahertz, depending somewhat on the direction of the magnetic field. Additionally, it suggests that optical pumping puts defects in this group into a state with higher PL, while spin mixing reduces the overall PL; a situation similar to diamond NV centers. While group II defects also must have S≥ 1, in contrast to group I defects, the magneto-PL is monotonically decreasing. This could be explained by a very large misalignment angle of the magnetic field with respect to the defect symmetry axis, or by an opposite contrast of PL, where the optically polarized state has lower PL than the spin-mixed state.
Having confirmed that both groups of individual defects have spins with S≥1 and a spin-dependent optical cycle, we study the spin-resonant transitions and spin Hamiltonian by measuring continuous wave (cw-) ODMR.
To study the spin resonance, we continuously drive a microwave magnetic field, optically pump the defect optical transition, and count the emitted PL.
Figure <ref> shows the resulting cw-ODMR traces at B=1 kG for a group-I (#1) and a group-II (#2) defect.
We immediately notice that the two groups have an opposite sign of ODMR contrast, as suggested by the magneto-PL, with group-I defects showing negative cw-ODMR contrast and group-II defects showing positive cw-ODMR contrast.
We also notice that the group-I defect has a modest contrast of ∼2% at this driving power, while the group-II defect has a ∼30% contrast for one of the three resonance features, with smaller contrast for two other features.
The resonances are each well-fit by Lorentzian lineshapes.
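The Lorentzian fits can be reproduced with standard least-squares routines; the following Python sketch (our own illustration on a synthetic trace) extracts the contrast, center frequency, and linewidth of a single resonance.

```python
import numpy as np
from scipy.optimize import curve_fit

# Minimal sketch (our illustration): fit a single ODMR resonance with a
# Lorentzian lineshape C(f) = A * (w/2)^2 / ((f - f0)^2 + (w/2)^2) + offset,
# as used to extract contrast, center frequency, and linewidth.
def lorentzian(f, A, f0, w, offset):
    return A * (w / 2) ** 2 / ((f - f0) ** 2 + (w / 2) ** 2) + offset

# synthetic trace standing in for a measured normalized PL contrast curve
rng = np.random.default_rng(4)
f = np.linspace(2.6, 3.0, 201)                     # microwave frequency (GHz)
truth = lorentzian(f, A=0.30, f0=2.80, w=0.02, offset=0.0)
data = truth + 0.01 * rng.standard_normal(f.size)

popt, pcov = curve_fit(lorentzian, f, data, p0=[0.2, 2.79, 0.03, 0.0])
print("contrast, center (GHz), FWHM (GHz):", popt[0], popt[1], popt[2])
```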
A key input for establishing the identity of a new defect is its spin quantization axis. Having discovered a reliable cw-ODMR signal on multiple GaN single defects, we now make the assumption that the cw-ODMR contrast will be largest when we align the external magnetic field along the z-axis defined by Eqn. <ref>.
A misaligned static field will mix the spin eigenstates, which will reduce the cw-ODMR contrast if the fluorescence contrast mechanism is tied to |m_s| as it is for the diamond NV center. To test this we systematically vary the polar angle θ with respect to the c-axis of the crystal, and then the azimuthal angle ϕ, which is measured with respect to the a-lattice vector of GaN.
Fig. <ref>(c) and (d) show the ODMR contrast for defect #1 and #2, respectively, as a function of θ, while the corresponding data as a function of ϕ is shown in Fig. <ref> in the supplementary materials. We find that
the spin quantization axis for the group-I defect #1 forms a ∼27-degree angle with the GaN crystal c-axis, with an in-plane component points along the a-axis.
For the group-II defect #2 we find a spin quantization axis approximately 10 degrees away from the c-axis, and an in-plane component along the a-axis.
Neither spin quantization axis matches a vector between a lattice site and its nearest few neighbors, suggesting the involvement of interstitial atoms (Fig. <ref>(e)-(f)).
Now we study the Zeeman effect on the spin levels. First, we align our set-up so that B⃗ is parallel to the direction of the largest ODMR contrast discussed above and record ODMR as a function of B. Under these conditions we assume B = B_Z from Eqn. <ref>.
Figure <ref> shows the resulting cw-ODMR data from defect #1 (group I) and defect #2 (group II) from 100 G to 1500 G.
The most visible spin resonances disperse with a g-factor g=2, confirming that we study electronic spins.
First focusing on defect #1, we see two transitions of unequal contrast that appear at B ≳ 250 G.
The lack of cw-ODMR contrast at low magnetic fields suggests a mixing between the spin eigenstates that leads to the suppression of spin contrast.
If we assume a minimum spin multiplicity to explain the two transitions, S=1, then this data can be described by Eqn. <ref> with D≈ E≈ 389 MHz. An overlay of the fitted spin transitions is shown in the supplementary materials (Fig.<ref>(a)).
Under these conditions at low magnetic fields, the zero-field spin eigenstates would indeed be strongly mixed, thus suppressing spin-dependent optical contrast. We note, however, that this scenario does not explain why the two transitions have unequal contrast, which may relate to dynamics of the optical cycle that have not been revealed by these measurements. Additionally, we find that the model deviates from the data at the lowest magnetic fields, which may point to other physics not contained in a toy model of a single electronic spin-1. For example, the Ga and N atoms that surround the defect all have a nonzero nuclear spin, which may interact very strongly with this defect and thus potentially explain a deviation from a simple electronic model. Additionally, we note that group-I defects are rare compared to those in group II. While we observed magneto-PL for two defects in this group, one of those stopped being optically active, and thus defect #1 is the only group-I defect that we have been able to record ODMR. More information can be found in the supplementary information.
Next we examine the field-dependent cw-ODMR of defect #2, which has the same cw-ODMR spectrum as all of the group-II defects that we studied. Data for other defects can be found in Fig. <ref> in the supplementary information. This defect shows three spin transitions that disperse with g=2, making spin S = 3/2 a minimal model assuming that there is an ODMR contrast mechanism for all Δ m_s = 1 transitions. Again we note that the three transitions have unequal contrast. The strongest cw-ODMR feature extrapolates to zero frequency at zero field within experimental uncertainty, suggesting that it is due to a transition between |m_s = -1/2⟩ and |m_s = +1/2⟩ in this picture.
In addition to the g=2 resonance, we also see a 4th resonance that disperses with g=4. Additionally, at B∼ 300 G
and f_mw=1.5 GHz, this feature appears to have an avoided-crossing with the highest frequency g=2 spin resonance.
Although a g=4 resonance can be explained by a Δ m_s = 2 spin transition, that scenario does not give rise to an avoided-crossing, suggesting that a toy electronic model based on Eqn. <ref> is insufficient to describe this spin system if the magnetic field is aligned along the symmetry axis. If we ignore the g = 4 resonant line, these transitions are well-described for B>0.5 kG by a S=3/2 model with D = 368 MHz and E = 0.
Finally, we return to the question of whether the spins associated with these defects are in the ground-state and excited-state manifold as in the case of the diamond NV center (Fig. <ref>(e)), or whether they are associated with a metastable state (Fig. <ref>(f)) as in the case of the diamond ST1 defect.<cit.>
To clarify that assignment, we perform both pulsed ODMR and time-resolved single photon counting experiments with separate microwave spin manipulation and optical excitation.
The pulse timings are detailed in the supplementary information.
If we manipulate a ground-state spin, then we expect the pulsed ODMR scheme shown in Fig. <ref>(a) to result in a visible spin resonance. However, if there is no contrast, then we can assign the cw-ODMR response to a metastable spin state.
We start with pulsed ODMR of defect #1 from group I, shown in Fig. <ref>(b). We observe no spin resonance response, with the noise floor of our integration at the level of 0.2%. Comparing this figure to the ∼2% contrast that we observed for cw-ODMR, we conclude that this defect likely has a ground-state/excited-state singlet or a ground-state doublet with no ODMR contrast. Thus, the S≥ 1 spin state that gives rise to cw-ODMR must reside in a metastable state. We confirm our conclusion that ground-state microwave preparation has no impact on the PL using direct time-resolved single photon counting (Fig. <ref>(c)). We see that after turning on the laser, defect #1 has a microsecond-timescale reduction in PL as a function of time, however, we note no difference between the curves generated by a laser pulse alone and a microwave pulse followed by a laser pulse. These data, along with measurements of g^(2) of this defect that shows photon bunching (see the supplementary information), support the existence of a metastable state, and are consistent with the picture of a S≥ 1 metastable state. Further work will be necessary to pin down all the rates in the optical cycle; however, these measurements all point to an optical cycle like that schematically shown in Fig. <ref>(f).
Next we repeat this series of measurements on a member of group II, defect #2, and find the opposite result in Fig. <ref>(d). Here we find visible pulsed ODMR contrast, confirming that the cw-ODMR measurements are the result of a ground-state spin. Interestingly, while the pulsed ODMR contrast is lower than the cw-ODMR owing to different details of the measurement protocol, we see that the same ratio of contrast between the three Δ m_s = 1 transitions is preserved. This suggests that there are non-trivial spin-dependent intersystem crossing rates, and in particular they are not proportional to m_s as in the case of diamond NV centers. We perform the time-resolved PL measurement of defect #2 as before, with the microwave pulse tuned to the largest-contrast resonance. As expected, we see a noticeably larger initial PL response when we manipulate the ground-state spin before the laser pulse than when we do not, with a contrast lasting for ∼2 μs. While this experiment does not establish all the details of the optical cycle and spin, it is consistent with a level diagram and dynamics as shown in Fig. <ref>(e).
In conclusion, we report high-contrast optically detected spin resonance of GaN single defect spins at room temperature.
We find two distinct defect groups that we categorize based on their magneto-PL and ODMR spectra. They display complex optical cycles and spin resonance behavior that will require further investigation to understand fully, however, this work establishes key facts of these defect groups. The first group has a small negative ODMR contrast, with spin at least S = 1 in its metastable state to explain the experimental results. The second group has a large (up to 30%) positive ODMR contrast, with a complicated ground-state spin Hamiltonian including at least S = 3/2. Additionally, through angle-dependent cw-ODMR measurements, we establish a spin quantization axis in terms of the magnetic field angle with the largest ODMR contrast.
The spin quantization axes of both groups do not connect neighboring GaN lattice sites, suggesting the involvement of interstitials. Beyond providing critical new clues to help identify these high performance single photon emitters, our findings are promising as the basis for magnetic sensing technologies using defect fluorescence in a mature optoelectronic semiconductor platform.
§ METHODS
Sample preparation.
We study a GaN sample commercially available from the Xiamen Powerway Advanced Material Co., Limited, China. A 4 μm-thick layer of GaN is grown on a 430 μm-thick sapphire wafer by hydride vapor phase epitaxy (HVPE). The GaN is Fe-doped to make it semi-insulating.
We pre-select GaN defects using our home-built scanning laser confocal microscope.
We check that the PL spectra of the defects are consistent with the ones previously reported<cit.> and verify that they are single photon emitters by measuring the photon auto-correlation g^(2).
GaN is a high-index material with n∼2.4, which leads to a low fraction of PL leaving the material. To enhance photon collection, we use focused-ion-beam milling to carve out a 4 μm-diameter hemisphere-shaped solid-immersion lens (SIL) on the pre-selected defects.
We conduct all measurements at room temperature.
Magneto-PL.
We use a 50.4 mm-diameter 50.4 mm-long cylindrical neodymium iron boron permanent magnet to apply magnetic fields to the sample. To adjust the magnetic field amplitude and direction, we move the magnet on a motorized translation stage, having calibrated the magnetic field against magnet position. The details of the magnet setup are described in the supplementary materials.
Continuous-wave ODMR (cw-ODMR).
To drive spin resonance, a copper microwire is lithographically patterned near the SILs containing the defects of interest.
The details of the microwave set-up are described in the supplementary materials.
We drive about 20 dBm of microwave power to induce the spin resonances and excite the defects with an optical power of 15–20 μW.
Pulsed measurements.
Figure <ref>(a) shows the pulse scheme in a measurement cycle for pulsed ODMR and time-resolved PL measurements.
The details of the timing can be found in the supplementary information Fig. <ref>.
In both schemes, we apply microwaves before we excite the defects for optical readouts, and
we turn off the laser for a sufficient time before applying microwaves again in the next cycle, allowing all populations to relax to the ground state.
Supplementary Figure <ref>(a) shows the timings of a cycle of pulsed ODMR measurement. After the optical pulse has been off for 3 μs, the microwave pulse in the next cycle is turned on for 2 μs and off 65 ns before the laser excitation. We read the PL for 2 μs after the microwave turns off and normalize it to the PL registered after the laser has repolarized the system for 8 μs. This sequence is designed to distinguish between a ground-state and a meta-stable state spin.
To measure time-resolved PL, we apply a microwave pulse tuned to the largest contrast resonance frequency for 1 μs before the laser turns on as depicted in Fig. <ref>(b), and we allow the system to relax for 1.5 μs before applying a microwave pulse again in the next cycle.
The optical detection is done by a time-correlated single-photon-counting (TCSPC) module that is triggered by a synchronization pulse when the laser turns on at t_L,on in each pulse cycle.
This way, we record the photon arrival times relative to the laser excitation time. The histogram of photon arrival times gives the time-resolved PL.
§ ACKNOWLEDGEMENTS
We thank Len van Deurzen, Debdeep Jena, and Huili Grace Xing for useful discussions and for supplying the GaN substrates. We thank Brendan McCullian, Nikhil Mathur, Anthony D'Addario, and Johnathon Kuan for very helpful discussions on the physics and microwave experiments. This work was supported by the Cornell Center for Materials Research (CCMR), an NSF Materials Research Science and Engineering Center (DMR-1719875). Preliminary work was supported by the NSF TAQS program (ECCS-1839196). We also acknowledge support through the Cornell Engineering Sprout program. This work was performed in part at the Cornell NanoScale Science & Technology Facility (CNF), a member of the National Nanotechnology Coordinated Infrastructure (NNCI), which is supported by the NSF (Grant NNCI-1542081)
Supplemental Materials:
Room temperature optically detected magnetic resonance of single spins in GaN
§ MEASUREMENT SETUP
Figure <ref>(a) shows our home-built scanning laser confocal microscope setup and Fig. <ref>(b) the microwave signal chain for driving the spin resonance in both continuous-wave-ODMR (cw-ODMR) and pulsed-ODMR.
We use a focused ion beam to carve out a solid-immersion-lens (SIL) around defects of interest. Using these structures, we are able to excite and collect the PL efficiently with modest laser power, around 15–20 μW.
We apply magnetic fields to the sample using a 50.4-mm-long, 50.4-mm-diameter permanent neodymium iron boron magnet mounted on a two-axis translation stage.
To precisely control the magnetic field angle at the defect location, we first align the symmetry axis of the cylindrical magnet to the optical axis of the setup and mount the GaN sample so that its c-axis coincides with the optical axis.
We can then tilt the magnetic field with respect to the sample c-axis by translating the magnet along the x-axis as shown in Fig. <ref>. This angle is labeled as the polar angle θ. When measuring the θ dependence of the ODMR contrast, we also translate the magnet along the z-direction such that the magnetic field magnitude is maintained to within 100 G.
We rotate the sample to change the magnetic field projected onto the c-plane and we label the angle from this projection to the lattice crystal a-axis as the azimuth ϕ.
We calibrate the magnetic field at the sample position as a function of the magnet position using a Hall probe. The resulting measured field distribution closely matches a calculation of the magnetic field from a cylindrical magnet of these dimensions.
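As an illustration of that comparison, a minimal Python sketch of the on-axis field of a uniformly magnetized cylinder is given below. The remanence value and the evaluation distances are assumed placeholders rather than calibrated parameters, and the full calibration map would additionally require the off-axis field.

import numpy as np

def cylinder_axial_field(z, length=50.4e-3, radius=25.2e-3, B_r=1.3):
    # On-axis field (tesla) of a uniformly magnetized cylinder at a distance z
    # (meters) from its face; B_r is the remanence (assumed value here).
    z = np.asarray(z, dtype=float)
    return 0.5 * B_r * ((z + length) / np.sqrt((z + length)**2 + radius**2)
                        - z / np.sqrt(z**2 + radius**2))

# Field in gauss at a few illustrative magnet-sample distances
for z in (5e-3, 20e-3, 50e-3):
    print(z, 1e4 * cylinder_axial_field(z))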
To apply microwave magnetic excitation, we lithographically print a shorted coplanar waveguide on the surface of GaN.
The short consists of a copper wire with a 1 μm-side square cross-section (see Fig. <ref>c). The wire is patterned about 7 μm away from the center of the SILs to avoid the SIL structure.
§ OPTICAL PROPERTIES OF DEFECTS STUDIED
Figure <ref> shows the PL images and the corresponding PL spectra of the defects studied in this work. The linewidths range from 3 nm to 10 nm at room temperature, with most of the photon emission in the zero-phonon line.
Figure <ref> shows the photon auto-correlation g^(2) measurements of defects #1–4. We expect that all of the defects we investigated are single defects. Defects #2–4 all display g^(2)(0) < 0.5, which is strong evidence that they are in fact single photon emitters. The g^(2)(0) of defect #1 does not dip below 0.5. However, we note that the decay constant associated with the central dip of g^(2) is 350 ps, which means that in the presence of the APD time jitter, also about 350 ps, g^(2)(0) is limited by the instrument response, and thus this measurement is consistent with our assignment of defect #1 as a single emitter.
Defect #5 does not appear in measurements beyond the magneto-PL because it ceased to be optically active, through photo-bleaching or some other mechanism.
§ ANGLE DEPENDENT ODMR
Figure <ref>(a) and (b) show the angle dependencies of defect #1.
The ODMR signals rise above our measurement sensitivity only when θ and ϕ lie within a small window.
For defect #2, the dependencies are weaker. We find an ODMR signal maximum when the magnetic field points at θ=10^∘ from the c-axis and when its projection forms an angle ϕ=60^∘ with an a-axis of the crystal, as seen in Fig. <ref>(c) and (d), respectively.
The optimal directions for spin quantization axes are visualized in the main text. Neither connects a lattice site to its nearest few neighbors.
§ CW-ODMR AS A FUNCTION OF MAGNETIC FIELD
Figure <ref> shows cw-ODMR traces as a function of the magnetic field, which is aligned with the angle of highest ODMR contrast for each defect. For these measurements, the magnetic field is re-calibrated in this position, and the magnet is then moved only along the axial direction to change the magnetic field amplitude but not its direction during the measurement.
These data show two groups of spin resonance responses, as discussed in the main text.
Defect #1 shows positive contrast with only two transition frequencies. We label it group-I because it shares the same magneto-PL behavior as defect #5.
We fit the group-I defect #1 data with a spin-1 model, which results in a pair of commensurate axial and transversal zero field splitting parameters, D≈ E≈ 389 MHz.
The minimal model deviates from the experimental data at low field (B<0.4 kG) where complicated dynamics and spin mixing are not captured by the Hamiltonian discussed in the main text.
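For concreteness, the spin-1 fits above presumably refer to a zero-field-splitting Hamiltonian of the standard form (written here as an assumption, since the main-text Hamiltonian is not reproduced in this supplement),
H = D( S_z^2 - S(S+1)/3 ) + E( S_x^2 - S_y^2 ) + gμ_B B·S,
with S = 1, D and E the axial and transverse zero-field-splitting parameters, and g ≈ 2 for the dispersing resonances.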
Defects #2–4 each show positive ODMR contrast, and have essentially the same spin structures within experimental uncertainties as seen from the fit lines shown in Fig. <ref>(b–d).
The group-II line fit is performed on the defect #2 data with a spin-3/2 model, accounting only for the spin resonances that disperse with g=2. The fit results in zero field splitting parameters of E=0 and D=369 MHz. Once we obtained the fit to the defect #2 data, we simply overlay it on the data of defects #3 and #4.
We note that they do not all have the same contrast, which could be due to differences in their local environments.
§ TIMINGS FOR PULSED MEASUREMENTS
Figure <ref> shows the details of the pulse timing used in pulsed ODMR and time-resolved PL measurement.
§ PHOTOSTABILITY OF GAN DEFECTS
Some GaN defects are photostable over long periods, while others are not.
For example, defect #1 has shown the same PL spectrum for more than a year, whereas defect #2 has shown discrete changes in its PL spectrum after a few weeks of study, as seen in Fig. <ref>(a).
We note that defect #2 could change into and out of a particular variant on a timescale of tens of minutes to hours.
Defect #5 photobleached after a few hours of 20 μW laser excitation.
ODMR contrast changes accompany the PL spectra changes observed in defect #2, although the ODMR transition frequencies and Hamiltonian appear not to change.
We are able to capture some of this behavior by monitoring the PL spectra during the ODMR measurements.
We find that the θ dependence of the ODMR signal on the magnetic field does not change significantly, as seen in Fig. <ref>(b,c).
It is possible that the laser excitation traps or removes charges in the defect's local environment, and the charge environment changes the PL spectra but not the spin structure.
|
http://arxiv.org/abs/2306.02305v1
|
20230604090303
|
Information-Theoretic Limits on Compression of Semantic Information
|
[
"Jiancheng Tang",
"Qianqian Yang",
"Zhaoyang Zhang"
] |
cs.IT
|
[
"cs.IT",
"math.IT"
] |
Information-Theoretic Limits on Compression of Semantic Information
Jiancheng Tang, Qianqian Yang2, Zhaoyang Zhang
College of information Science and Electronic Engineering, Zhejiang University, Hangzhou 310007, China
Email: {jianchengtang,qianqianyang202,ning_ming}@zju.edu.cn
*This work is partly supported by the SUTD-ZJU IDEA Grant (SUTD-ZJU (VP) 202102), and partly by the Fundamental Research Funds for the Central Universities under Grant 2021FZZX001-20.
July 31, 2023
As conventional communication systems based on classic information theory have closely approached the limits of Shannon channel capacity, semantic communication has been recognized as a key enabling technology for the further improvement of communication performance. However, it remains unsettled how to represent semantic information and characterise its theoretical limits. In this paper, we consider a semantic source which consists of a set of correlated random variables whose joint probabilistic distribution can be described by a Bayesian network. We give the information-theoretic limit on the lossless compression of the semantic source and introduce a low-complexity encoding method that exploits the conditional independence. We further characterise the limits on lossy compression of the semantic source and the corresponding upper and lower bounds of the rate-distortion function. We also investigate the lossy compression of the semantic source with side information at both the encoder and decoder, and obtain the rate-distortion function. We prove that the optimal code of the semantic source is the combination of the optimal codes of each conditionally independent set given the side information.
Semantic communication, rate distortion, semantic compression.
§ INTRODUCTION
Classical information theory (CIT), established by Shannon in 1948, is the cornerstone of modern communication systems. Concentrating on accurate symbol transmission while ignoring the semantic content of communications, Shannon defined the information entropy based on the probabilistic distribution of symbols to measure the amount of information quantitatively <cit.>, based on which the theoretical limits on source compression and channel capacity are characterised. With the development of digital communications over the past 70 years, existing communication techniques, such as polar codes and multiple-input multiple-output (MIMO) systems, have pushed current communication systems close to the Shannon capacity<cit.><cit.>. To further improve the communication efficiency in order to meet ever-growing demands, semantic-oriented communication has attracted a lot of research interest lately and is widely recognized as a promising approach to overcome the Shannon limits <cit.>.
Different from traditional communication approaches, semantic communication systems transmit only the semantic or task-relevant information while removing redundancy to improve transmission efficiency<cit.>. Semantic-oriented communication methods have been implemented based on deep learning techniques for the efficient transmission of image <cit.>, text <cit.>, video <cit.> and speech signals <cit.>. These methods have been shown to achieve higher transmission efficiency than conventional methods for the specific tasks they are designed for. Despite this success, the design of semantic communication systems still lacks theoretical guidance.
Research on semantic information theory dates back to about the time when classical information theory was proposed. In one of the few early works<cit.>, Carnap and Bar-Hillel proposed to use propositional logic sentences to represent semantic information. The semantic information entropy is calculated based on logical probabilities <cit.>, instead of the statistical probabilities used in classical information theory. Bao et al.<cit.> further extended this theoretical work and derived the semantic channel capacity of the discrete memoryless channel based on propositional logic probabilities. De Luca et al. <cit.> represented semantic information by fuzzy variables and introduced fuzzy entropy to measure the uncertainty of fuzzy variables. However, neither propositional logic nor fuzzy variables are expressive enough to describe the semantic information of the complex data in today's applications.
Recently, Liu et al. proposed a new source model in which the semantic information is viewed as an intrinsic part of the source that is not observable but can be inferred from the extrinsic state<cit.>. They characterised the semantic rate-distortion function of this source model through classical indirect rate-distortion theory. Similarly, Guo et al. also modeled the semantic information as the unobservable information in a source, and characterised the theoretical limits on the rate-distortion problem with side information <cit.>. In <cit.>, the authors argued that the design of a semantic language that maps meaning to messages is essentially a joint source-channel coding problem and characterised the trade-off between the rate and a general distortion measure. These works have shed light on developing a generic theory of semantic communication. However, the inner structure of semantic information remains unexplored.
In this paper, we consider a semantic source as a set of correlated semantic elements whose joint distribution can be modeled by a Bayesian network (BN). We characterise the information-theoretic limits on the lossless and lossy compression of semantic sources and derive lower and upper bounds on the rate-distortion function. We further study the lossy compression problem with side information at both sides and prove the optimality of separately compressing each conditionally independent set of variables given the side information. We derive the conditional rate-distortion functions when the semantic elements are binary or Gaussian distributed.
The organization of the rest of the paper is as follows: we introduce the semantic source in Section II. In Section III, we discuss information-theoretic limits on the compression of the semantic source. In Section IV, we study the problem of lossy compression with two-sided side information. In Section V, we conclude the paper.
§ SEMANTIC SOURCE MODEL AND SEMANTIC COMMUNICATION SYSTEM
In this paper, we assume that a semantic source consists of a set of correlated semantic elements whose joint probabilistic distribution is modeled by a BN. BNs have been widely used in semantic analysis and understanding of various types of data <cit.>. For example, Luo et al. proposed a scene classification method for images in which the semantic features are represented by a set of correlated semantic elements <cit.>.
An image and the BN model of its semantic elements are shown in Fig. <ref>(a) and Fig. <ref>(b), where each node in Fig. <ref>(b) represents a semantic element. The conditional dependence relations among the semantic elements are obtained from expert knowledge, and the conditional probability matrices (CPMs) of each node are obtained by using the frequency counting approach on an image dataset. For example, the semantic features sky and grass are extracted by an object detection algorithm and used as evidence to determine the scene category. In particular, the image is classified as outdoor when the posterior probability of the root node is larger than a predefined threshold. In addition to image processing, BNs have also been widely applied to represent semantic relations in other types of data, such as text <cit.> and videos <cit.>.
The BN-enabled semantic communication framework is shown in Fig. <ref>, which consists of four phases: a) semantic extraction and representation, b) semantic compression, c) semantic transmission, and d) original data recovery. In this paper, we assume that the conventional information source has been converted into the semantic source by using modern deep learning techniques in the semantic extraction and representation phase. Our focus is on the semantic compression phase, where we provide information-theoretic limits for compressing the semantic source.
§ SEMANTIC COMPRESSION FOR CORRELATED SEMANTIC ELEMENTS
In this section, we present the theoretical limits on lossless and lossy compression of semantic sources, i.e., sets of correlated semantic elements whose correlations are modeled by BNs. We consider an m-variable semantic source { X_1, X_2,...,X_m } whose joint probabilistic distribution is modeled by a BN. We assume that the m variables are sorted according to their causal relations, i.e., a child node variable always follows its parent node variables.
Theorem 1. (Lossless Compression of Semantic Sources) Given an m-variable source { X_1, X_2,...,X_m } with entropy H( X_1, X_2,...,X_m ), for any code rate R > H( X_1, X_2,...,X_m ) there exists a lossless source code for this source.
Proof. The proof of Theorem 1 easily follows from the proof of Shannon's first theorem <cit.>, which is omitted here.
Example 1. Consider a set of variables ( X_1,X_2,X_3) whose correlations can be described by a BN as shown in Fig. <ref>. We can find a lossless code to represent the source ( X_1,X_2,X_3) at any code rate R > - 3plog p - 3( 1 - p)log( 1 - p).
Proof: the entropy of the source ( X_1,X_2,X_3) is given by
H(X_1,X_2,X_3)=H(X_1)+H(X_2| X_1)+H(X_3| X_1).
The term H( X_1) can be computed as
H(X_1)=-p log p-(1-p) log (1-p),
and the conditional entropy can be written as
H( X_2|X_1) = H( X_3|X_1)
= pH( p,1 - p) + ( 1 - p)H( p,1 - p)
= - plog p - ( 1 - p)log( 1 - p).
Therefore, we have
H( X_1,X_2,X_3) = - 3plog p - 3( 1 - p)log( 1 - p).
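The factorization can be checked numerically. Below is a minimal Python sketch that brute-forces H(X_1,X_2,X_3) for one concrete parameterization consistent with the conditional entropies above, namely X_2 and X_3 taken as copies of X_1 flipped independently with probability p; this particular noise model is an illustrative assumption, not something specified by Fig. <ref>.

import numpy as np
from itertools import product

def entropy(probs):
    probs = np.asarray([q for q in probs if q > 0])
    return -np.sum(probs * np.log2(probs))

def joint_entropy(p):
    # Brute-force H(X1, X2, X3) for the BN X2 <- X1 -> X3, with X1 ~ Bern(p)
    # and X2, X3 equal to X1 flipped independently with probability p.
    joint = []
    for x1, x2, x3 in product([0, 1], repeat=3):
        p1 = p if x1 == 1 else 1 - p
        p2 = p if x2 != x1 else 1 - p   # P(X2 | X1)
        p3 = p if x3 != x1 else 1 - p   # P(X3 | X1)
        joint.append(p1 * p2 * p3)
    return entropy(joint)

p = 0.3
print(joint_entropy(p))            # approx. 2.64 bits for p = 0.3
print(3 * entropy([p, 1 - p]))     # equals -3p log p - 3(1-p) log(1-p), same value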
Remark 1. By utilizing the conditional independence property of BN, the entropy of this source H( X_1, X_2,...,X_m ) can be written as
H(X_1,X_2, … ,X_m) =∑_i = 1^m H( X_i|X_i - 1,...,X_1)
=∑_i = 1^m H( X_i|Parent(X_i)),
where Parent(X_i) denotes the set of parent variables of X_i. For an m-variable source, the rate of jointly coding all the variables is never larger than that of compressing each variable separately, because the entropy of the m-variable source is given by
H(X_1,X_2, … ,X_m) = H(X_1) + H(X_2|X_1) + H(X_3|X_1,X_2) + ... + H(X_m|X_m-1,...,X_1),
while
∑_i = 1^m H( X_i) - H(X_1,X_2, … ,X_m) = ∑_i = 2^m I( X_i;Parent(X_i)) ⩾ 0.
Remark 2. If we directly compress samples generated by the source { X_1, X_2,...,X_m } with Huffman encoding, the time complexity is 𝒪( k^mlogk^m), where k is the maximal number of states of each variable. The computational overhead of the sorting step becomes infeasible when m is large. We can utilize the conditional relations between parent and child variables to significantly reduce the complexity of Huffman coding. Specifically, we can iteratively sort and encode the samples starting from the root node. For each node, we then utilize the conditional probability with respect to its parent nodes to sort and encode its samples. This avoids a time complexity that increases exponentially with the number of nodes. We assume the maximum number of parent variables among { X_1, X_2,...,X_m } is L (L is usually much smaller than m). Then the number of symbols to be coded in each step is limited by mk^L. In this way, the overall complexity is reduced from 𝒪( k^mlogk^m) to 𝒪( mk^Llogk^L).
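A minimal Python sketch of one way to realize this per-node coding scheme is given below. The data structures (a dict of parents and a dict of conditional probability tables) and the function names are illustrative assumptions rather than notation from the paper; the point is only that one small Huffman code is built per node and per parent configuration, cf. Remark 2.

import heapq
from itertools import count

def huffman_code(probs):
    # Standard Huffman code for a dict {symbol: probability}.
    if len(probs) == 1:
        return {next(iter(probs)): "0"}
    tiebreak = count()
    heap = [(p, next(tiebreak), {s: ""}) for s, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p0, _, c0 = heapq.heappop(heap)
        p1, _, c1 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c0.items()}
        merged.update({s: "1" + w for s, w in c1.items()})
        heapq.heappush(heap, (p0 + p1, next(tiebreak), merged))
    return heap[0][2]

def bn_codebooks(nodes, parents, cpt):
    # One Huffman code per node and per parent configuration.
    # nodes: topologically ordered node names; parents[v]: tuple of parents;
    # cpt[v][parent_config]: dict {state: probability}.
    return {v: {cfg: huffman_code(dist) for cfg, dist in cpt[v].items()}
            for v in nodes}

def encode_sample(sample, nodes, parents, codebooks):
    # Concatenate per-node codewords, each selected by the parents' realized states.
    bits = ""
    for v in nodes:
        cfg = tuple(sample[u] for u in parents[v])
        bits += codebooks[v][cfg][sample[v]]
    return bits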
Theorem 2. (Lossy Compression of Semantic Sources)
The rate-distortion function of an m-variable semantic source whose joint distribution can be modeled by a BN, with distortions D_1,D_2,...,D_m, is given by
R_X_1,X_2,...,X_m(D_1,D_2,...,D_m) = min_{p(x̂_1,...,x̂_m|x_1,...,x_m): Ed(x̂_i,x_i) ⩽ D_i, i=1,...,m} I(X_1,X_2,...,X_m;X̂_1,X̂_2,...,X̂_m).
If R > R_X_1,X_2,...,X_m(D_1,D_2,...,D_m), there exists a lossy source code for this m-variable source at rate R with distortions not exceeding D_1,D_2,...,D_m.
Proof. This can be proved using a straightforward extension of Shannon's work <cit.>.
Lemma 1. For an m-variable semantic source whose joint distribution can be modeled by a BN, the rate-distortion function R_X_1,X_2,...,X_m(D_1,D_2,...,D_m) can be bounded by
∑_i = 1^m R_X_i( D_i) ⩾ R_X_1,X_2,...,X_m(D_1,D_2,...,D_m) ⩾ ∑_i = 1^m R_X_i|Parent(X_i)( D_i),
where R_X_i|Parent(X_i)( D_i) denotes the conditional rate-distortion function
R_X_i|Parent(X_i)( D_i) = min_{p(x̂_i|x_i,Parent(x_i)): Ed(x̂_i,x_i) ⩽ D_i} I(X_i;X̂_i|Parent(X_i)),
with
Ed(x̂_i,x_i) = ∑_{x_i,x̂_i,Parent(x_i)} p(x̂_i|x_i,Parent(x_i)) p(x_i,Parent(x_i)) d(x̂_i,x_i).
Proof. The upper bound in (<ref>) is a straightforward extension of the upper bound of Wyner and Ziv <cit.>, the proof of which is omitted here. The proof of the lower bound is given in Appendix I.
Remark 3. The upper bound in Lemma 1 indicates that the rate required to reconstruct all the variables within the given fidelity is never larger than that of reconstructing each variable separately. The joint rate-distortion function of m variables may be infeasible to compute when m is large. The lower bound in (<ref>) suggests that we can use the summation of conditional rate-distortion functions to guide the design of lossy source coding instead.
For the upper bound, the mutual information in (<ref>) can be bounded as
I( X_1,...,X_m;X̂_1,...,X̂_m)
= H( X_1,...,X_m) - H( X_1,...,X_m|X̂_1,...,X̂_m)
⩽∑_i = 1^m H( X_i) - H( X_1,...,X_m|X̂_1,...,X̂_m)
= ∑_i = 1^m H( X_i) - ∑_i = 1^m H( X_i|X̂_i)
= ∑_i = 1^m I( X_i;X̂_i) ,
where (<ref>) is obtained according to (<ref>) and (<ref>) is a straightforward extension of Wyner's work <cit.>. Then, we have
R(D_1,D_2,...,D_m) = min_{p(x̂_1,...,x̂_m|x_1,...,x_m): Ed(x̂_i,x_i) ⩽ D_i, i=1,...,m} I(X_1,X_2,...,X_m;X̂_1,X̂_2,...,X̂_m)
⩽ min_{p(x̂_1,...,x̂_m|x_1,...,x_m): Ed(x̂_i,x_i) ⩽ D_i, i=1,...,m} ∑_i = 1^m I( X_i;X̂_i)
= ∑_i = 1^m R_X_i( D_i).
§ LOSSY COMPRESSION OF CORRELATED SEMANTIC ELEMENTS WITH SIDE INFORMATION
In semantic communications, the sender and receiver always have access to some background knowledge about the communication contents. This background knowledge can be used as side information to aid the compression of the intended messages. In this section, we study the compression of correlated semantic elements when side information exists at both the sender and receiver. We further evaluate the corresponding rate-distortion function when the semantic elements follow a binary or a multi-dimensional Gaussian distribution, respectively.
Theorem 3. (Compression With Side Information)
Given bounded distortion measures d_1: 𝒳_1 ×𝒳̂_1 →ℛ^+, ..., d_m: 𝒳_m ×𝒳̂_m →ℛ^+, where ℛ^+ denotes the set of nonnegative real numbers, suppose some variable, denoted by Y, is observed and revealed to the encoder and decoder as side information. The rate-distortion function for compressing the remaining variables X_1, X_2,...,X_m is then given by
R_X_1,...,X_m|Y(D_1,...,D_m) = min_{p(x̂_1,...,x̂_m|x_1,...,x_m,y): Ed(x̂_i,x_i) ⩽ D_i, i=1,...,m} I(X_1,...,X_m;X̂_1,...,X̂_m|Y).
Proof: We first prove the achievability of Theorem 3 by showing that for any rate R ≥R_X_1,...,X_m|Y(D_1,...,D_m), there exists a lossy source code with the rate R and asymptotic distortion (D_1,...,D_m). Let p(x̂_1,...,x̂_m|x_1,...,x_m,y) be the conditional probability that achieves equality in (<ref>) and satisfies the distortion requirements, i.e., Ed(x̂_1,x_1) ⩽D_1,…, Ed(x̂_m,x_m) ⩽D_m.
Generation of codebook: Randomly generate a codebook 𝒞 with the help of the side information Y. The codebook 𝒞 consists of 2^nR sequence tuples (x̂_1,...,x̂_m)^n drawn i.i.d. according to p(x̂_1,...,x̂_m|y), where p(x̂_1,...,x̂_m|y) = ∑_x_1,...,x_mp(x_1,...,x_m|y)p(x̂_1,...,x̂_m|x_1,...,x_m,y). These codewords are indexed by w ∈{1,2,...,2^nR}. The codebook 𝒞 is revealed to both the encoder and decoder.
Encoding and Decoding: Encode the observed (x_1,...,x_m,y)^n by w if its indexed sequence (x̂_1,...,x̂_m)^n is distortion typical with (x_1,...,x_m,y)^n, i.e., (x_1,...,x_m,y,x̂_1,...,x̂_m)^n ∈ T_ϵ^n. If there is more than one such index w, choose the least. If there is no such index, let w=1. After obtaining the index w, the receiver chooses the codeword (x̂_1,...,x̂_m)^n indexed by w to reproduce the sequence.
Calculation of distortion: For an arbitrary codebook 𝒞 and any ϵ > 0, the sequences (x_1,...,x_m)^n ∈ (X_1,...,X_m)^n can be divided into two categories.
Case 1: sequences (x_1,...,x_m,y)^n that are distortion typical with a codeword (x̂_1,...,x̂_m)^n in the codebook 𝒞, i.e., d(x̂_1,x_1) < D_1+ϵ,..., d(x̂_m,x_m) < D_m+ϵ. Because the total occurrence probability of such sequences is at most 1, the expected distortions contributed by these sequences are no more than (D_1+ϵ,..., D_m+ϵ).
Case 2: sequences (x_1,...,x_m,y)^n for which there is no codeword in the codebook 𝒞 that is distortion typical with (x_1,...,x_m)^n. The total occurrence probability of such sequences is denoted by P_e. Since the distortions for (x_1,...,x_m)^n can be bounded by (d_max,1,...,d_max,m), the expected distortions contributed by these sequences are no more than (P_ed_max,1,...,P_ed_max,m), where the bounded distortion measure d_max,i is defined by
d_max,i ≜ max_{x_i ∈𝒳_i, x̂_i ∈𝒳̂_i} d(x_i, x̂_i) < ∞.
Hence the total distortions can be bounded as
Ed(x̂_1,x_1) ⩽D_1 + ϵ+ P_ed_max,1,
...
Ed(x̂_m,x_m) ⩽D_m + ϵ+ P_ed_max,m.
If P_e is small enough, the expected distortions are close to (D_1,..., D_m).
An error event occurs if there is no codeword that is distortion typical with the source message (x_1,...,x_m,y)^n as
ε = {( (X_1,...,X_m,Y)^n,(X̂_1,...,X̂_m)^n) ∉ T_ϵ^n, ∀ w ∈[ 1:2^nR] }.
Bounding P_e: The coding error probability can be bounded as
P_e = ∑_(x_1,...,x_m,y)^np( (x_1,...,x_m,y)^n )
· p{((x_1,...,x_m,y)^n,(X̂_1,...,X̂_m)^n_w) ∉ T_ϵ^n,∀ w ∈[ 1:2^nR]}
⩽ p{∏_w = 1^2^nR((x_1,...,x_m,y)^n,(X̂_1,...,X̂_m)^n_w) ∉ T_ϵ^n}
= ( 1 - p{( (x_1,...,x_m,y,X̂_1,...,X̂_m)^n ∈ T_ϵ^n)})^2^nR
⩽( 1 - 2^ - n[ I(X_1,...,X_n;X̂_1,...,X̂_n|Y) + δ (ϵ)])^2^nR,
where (<ref>) is obtained by applying the joint typicality theorem in <cit.>, (<ref>) follows from the fact that p( (x_1,...,x_m,y)^n ) is at most 1, and (<ref>) and (<ref>) are obtained through the properties of jointly typical sequences. We note that ( 1-z )^t⩽ e^( -tz ) for z∈[ 0,1 ] and 0⩽t, so (<ref>) can be rewritten as
P_e ⩽exp( 2^ - n[ R - I(X_1,...,X_n;X̂_1,...,X̂_n|Y) - δ (ϵ)]) ,
where δ (ϵ) → 0 as n →∞. We note that P_e goes to zero with n if R > I(X_1,...,X_n;X̂_1,...,X̂_n|Y) + δ (ϵ). This proves that the rate-distortion tuple (R, D_1,...,D_m ) is achievable if R > R(D_1,...,D_m ).
We then prove the converse of Theorem 3 by showing that for any source code meeting the distortion requirements (D_1,...,D_m), the rate R of the code must satisfy R ≥R_X_1,...,X_m|Y(D_1,...,D_m). We consider any (n, 2^nR) code with an encoding function f_n: (𝒳_1,...,𝒳_m, 𝒴)^n →{ 1,2,...,2^nR}. Then we have
nR ⩾ H( f_n( (X_1,...,X_m,Y)^n ))
⩾ H( f_n( (X_1,...,X_m,Y)^n )|Y^n)
⩾ H( f_n( (X_1,...,X_m,Y)^n )|Y^n)
- H( f_n( (X_1,...,X_m,Y)^n )|(X_1,...,X_m,Y)^n)
⩾ I( (X_1,...,X_m)^n;(X̂_1,...,X̂_m)^n|Y^n)
= I( (X_1,...,X_m,Y)^n;(X̂_1,...,X̂_m)^n)
- I( Y^n;(X̂_1,...,X̂_m)^n) ,
where (<ref>) follows from the fact that the number of codewords is 2^nR, (<ref>) is obtained by the fact that conditioning reduces entropy, (<ref>) is obtained by introducing a nonnegative term, (<ref>) follows from the property of data-processing, and (<ref>) follows from the property of conditional mutual information. By applying the chain rule of mutual information to (<ref>), we have
nR
⩾∑_i = 1^n I( X_1,i,...,X_m,i,Y_i;(X̂_1,...,X̂_m)^n|X_1^i - 1,...,X_m^i - 1,Y^i - 1)
- ∑_i = 1^n I( Y_i;(X̂_1,...,X̂_m)^n|Y^i - 1)
=
∑_i = 1^n H( X_1,i,...,X_m,i,Y_i|X_1^i - 1,...,X_m^i - 1,Y^i - 1)
- ∑_i = 1^n H( X_1,i,...,X_m,i,Y_i|(X̂_1,...,X̂_m)^n,X_1^i - 1,...,X_m^i - 1,Y^i - 1)
- ∑_i = 1^n H( Y_i|Y^i - 1) + ∑_i = 1^n H( Y_i|(X̂_1,...,X̂_m)^n,Y^i - 1)
= ∑_i = 1^n H( X_1,i,...,X_m,i,Y_i) - ∑_i = 1^n H( X_1,i,...,X_m,i,Y_i|(X̂_1,...,X̂_m)^n)
- ∑_i = 1^n H( Y_i) + ∑_i = 1^n H( Y_i|(X̂_1,...,X̂_m)^n)
= ∑_i = 1^n H( X_1,i,...,X_m,i,Y_i) - ∑_i = 1^n H( Y_i)
- ∑_i = 1^n H( Y_i|(X̂_1,...,X̂_m)^n) + ∑_i = 1^n H( Y_i|(X̂_1,...,X̂_m)^n)
- ∑_i = 1^n H( X_1,i,...,X_m,i|Y_i,(X̂_1,...,X̂_m)^n)
⩾∑_i = 1^n H( X_1,i,...,X_m,i|Y_i)
- ∑_i = 1^n H( X_1,i,...,X_m,i|X̂_1,i^,...,X̂_m,i^,Y_i)
= ∑_i = 1^n I( X_1,i,...,X_m,i;X̂_1,i^,...,X̂_m,i^|Y_i)
⩾∑_i = 1^n R( Ed_1( X_1,i,X̂_1,i^),...,Ed_m( X_m,i,X̂_m,i^))
⩾ nR( Ed_1( X_1^n,X̂_1^n),...,Ed_m( X_m,i^n,X̂_m^n))
⩾ nR( D_1,...,D_m),
where X_j^i - 1 denotes the sequence (X_j,1,...,X_j,i-1), (<ref>) follows from the definition of conditional mutual information. (<ref>) follows from the chain rule and the fact that the source is memoryless, i.e., (X_1,i,...,X_m,i,Y_i) and (X_1^i - 1,...,X_m^i - 1,Y^i - 1) are independent. (<ref>) is obtained by the fact that conditioning reduces entropy. And (<ref>) follows from the definition of R(D_1,...,D_m). This proves the converse of Theorem 3.
Lemma 2.
Given the observed variable Y, if an m-variable source can be divided into several conditionally independent subsets 𝒱_1,...,𝒱_l by the properties of the BN, then
R_𝒱_1,...,𝒱_l|Y(D_1,...,D_m) = ∑_i = 1^l R_𝒱_i|Y( D_j, j∈𝒱_i) ,
where the term R_𝒱_i|Y(D_j, j∈𝒱_i) is given by
R_𝒱_i|Y(D_j, j∈𝒱_i) = min_{p(v̂_i|v_i,y): Ed_j(x̂_j,x_j) ⩽ D_j, j∈𝒱_i} I(𝒱_i;𝒱̂_i|Y).
Proof: The proof of Lemma 2 is given in Appendix II.
Remark 4. Lemma 2 implies that if a set of semantic elements can be divided into several conditionally independent subsets by using the properties of the BN with side information Y, then compressing the source variable set jointly is the same as compressing these conditionally independent subsets separately, in terms of both the distortions and the rate. We note that the separate compression of conditionally independent subsets can significantly reduce the complexity of coding.
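One practical way to obtain such subsets (an assumption on our part; the paper does not prescribe an algorithm, and Lemma 2 only requires that the subsets exist) is to moralize the BN, delete the observed nodes, and take connected components. A minimal Python sketch using the networkx package:

import networkx as nx

def cond_independent_subsets(parents, observed):
    # parents: dict {node: list of parent nodes}; observed: nodes given as side information.
    # Moralize the DAG (drop edge directions, "marry" co-parents), remove the observed
    # nodes, and return the connected components, which are conditionally independent
    # given the observed nodes.
    moral = nx.Graph()
    moral.add_nodes_from(parents)
    for child, pars in parents.items():
        pars = list(pars)
        moral.add_edges_from((p, child) for p in pars)
        moral.add_edges_from((pars[i], pars[j])
                             for i in range(len(pars)) for j in range(i + 1, len(pars)))
    moral.remove_nodes_from(observed)
    return [sorted(c) for c in nx.connected_components(moral)]

# Two toy structures in which X1 and X2 are conditionally independent given Y;
# in both cases X1 and X2 fall into separate subsets.
print(cond_independent_subsets({"Y": [], "X1": ["Y"], "X2": ["Y"]}, ["Y"]))
print(cond_independent_subsets({"X1": [], "Y": ["X1"], "X2": ["Y"]}, ["Y"]))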
Example 2.
Consider the two sources shown in Fig. <ref>. The properties of the BN indicate that the variables X_1 and X_2 in both cases of Fig. <ref> are conditionally independent given Y. By Lemma 2, if the variable Y is revealed to the encoder and decoder as side information, then
R( D_1, D_2 ) = min_{p(x̂_1,x̂_2|x_1,x_2,y): Ed(x̂_1,x_1) ⩽ D_1, Ed(x̂_2,x_2) ⩽ D_2} I(X_1,X_2;X̂_1,X̂_2|Y) = R_X_1|Y(D_1)+R_X_2|Y(D_2),
where
R_X_1|Y(D_1) = min_{p(x̂_1|x_1,y): Ed_1(x̂_1,x_1) ⩽ D_1} I(X_1;X̂_1|Y) and R_X_2|Y(D_2) = min_{p(x̂_2|x_2,y): Ed_2(x̂_2,x_2) ⩽ D_2} I(X_2;X̂_2|Y).
Proof: Since X_1 and X_2 are conditionally independent given Y, we have
H( X_2| X_1, Y ) = H(X_2| Y), and hence
H( X_2, X_1 | Y ) = H(X_1| Y) + H(X_2| Y).
We can bound I(X_1,X_2;X̂_1,X̂_2|Y) by
I(X_1,X_2;X̂_1,X̂_2|Y)
= H(X_1,X_2|Y)-H(X_1,X_2|X̂_1,X̂_2,Y)
= H(X_1|Y)+H(X_2|Y)-H(X_1|X̂_1,X̂_2,Y)
-H(X_2|X_1,X̂_1,X̂_2,Y)
⩾ H(X_1|Y)+H(X_2|Y)-H(X_1|X̂_1,Y)
-H(X_2|X̂_2,Y)
= I(X_1;X̂_1|Y) +I(X_2;X̂_2|Y).
Then we have
R( D_1, D_2 ) = min_{p(x̂_1,x̂_2|x_1,x_2,y): Ed(x̂_1,x_1) ⩽ D_1, Ed(x̂_2,x_2) ⩽ D_2} I(X_1,X_2;X̂_1,X̂_2|Y)
⩾ min_{p(x̂_1|x_1,y): Ed_1(x̂_1,x_1) ⩽ D_1} I(X_1;X̂_1|Y) + min_{p(x̂_2|x_2,y): Ed_2(x̂_2,x_2) ⩽ D_2} I(X_2;X̂_2|Y)
= R_X_1|Y(D_1)+R_X_2|Y(D_2).
We now prove the achievability of the rate-distortion tuple (R_X_1|Y(D_1)+R_X_2|Y(D_2), D_1, D_2). We assume the optimal distributions that achieve R_X_1|Y(D_1) and R_X_2|Y(D_2) are p^*(x̂_1|x_1,y) and p^*(x̂_2|x_2,y), respectively. We consider the case in which (X_1,X̂_1) and (X_2,X̂_2) are conditionally independent given Y, i.e., these variables form a Markov chain (X_1,X̂_1)-Y-(X_2,X̂_2); then we have
I(X_1,X_2;X̂_1,X̂_2|Y)
= H(X_1,X_2|Y)-H(X_1,X_2|X̂_1,X̂_2,Y)
= H(X_1|Y)+H(X_2|Y)-H(X_1|X̂_1,X̂_2,Y)
-H(X_2|X_1,X̂_1,X̂_2,Y)
= H(X_1|Y)+H(X_2|Y)-H(X_1|X̂_1,Y)
-H(X_2|X̂_2,Y)
= I(X_1;X̂_1|Y) +I(X_2;X̂_2|Y)
=R_X_1|Y(D_1)+R_X_2|Y(D_2) ,
where (<ref>) follows from the conditional independence (the Markov chain), and (<ref>) is obtained from the optimal p^*(x̂_1|x_1,y) and p^*(x̂_2|x_2,y). Since R( D_1, D_2 ) = min I(X_1,X_2;X̂_1,X̂_2|Y), we have R( D_1, D_2 ) ⩽ R_X_1|Y(D_1)+R_X_2|Y(D_2). This completes the proof.
Example 3. We first explore the rate-distortion function of a binary semantic source given side information, with the Hamming distortion measure. This semantic source consists of three semantic elements (X_1, X_2, Y) whose probabilistic distribution can be modeled by a BN as shown in Fig. <ref>(a). The pairs (X_1,Y) and (X_2,Y) are doubly symmetric binary sources with parameters p_1 and p_2, respectively, where
p( x_1,y) = [ [ 1 - p_1/2 p_1/2; p_1/2 1 - p_1/2 ]], p( x_2,y) = [ [ 1 - p_2/2 p_2/2; p_2/2 1 - p_2/2 ]].
By summing the joint probability distribution over all values of x_1 and x_2, we can obtain the marginal distribution p(y). The conditional distributions p(x_1|y) and p(x_2|y) can then be obtained through Bayes' rule as
p( x_1|y) = [ [ 1 - p_1 p_1; p_1 1 - p_1 ]], p( x_2|y) = [ [ 1 - p_2 p_2; p_2 1 - p_2 ]].
By Lemma 2, we have R( D_1, D_2 ) = R_X_1|Y(D_1) + R_X_2|Y(D_2). Following the conditional rate-distortion function of binary sources in <cit.>, it yields
R_X_1|Y(D_1)=[h_b(p_1)-h_b(D_1) ]_0 ⩽ D_1 ⩽ p_1 ,
R_X_2|Y(D_2)=[h_b(p_2)-h_b(D_2) ]_0 ⩽ D_2 ⩽ p_2 .
Thus,
R( D_1, D_2 ) =[h_b(p_1)-h_b(D_1) ]_0 ⩽ D_1 ⩽ p_1
+[h_b(p_2)-h_b(D_2) ]_0 ⩽ D_2 ⩽ p_2 .
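A small Python helper evaluating this expression is sketched below; the function names and the example parameter values are illustrative choices, not quantities taken from the paper.

import numpy as np

def h_b(x):
    # Binary entropy in bits.
    x = np.clip(x, 1e-12, 1 - 1e-12)
    return -x * np.log2(x) - (1 - x) * np.log2(1 - x)

def R_binary_given_Y(p, D):
    # Conditional rate-distortion h_b(p) - h_b(D) for 0 <= D <= p, and 0 otherwise.
    return max(h_b(p) - h_b(D), 0.0) if D <= p else 0.0

def R_joint(p1, p2, D1, D2):
    # R(D1, D2) of the two doubly symmetric binary pairs, by Lemma 2.
    return R_binary_given_Y(p1, D1) + R_binary_given_Y(p2, D2)

print(R_joint(0.1, 0.2, 0.05, 0.05))   # rate in bits per source pair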
We then consider the conditional rate-distortion function of a Gaussian source whose probabilistic distribution can be modeled by a BN as shown in Fig. <ref>(a). We use the mean-squared-error distortion measure here. p( x_1,y) is a two-dimensional Gaussian distribution with parameters m_X_1, m_Y, σ_X_1, σ_Y, r_1,
p( x_1,y) = 1/(2 πσ_X_1σ_Y √(1-r_1^2)) exp{ -1/(2(1-r_1^2)) [ ((x_1-m_X_1)/σ_X_1)^2 + ((y-m_Y)/σ_Y)^2 - 2 r_1 (x_1-m_X_1)(y-m_Y)/(σ_X_1σ_Y) ] }.
The conditional distribution p( x_1|y) is also Gaussian,
p( x_1|y) = (2 πσ_X_1^2(1-r_1^2))^{-1/2} exp{ -[x_1-m_X_1-r_1 (σ_X_1/σ_Y)(y-m_Y)]^2 / (2 σ_X_1^2(1-r_1^2)) }.
Therefore, we can obtain the rate-distortion function R_X_1|Y(D_1) according to Shannon's work <cit.>
R_X_1|Y(D_1)=[1/2logσ_X_1^2(1-r_1^2)/D_1]_ 0 ≤ D_1 ≤σ_X_1^2(1-r_1^2).
Similarly, we assume p( x_2,y) also follows a two-dimensional Gaussian distribution with parameters m_X_2, m_Y, σ_X_2, σ_Y, r_2, and R_X_2|Y(D_2) is given by
R_X_2|Y(D_2)=[1/2logσ_X_2^2(1-r_2^2)/D_2]_ 0 ≤ D_2 ≤σ_X_2^2(1-r_2^2).
By Lemma 2, we obtain
R( D_1, D_2 ) =[1/2logσ_X_1^2(1-r_1^2)/D_1]_ 0 ≤ D_1 ≤σ_X_1^2(1-r_1^2)
+[1/2logσ_X_2^2(1-r_2^2)/D_2]_ 0 ≤ D_2 ≤σ_X_2^2(1-r_2^2).
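Again as a hedged illustration (the parameter values below are arbitrary, not taken from the paper), the Gaussian case can be evaluated with a few lines of Python:

import numpy as np

def R_gaussian_given_Y(sigma2, r, D):
    # 0.5*log2(sigma2*(1-r^2)/D) bits for 0 < D <= sigma2*(1-r^2), else 0.
    cond_var = sigma2 * (1.0 - r**2)
    return 0.5 * np.log2(cond_var / D) if 0.0 < D <= cond_var else 0.0

# Sum of the two conditional terms, as in the expression for R(D1, D2) above.
print(R_gaussian_given_Y(1.0, 0.8, 0.1) + R_gaussian_given_Y(2.0, 0.5, 0.3))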
Theorem 4. (Semantic Capacity Theorem) For a discrete memoryless channel, the semantic capacity is given by C = max_P( G ) I( G;G^'). Reliable reconstruction of the transmitted graph G is possible if the transmission rate R satisfies R < C.
Example 5. Consider the transmission of a graph G( V,E); the channel transition probability matrix of the symmetric discrete channel is shown in Fig. <ref>. The semantic capacity is C = 3 - 3H( p ).
Proof: According to the mutual information between the transmitted graph G and the received graph G' in (<ref>), we have
C = max_P( G ) I( G;G^')
= max_P( x_1)[ H( x_1^') - H( x_1^' |x_1)] + max_P( x_2|x_1)[ H( x_2^' |x_1^') - H( x_2^' |x_2)] + max_P( x_3|x_1)[ H( x_3^' |x_1^') - H( x_3^' |x_3)]
= max_P( x_1)[ H( x_1^') - ∑_x_1p( x_1)H( p )] + max_P( x_2|x_1)[ H( x_2^' |x_1^') - ∑_x_2p( x_2)H( p )] + max_P( x_3|x_1)[ H( x_3^' |x_1^') - ∑_x_3p( x_3)H( p )]
= max_P( x_1)[ H( x_1^')] + max_P( x_2|x_1)[ H( x_2^' |x_1^')] + max_P( x_3|x_1)[ H( x_3^' |x_1^')] - 3H( p ) .
We note that H( x_1^'), H( x_2^' |x_1^') and H( x_3^' |x_1^') reach their maximum values when P( x_1), P( x_2|x_1) and P( x_3|x_1) are uniform distributions. Hence, we have
C = max_P( G ) I( G;G^') = 3 - 3H( p ).
Example 4. We consider the Hamming distortion measure, defined as
d( g,g^') = 0 if g = g^', and d( g,g^') = 1 if g ≠ g^'.
The source graph distribution is p( g_1) = p( g_2) = 0.5,
and the quantized graph set is G^' = {g_1^' ,g_2^'}. The rate-distortion function R( D ) is given by R( D ) = log 2 - H( 1 - D,D).
Proof: According to the mutual information in (<ref>), we have
I( G;G^') = H( G ) - H( G|G^')
= H( G ) + H( G^') - H( G,G^')
,
where the mutual information I( G;G^') is a convex function of p( G,G^') for fixed p( G ). We can find the minimum of I( G;G^') subject to equality constraints as
min H( G ) + H( G^') - H( G,G^')
s.t. ∑_i ∑_j p( g_i^' ,g_j)d( g_i^' ,g_j) = D, ∑_i ∑_j p( g_i^' ,g_j) = 1, ∑_i p( g_i^' ,g_j) = p( g_j).
We introduce Lagrange multipliers, S, μ and α _j( j = 1,2), for the constrained distortion. Then we need to minimize the new function
L = H( G ) + H( G^') - H( G,G^')
- S( ∑_i ∑_j p( g_i^' ,g_j)d( g_i^' ,g_j) - D)
- μ( ∑_i ∑_j p( g_i^' ,g_j) - 1)
- ∑_α _jα _j( ∑_i p( g_i^' ,g_j) - p( g_j)) .
Taking the derivative w.r.t. p( g_i^' ,g_j), we have
log p( g_i^' ,g_j) - log∑_j p( g_i^' ,g_j) - Sd( g_i^' ,g_j) - μ - α _j = 0 ,
and (<ref>) can be rewritten as
p( g_j|g_i^') = e^Sd( g_i^' ,g_j)e^μ + α _j,
taking the sum w.r.t. g_j, we have
1 = ∑_j e^Sd( g_i^' ,g_j)e^μ + α _j.
Multiplying the both side of (<ref>) by p( g_i^') and taking the sum w.r.t. g_i^', we have
p( g_j) = ∑_i p( g_i^')e^Sd( g_i^' ,g_j)e^μ + α _j .
Substituting the (<ref>) into (<ref>), we have
{e^0e^μ + α _1 + e^Se^μ + α _2 = 1
e^Se^μ + α _1 + e^0e^μ + α _2 = 1
.,
and
e^μ + α _1 = e^μ + α _2 = 1/1 + e^S.
Substituting the (<ref>) into (<ref>), we have
{
p(g_1^' )e^0e^μ + α _1 + p(g_2^' )e^Se^μ + α _2 = 0.5
p(g_1^' )e^Se^μ + α _1 + p(g_2^' )e^0e^μ + α _2 = 0.5
.,
and
p(g_1^' ) = p(g_2^' ) = 0.5 .
According to the constraint ∑_i ∑_j p( g_i^' ,g_j)d( g_i^' ,g_j) = D, we have
D = ∑_i ∑_j p( g_i^')e^Sd( g_i^' ,g_j)e^μ + α _id( g_i^' ,g_j)
= 1/2( 1/1 + e^Se^S + 1/1 + e^Se^S)
= e^S/1 + e^S,
and we can obtain
e^μ + α _1 = e^μ + α _2 = 1/1 + e^S = 1 - D.
According to (<ref>), we have
p(g_1|g_1^' ) = p(g_2|g_2^' ) = 1 - D.
p(g_2|g_1^' ) = p(g_1|g_2^' ) = D.
By substituting the obtained Lagrange multipliers (<ref>), (<ref>), (<ref>) and (<ref>) into (<ref>), we have
R(D) = 1 - H( 1 - D,D).
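The closed form can be cross-checked numerically with the Blahut–Arimoto algorithm, sketched below in Python. The slope parameter beta and the iteration count are arbitrary illustrative choices, and the code assumes strictly positive conditional probabilities (true here for any finite beta).

import numpy as np

def blahut_arimoto(p_x, dist, beta, n_iter=500):
    # Returns a point (R in bits, D) on the rate-distortion curve,
    # parameterized by the slope parameter beta > 0.
    _, m = dist.shape
    q = np.full(m, 1.0 / m)                        # output marginal q(g')
    for _ in range(n_iter):
        w = q * np.exp(-beta * dist)               # unnormalized p(g'|g)
        cond = w / w.sum(axis=1, keepdims=True)
        q = p_x @ cond
    D = float(np.sum(p_x[:, None] * cond * dist))
    R = float(np.sum(p_x[:, None] * cond * np.log2(cond / q[None, :])))
    return R, D

p_x = np.array([0.5, 0.5])                          # equiprobable source {g1, g2}
dist = np.array([[0.0, 1.0], [1.0, 0.0]])           # Hamming distortion
R, D = blahut_arimoto(p_x, dist, beta=2.0)
h = lambda d: -d * np.log2(d) - (1 - d) * np.log2(1 - d)
print(R, 1 - h(D))                                  # the two values agree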
§ CONCLUSION
In this paper, we investigated the compression of a semantic source that consists of a set of correlated semantic elements whose joint probabilistic distribution can be modeled by a BN. We derived the theoretical limits on lossless and lossy compression of this semantic source, as well as lower and upper bounds on the rate-distortion function. We also investigated the lossy compression problem of the semantic source with side information at both the encoder and decoder. We further proved that the conditional rate-distortion function is equal to the summation of the conditional rate-distortion functions of the conditionally independent sets of variables given the side information. We also derived the conditional rate-distortion functions when the semantic elements of the source are binary and multi-dimensional Gaussian distributed, respectively.
R(D_1,D_2,...,D_m) = min_{p(x̂_1,...,x̂_m|x_1,...,x_m): Ed(x̂_i,x_i) ⩽ D_i, i=1,...,m} I(X_1,X_2,...,X_m;X̂_1,X̂_2,...,X̂_m)
⩾ min_{p(x̂_1,...,x̂_m|x_1,...,x_m): Ed(x̂_i,x_i) ⩽ D_i, i=1,...,m} ∑_i = 1^m I( X_i;X̂_i|X_1,X_2,...,X_i-1)
= R_X_1( D_1)+R_X_2|X_1( D_2)+...+R_X_m|X_1,X_2,...,X_m-1( D_m)
= ∑_i = 1^m R_X_i|X_1,X_2,...,X_i-1( D_i).
00
b1 C. E. Shannon and W. Weaver, The Mathematical Theory of Communication.
Urbana, IL: University of Illinois Press, 1949.
b2 I. Tal and A. Vardy, “List Decoding of Polar Codes," IEEE Transactions on information Theory, vol. 61, no. 5, pp. 2213-2226, May 2015.
b3 F. Rusek et al., “Scaling Up MIMO: Opportunities and Challenges with Very Large Arrays," IEEE Signal Processing Magazine, vol. 30, no. 1, pp. 40-60, Jan. 2013.
b4 Z. Qin, X. Tao, J. Lu, et al., “Semantic communications: Principles and challenges". arXiv preprint arXiv:2201.01389, 2021.
overview1 X. Luo, H. Chen, et al., “Semantic communications: Overview, open issues, and future research directions," IEEE Wireless Communications, vol. 29, no. 1, pp. 210-219, February 2022.
overview2 W. Yang, H. Du, et al., “Semantic Communications for Future Internet: Fundamentals, Applications, and Challenges," IEEE Communications Surveys & Tutorials, 2022.
overview3 D. Gündüz, Z. Qin, et al., “Beyond transmitting bits: Context, semantics, and task-oriented communications," IEEE Journal on Selected Areas in Communications, vol. 41, no. 1, pp. 5-41, Jan. 2023.
b8 H. Xie, Z. Qin, et al., “Deep Learning Enabled Semantic Communication Systems," IEEE Transactions on Signal Processing, vol. 69, pp. 2663-2675, 2021.
overview4 G. Shi, Y. Xiao, et al., “From semantic communication to semantic-aware networking: Model, architecture, and open problems," IEEE Communications Magazine, vol. 59, no. 8, pp. 44-50, August 2021.
overview5 Q. Hu, G.Zhang, et al., “Robust semantic communications against semantic noise," 2022 IEEE 96th Vehicular Technology Conference (VTC2022-Fall), London, United Kingdom, 2022.
overview6 Q. Zhou, R. Li, et al., “Semantic communication with adaptive universal transformer," IEEE Wireless Communications Letters, vol. 11, no. 3, pp. 453-457, March 2022.
b9.1 E. Bourtsoulatze, D. Burth Kurka and D. Gündüz, “Deep Joint Source-Channel Coding for Wireless Image Transmission," IEEE Transactions on Cognitive Communications and Networking, vol. 5, no. 3, pp. 567-579, Sept. 2019.
image1 J. Kang, H. Du, et al., “Personalized saliency in task-oriented semantic communications: Image transmission and performance analysis," IEEE Journal on Selected Areas in Communications, vol. 41, no. 1, pp. 186-201, Jan. 2023.
image2 H. Yoo, T. Jung, et al. “Real-time semantic communications with a vision transformer," 2022 IEEE International Conference on Communications Workshops (ICC Workshops), Seoul, Korea, 2022.
image3 A. Li, X. Liu , et al., “Domain Knowledge Driven Semantic Communication for Image Transmission over Wireless Channels," IEEE Wireless Communications Letters, vol. 12, no. 1, pp. 55-59, Jan. 2023.
image4 Z. Zhang, Q. Yang, et al., “Semantic Communication Approach for Multi-Task Image Transmission," 2022 IEEE 96th Vehicular Technology Conference (VTC2022-Fall), London, United Kingdom, 2022.
text1 X. Peng, Z. Qin, et al., “A robust deep learning enabled semantic communication system for text," GLOBECOM 2022-2022 IEEE Global Communications Conference, Rio de Janeiro, Brazil, 2022.
text2 L.Yan, Z. Qin, R. Zhang, Y. Li and G. Y. Li, “Resource allocation for text semantic communications," IEEE Wireless Communications Letters, vol. 11, no. 7, pp. 1394-1398, July 2022.
video1 P. Jiang, C. K. Wen, S. Jin and G. Y. Li, “Wireless semantic communications for video conferencing," IEEE Journal on Selected Areas in Communications, 4vol. 41, no. 1, pp. 230-244, Jan. 2023.
video2 S. Wang, J. Dai, et al., “Wireless deep video semantic transmission," IEEE Journal on Selected Areas in Communications, vol. 41, no. 1, pp. 214-229, Jan. 2023.
b9 T. Han, Q. Yang, Z. Shi, et al., “Semantic-aware Speech to Text Transmission with Redundancy Removal," 2022 IEEE International Conference on Communications Workshops (ICC Workshops), Seoul, Korea, 2022.
speech2 Z. Weng, Z. Qin, “Semantic communication systems for speech transmission," IEEE Journal on Selected Areas in Communications, vol. 39, no. 8, pp. 2434-2444, Aug. 2021.
speech3T. Han, Q. Yang, et al. “Semantic-preserved communication system for highly efficient speech transmission," IEEE Journal on Selected Areas in Communications, vol. 41, no. 1, pp. 245-259, Jan. 2023.
b5 M. Schuster, K. K. Paliwal, "Bidirectional recurrent neural networks," IEEE Transactions on Signal Processing 45 (11) (1997) 2673-2681.
b6 P. Simard, D. Steinkraus, J. Platt, "Best practices for convolutional neural networks applied to visual document analysis," in: Proceedings. Seventh International Conference on Document Analysis and Recognition, Vol. 3, 2003, pp. 958-963.
b7 S. Hochreiter, J. Schmidhuber, "Long short-term memory," Neural computation 9 (8) (1997) 1735-1780.
b10 D. L. Waltz, An English language question answering system for a large relational database, Communications of the ACM 21 (7) (1978) 526-539.
b11 Q. You, H. Jin, Z. Wang, C. Fang, J. Luo, Image captioning with semantic attention, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 4651-4659.
b14 R. Carnap, Y. Bar-Hillel, An outline of a theory of semantic information. 1952.
b16 L. Floridi. “Outline of a theory of strongly semantic information," Minds and machines, 2004,14(2): 197-221.
b15 N. J. Nilsson, “Probabilistic logic," Artificial Intelligence, vol. 28, no. 1, pp. 71–87, 1986.
Bao J. Bao, P. Basu, M. Dean, C. Partridge, A. Swami, W. Leland, and J. A. Hendler, “Towards a theory of semantic communication," IEEE Network Science Workshop, West Point, NY, USA, Jun. 2011.
b12 A. D. Luca, S. Termini, A definition of a non-probabilistic entropy in the setting of fuzzy sets[J]. Information and Control, 1972(20): 301-312.
b13 A. D. Luca, S. Termini, Entropy of L-Fuzzy Sets[J]. Information and Control, 1974(24): 55-73.
Poor J.Liu,W.Zhang and H.V.Poor, “A Rate-Distortion Framework for Characterizing Semantic Information ," 2021 IEEE International Symposium on Information Theory (ISIT), Melbourne, Australia,2021.
Guo T. Guo, Y.Wang , et al., “Semantic Compression with Side Information: A Rate-Distortion Perspective," arXiv preprint arXiv:2208.06094, 2022.
Shao Y. Shao, Q. Cao , and D. Gunduz . "A Theory of Semantic Communication," arXiv preprint arXiv:2212.01485, 2022.
b21 E. W. Dijkstra. A Note on Two Problems in Connection with Graphs. Numerische Mathematics, 1959.
b22 M. Scanagatta, A. Salmerón and F. A. Stella, “survey on Bayesian network structure learning from data," Prog Artif Intell, 8, 425–439, 2019.
b222 F. V. Jensen, T. D. Nielsen, Bayesian networks and decision graphs. New York: Springer, 2007.
b23 F. S. Nurfikri, M. S. Mubarok and Adiwijaya, “News Topic Classification Using Mutual information and Bayesian Network," 2018 6th International Conference on information and Communication Technology (ICoICT), Bandung, Indonesia, 2018.
b24 J. Luo, A. E. Savakis, A. Singhal. A Bayesian network-based framework for semantic image understanding. Pattern Recognition, 2005, 38(6):919-934.
b25 C. Huang, H. Shih and C. Chao, “Semantic analysis of soccer video using dynamic Bayesian network," IEEE Transactions on Multimedia, vol. 8, no. 4, pp. 749-760, Aug. 2006.
b26 T. M. Cover and J. A. Thomas, Elements of information theory. Wiley-Interscience, 2006.
b27 R. Gray, “A new class of lower bounds to information rates of stationary sources via conditional rate-distortion functions," IEEE Transactions on Information Theory, vol. 19, no. 4, pp. 480-489, July 1973.
b28 C. E. Shannon, E. Claude, “Coding theorems for a discrete soruce with a fidelity criterion," Ire Nat.conf.rec (1959):142-163.
b29 A. Wyner and J. Ziv, “Bounds on the rate-distortion function for stationary sources with memory," IEEE Transactions on Information Theory, vol. 17, no. 5, pp. 508-513, September 1971.
§ APPENDIX I
PROOF OF THE LOWER BOUND IN LEMMA 1
In this section, the rigorous technical proof of the lower bound in Lemma 1 is given.
For any test channel p_t(x̂_1,...,x̂_m|x_1,...,x_m) such that Ed(x̂_1,x_1) ⩽D_1,...,Ed(x̂_m,x_m) ⩽D_m, we have
I( X_1,...,X_m;X̂_1,...,X̂_m)
= H( X_1,...,X_m) - H( X_1,...,X_m|X̂_1,...,X̂_m)
= ∑_i = 1^m H( X_i|Parent(X_i))
- ∑_i = 1^m H( X_i|X̂_1,...,X̂_m,Parent(X_i))
⩾∑_i = 1^m H( X_i|Parent(X_i)) - ∑_i = 1^m H( X_i|X̂_i,Parent(X_i))
= ∑_i = 1^m I( X_i;X̂_i|Parent(X_i))
⩾∑_i = 1^m R_X_i|Parent(X_i)( D_i)
where (<ref>) is obtained by the chain rule and the property of the BN, and (<ref>) follows from the fact that conditioning reduces entropy. The lower bound can be obtained by taking the infimum of I( X_1,...,X_m;X̂_1,...,X̂_m ) over the proper set of p_t(x̂_1,...,x̂_m|x_1,...,x_m). (<ref>) can be derived from the definition of R_X_i|Parent(X_i)( D_i) given in (<ref>).
Then, we have
R(D_1,D_2,...,D_m) = min_{p(x̂_1,...,x̂_m|x_1,...,x_m): Ed(x̂_i,x_i) ⩽ D_i, i=1,...,m} I(X_1,X_2,...,X_m;X̂_1,X̂_2,...,X̂_m)
⩾ min_{p(x̂_1,...,x̂_m|x_1,...,x_m): Ed(x̂_i,x_i) ⩽ D_i, i=1,...,m} ∑_i = 1^m I( X_i;X̂_i|Parent(X_i))
= ∑_i = 1^m R_X_i|Parent(X_i)( D_i).
§ APPENDIX II
PROOF OF LEMMA 2
In this section, the rigorous proof of Lemma 2 is given.
Since the subsets 𝒱_1,...,𝒱_l are conditionally independent given Y according to the properties of the BN, we have
H( 𝒱_l| 𝒱_l-1,...,𝒱_1, Y ) = H(𝒱_l| Y),
H( 𝒱_1,...,𝒱_l | Y ) = ∑_i = 1^l H(𝒱_i| Y).
We can bound I(𝒱_1,...,𝒱_l;𝒱̂_1,...,𝒱̂_l|Y) by
I(𝒱_1,...,𝒱_l;𝒱̂_1,...,𝒱̂_l|Y)
= H(𝒱_1,...,𝒱_l|Y)-H(𝒱_1,...,𝒱_l|𝒱̂_1,...,𝒱̂_l,Y)
= ∑_i = 1^l H(𝒱_i|Y) - ∑_i = 1^l H(𝒱_i|𝒱̂_1,...,𝒱̂_l,𝒱_1,...,𝒱_i-1,Y)
⩾∑_i = 1^l H(𝒱_i|Y) - ∑_i = 1^l H(𝒱_i|𝒱̂_i,Y)
= ∑_i = 1^l I(V_i;𝒱̂_i|Y) ,
where (<ref>) follows from the conditional independence property of the BN as shown in (<ref>) and the chain rule, and (<ref>) is obtained by the fact that conditioning reduces entropy. Then we have
R_X_1,...,X_m|Y(D_1,...,D_m) = min_{p(x̂_1,...,x̂_m|x_1,...,x_m,y): Ed(x̂_i,x_i) ⩽ D_i, i=1,...,m} I(𝒱_1,...,𝒱_l;𝒱̂_1,...,𝒱̂_l|Y)
⩾ min_{p(x̂_1,...,x̂_m|x_1,...,x_m,y): Ed(x̂_i,x_i) ⩽ D_i, i=1,...,m} ∑_i = 1^l I(𝒱_i;𝒱̂_i|Y)
= ∑_i = 1^l R_𝒱_i|Y( D_j, j∈𝒱_i).
We now prove the achievability of the rate-distortion tuple (∑_i = 1^l R_𝒱_i|Y( D_j, j∈𝒱_i), D_1,...,D_m). Let p^*(v̂_1|v_1,y),...,p^*(v̂_l|v_l,y) be the separate codes that achieve the rate-distortion functions R_𝒱_1|Y(D_j, j∈𝒱_1),..., R_𝒱_l|Y(D_j, j∈𝒱_l). We construct a joint rate-distortion code p(v̂_1,...,v̂_l|v_1,...,v_l,y)=∏_i = 1^l p^*(v̂_i|v_i,y), such that (𝒱_1,𝒱̂_1),..., (𝒱_l,𝒱̂_l) are conditionally independent given Y. Then, the rate of this code is given by
I(𝒱_1,...,𝒱_l;𝒱̂_1,...,𝒱̂_l|Y)
= H(𝒱_1,...,𝒱_l|Y)-H(𝒱_1,...,𝒱_l|𝒱̂_1,...,𝒱̂_l,Y)
= ∑_i = 1^l H(𝒱_i|Y) - ∑_i = 1^l H(𝒱_i|𝒱̂_1,...,𝒱̂_l,𝒱_1,...,𝒱_i-1,Y)
= ∑_i = 1^l H(𝒱_i|Y) - ∑_i = 1^l H(𝒱_i|𝒱̂_i,Y)
= ∑_i = 1^l I(𝒱_i;𝒱̂_i|Y)
= ∑_i = 1^l R_𝒱_i|Y( D_j, j∈𝒱_i),
where (<ref>) follows from the conditional independence assumption and the property of BN in (<ref>), and (<ref>) is obtained from the optimal p^*(v̂_1|v_1,y),..., p^*(v̂_l|v_l,y). Then, we have
R_X_1,...,X_m|Y(D_1,...,D_m) = min_{p(x̂_1,...,x̂_m|x_1,...,x_m,y): Ed(x̂_i,x_i) ⩽ D_i, i=1,...,m} I(𝒱_1,...,𝒱_l;𝒱̂_1,...,𝒱̂_l|Y)
⩽ ∑_i = 1^l R_𝒱_i|Y( D_j, j∈𝒱_i).
This proves the achievability of (∑_i = 1^l R_𝒱_i|Y( D_j, j∈𝒱_i), D_1,...,D_m), and together with (<ref>) completes the proof of Lemma 2.
nR
⩾∑_i = 1^n I( X_1,i,...,X_m,i,Y_i;(X̂_1,...,X̂_m)^n|X_1^i - 1,...,X_m^i - 1,Y^i - 1)
- ∑_i = 1^n I( Y_i;(X̂_1,...,X̂_m)^n|Y^i - 1)
=∑_i = 1^n I( X_1,i,...,X_m,i,Y_i;(X̂_1,...,X̂_m)^n,X_1^i - 1,...,X_m^i - 1,Y^i - 1)
- ∑_i = 1^n I( Y_i;(X̂_1,...,X̂_m)^n,Y^i - 1)
= ∑_i = 1^n I( X_1,i,...,X_m,i;X̂_1,i^,...,X̂_m,i^|Y_i)
+ ∑_i = 1^n I( X_1,i,...,X_m,i,Y_i;X̂_1^i - 1,...,X̂_m^i - 1,|(X̂_1,...,X̂_m)^n,Y_1^i-1)
+ ∑_i = 1^n I( X_1,i,...,X_m,i;X̂_1^i - 1,X̂_1,i + 1^n,...,X̂_m^i - 1,X̂_m,i + 1^n,.
.Y^i - 1|X̂_1,i^,...,X̂_m,i^,Y_i)
⩾∑_i = 1^n I( X_1,i,...,X_m,i;X̂_1,i^,...,X̂_m,i^|Y_i)
⩾∑_i = 1^n R( Ed_1( X_1,i,X̂_1,i^),...,Ed_m( X_m,i,X̂_m,i^))
⩾ nR( Ed_1( X_1^n,X̂_1^n),...,Ed_m( X_m,i,X̂_m^n))
⩾ nR( D_1,...,D_m),
|
http://arxiv.org/abs/2306.10630v1
|
20230618194703
|
Collective States in 2D Molecular Monolayers
|
[
"Sabrina Juergensen",
"Moritz Kessens",
"Charlotte Berrezueta-Palacios",
"Nikolai Severin",
"Sumaya Ifland",
"Jürgen P. Rabe",
"Niclas S. Mueller",
"Stephanie Reich"
] |
physics.atm-clus
|
[
"physics.atm-clus"
] |
Collective excited states form in organic two-dimensional monolayers through the Coulomb coupling of the molecular transition dipole moments. They manifest as characteristic strong and narrow peaks in the excitation and emission spectra that are shifted to lower energies compared to the monomer transition. We study experimentally and theoretically how robust the collective states are against homogeneous and inhomogeneous broadening as well as spatial disorder that occurs in real molecular monolayers. Using a microscopic model for a two-dimensional dipole lattice in real space we calculate the properties of collective states and their extinction spectra. We find that the collective states persist even for 1-10% random variation in the molecular position and in the transition frequency; with similar peak position and integrated intensity as for the perfectly ordered system. We measure the optical response of a monolayer of the perylene-derivative MePTCDI on two-dimensional materials. On the wide band-gap insulator hexagonal boron nitride it shows strong emission from the collective state with a line width that was dominated by the inhomogeneous broadening of the molecular state. When using the semimetal graphene as a substrate, however, the luminescence is completely quenched. By combining optical absorption, luminescence, and multi-wavelength Raman scattering we verify that the MePTCDI molecules form very similar collective monolayer states on hexagonal boron nitride and graphene substrates, but on graphene the line width is dominated by non-radiative excitation transfer from the molecules to the substrate. Our study highlights the transition from the localized molecular state of the monomer to a delocalized collective state in the 2D molecular lattice that is entirely based on Coulomb coupling between optically active excitations, which can be excitons, vibrations or other transition dipoles. The outstanding properties of organic monolayers make them promising candidates for components of soft-matter optoelectronic devices.
§ INTRODUCTION
Organic two-dimensional (2D) materials have emerged as intriguing candidates to replace conventional semiconductors and have become a broad research focus over the last years.<cit.> Non-covalently bound organic 2D monolayers have mostly been grown from planar molecules, such as derivatives of the organic dye molecule perylene, that build highly ordered lattices guided by long- and short-range molecule-molecule interactions.<cit.> The coupling between the transition dipoles of many molecules leads to collective states with short life times, high emission rates, and narrow line widths.<cit.> Of particular interest are the optically active excitonic transitions in such 2D molecular lattices that occur in the visible and near ultra-violet energy range. Their collective excitation is red-shifted compared to the monomer transition with a vanishing Stokes shift between the excitation and emission energy.<cit.> The bright light emission may be exploited in optoelectronic devices that will be tunable by molecular synthesis as well as by changes in the monolayer environment.<cit.>
The Coulomb coupling of molecular transition dipoles is an interesting model to study the transition from localized molecular excitations to collective 2D states that are delocalized in space. Collective molecular excitations resemble the formation of collective plasmons in 2D monolayers of metal nanoparticles and 2D arrays of atoms in optical lattices.<cit.> Molecular monolayers strongly enlarge the parameter space for such artificial 2D systems: The lattice constants of molecular monolayers are on the order of 1 nm, which is much smaller than for plasmonic (10 nm) and optical lattices (100-1000 nm) allowing a translational periodicity that is two orders of magnitude smaller than the wavelength of light. The effects of energetic disorder and of the lattice environment are typically neglected in plasmonic and atomic 2D lattices, because the structures are either very precisely controlled and isolated or are large enough to be less sensitive to imperfections.<cit.> This situation is very different for molecular lattices. Molecular transitions show strong inhomogeneous broadening (> 10% of their transition frequency) resulting in lattices that are composed of different transition dipoles. Molecular self organization during growth is mainly driven by weak intermolecular forces and by the interaction with the substrate making molecular 2D lattices potentially more prone to disorder.<cit.> The presence of the substrate screens dipole coupling and may quench excited molecular states providing a strong non-radiative decay channel.<cit.> The question arises how robust collective states are against spatial and energetic disorder and how quenching affects its formation, energetic position, and line width.
Here, we study collective molecular states in organic 2D lattices of perylene-derivatives on 2D materials and their dependence on disorder. Using a microscopic theory of collective dipoles we show how the energetic position of the collective eigenstate depends on the number of interacting molecules, their properties, and packing density. The state is robust against disorder in position and transition frequencies, which broadens the collective state but causes little or no shift in its peak position. We realize 2D molecular lattices by growing N,N'-Dimethyl-3,4,9,10-Perylentetracarboxylicdiimide (MePTCDI) on multi-layer hexagonal boron nitride (hBN) and graphene where it forms a square 2D lattice. On hBN, the collective MePTCDI state shows a red shift of 60 meV and 60% reduction in the full width at half maximum (FWHM) compared to the monomer transitions, which agrees with the microscopic description. We show that the collective state is also present on graphene as a conductive substrate: although light emission is quenched by five orders of magnitude, the state manifests in optical absorption and has a resonance in the Raman response.
§ RESULTS AND DISCUSSION
§.§ Theory of collective molecular excitations
Collective states in molecular dimers and small aggregates are a well-known phenomenon for molecules in solution, where they arise from stacking and alignment of molecular transition dipoles.<cit.> In so-called J-aggregates the molecules are in a head-to-tail configuration, whereas in H-aggregates the transition dipoles are arranged side-by-side.<cit.> A magic angle of 54.7° between the dipole moments of two molecules represents the transition between the red-shift that is typical for J- and the blue-shift characteristic for H-aggregates.<cit.> In this model, J-aggregates show the characteristic optical properties of a superradiant state that has a larger transition dipole moment than the monomer.<cit.> The concept of J- and H- aggregates works well for dimers, small agglomerates, and one-dimensional molecular chains but fails to describe 2D molecular monolayers, since 2D lattices necessarily combine head-to-tail and side-by-side arrangements, see Fig. <ref>a for a sketch considering the nearest neighbors of a molecule. Although there are extensions to the intermediate I-aggregate in 2D, such models typically consider other types of interactions in addition to dipole-dipole coupling.<cit.>
To model collective molecular states in a 2D lattice we consider a finite 2D arrangement of molecules that interact through their transition dipole moments.<cit.> Each molecular transition is represented by a point dipole d=α E_0 induced by the field E_0 with the polarizability
α = d_ge^2/ħ(ω_0 - ω - iγ_0),
where ω is the driving frequency, ω_0 the molecular transition frequency, 2 γ_0 its spectral broadening, and d_ge the transition dipole moment. Each individual dipole interacts via its electric field with all other transition dipoles. The interaction changes the dipole moment at each lattice site and gives rise to collective dipolar eigenmodes m_p with polarizability α_p, see Methods for details. The individual polarizability α is replaced by the collective lattice polarizability <cit.>
α_coll=∑_pα_p=∑_p d_ge^2/ħ(ω_0-ω+Δ_p)-iħ(γ_0+γ_p).
The collective transitions are shifted in frequency by Δ_p=d_ge^2Re(g_p) compared to ω_0, where g_p is the complex eigenvalue of the Green's function describing the near- and far-field dipole-dipole coupling, see Methods. In addition, the decay constant is increased by γ_p=d_ge^2Im(g_p), which is the characteristic increase in the emission rate observed in molecular aggregates. The collective response of the molecular lattice can be measured experimentally by the extinction and compared to a calculated extinction coefficient that depends on the sum over α_p, see Methods for details.
In Fig. <ref>b we show the shift in the energy of selected collective states in an N× N square lattice. The absolute magnitude of the shift depends on the collective eigenvector, the individual transition dipoles, the packing density, lattice size and type, and dielectric screening. The eigenenergies shift with N but for most modes saturate for 100 dipoles in the finite lattice (N=10), Fig. <ref>b. The eigenmode with the strongest dipole moment, eigenvector I in Fig. <ref>c, has all dipoles oriented parallel resulting in a maximum shift of 200 meV or 10% of the transition frequency. The mode with the second strongest dipole moment corresponds to an anti-bonding configuration with three stripes of dipoles that are aligned side-by-side (vector II). For small lattices it results in a blue shift of the collective state, which converges towards zero shift in larger lattices. Interestingly, eigenvector III with a corresponding pattern varying along the y axis has a red shift of ∼ 300meV, Fig. <ref>b, i.e., higher than for the perfectly parallel dipoles in eigenvector I. When all dipoles oscillate in-phase along x the coupling is attractive for dipoles along the oscillation direction but repulsive perpendicular to it, Fig. <ref>a. Modes with anti-parallel stripes of dipoles, therefore, have a stronger binding contribution and a larger frequency shift. The mode with the highest energy shift, eigenvector IV, has single lines with alternating polarization direction resulting in the configuration with the strongest bonding character along the lines and perpendicular to them.
Absorption and emission of the perfect 2D dipole lattice is dominated by a few bright modes, while the majority of the eigenvectors are dipole forbidden.<cit.> Eigenvector I has a dipole moment of 94 D, which increases its radiative decay γ_p by a factor of 88 compared to the individual dipole (N=10). For mode II and III the increase amounts to a factor of five and two, respectively. The increase in the radiative decay by approximately Γ_rad(ω) ∝ N^2 is known as the Dicke superradiance <cit.> that arises from the collective emission of many molecules.
While the rate of spontaneous emission increases for specific modes in Fig. <ref>c, the overall integrated intensity remains the same for arrays of coupled and uncoupled dipoles. The absorption or extinction spectrum of the 2D lattice is governed by the total response of all collective eigenstates resulting in a single dominant peak at the energy of the collective mode with the strongest dipole moment, Fig. <ref>d. The integrated intensity of the extinction peak follows σ_ext∝ 0.9 N^2. This means that the extinction spectrum of a 2D lattice is of similar intensity for coupled and uncoupled transition dipoles, because individual dipole intensities have an overall σ_ext∝ N^2 through the sum of all uncoupled contributions. Nevertheless, the peak position is a clear fingerprint for a collective state.
The formation of the collective state, its energetic position, and even the intensity of the extinction spectrum are surprisingly robust against spatial disorder and inhomogeneous broadening of the molecular transition, which we study in our real space simulations, Fig. <ref>. We first consider spatial disorder that may arise from variations of the dipole positions so that the dipole monolayer deviates from the perfect 2D lattice. We model this by varying the dipole positions 𝐫_i + δ𝐫_i in the xy plane around the sites 𝐫_i of the perfect lattice, where δ𝐫_i is obtained through random sampling from a multivariate Gaussian distribution. To account for the large sample area that is typically probed in experiments we averaged the extinction spectrum over several random lattices, Fig. <ref>a.
Disorder increases the width and decreases the maximum intensity of the extinction spectrum, but the peak area remains within 95% of the original intensity for a standard deviation of σ=10%, inset in Fig. <ref>a. Collective states continue to form despite the spatial disorder, but the delocalized eigenvectors of the perfect lattice become more localized with disorder, right panels in Fig. <ref>a. The frequency shift of the collective excitation increases slightly in the disordered lattice, Fig. <ref>a, because the dipole-dipole interaction scales with 1/|𝐫_i - 𝐫_j |^3.<cit.> A (random) decrease in the dipole distance causes a larger red shift than the blue shift induced by the corresponding increase in dipole distance.
Another source of disorder is inhomogeneous broadening or fluctuations in the transition frequency from one dipole to the next. For most dye molecules the luminescence and absorption linewidths at room temperature (≈ 10 - 100 meV) are dominated by inhomogeneous broadening compared to the much smaller radiative decay constants (γ_rad≈ 10^-8-10^-6 eV). At first sight, this appears to be a more serious distortion for the formation of a collective state that depends on the Coulomb coupling of transition dipoles (or the absorption and emission of virtual photons). However, the collective state remains present despite random fluctuations in excitation frequencies, see Fig. <ref>b and Methods for details on the simulations. While the dominant collective mode continues to be delocalized in the lattice, it is formed by a subset of dipoles, right panels in Fig. <ref>b. As for spatial disorder, inhomogeneous broadening has little effects on the integrated intensity (90% intensity for 4% disorder). Although the width of the collective extinction peak increases with the inhomogeneously broadened dipoles, its line width is smaller than expected from the variations in the individual peak positions. We fit the spectra in Fig. <ref>b with a single Lorentzian and determined the inhomogeneous contribution to the FWHM. The inhomogeneous contribution in the 10×10 dipoles lattice was only 30% of the inhomogeneous broadening for the individual dipoles, because the collective eigenvectors combine dipoles of similar frequency in the formation of the collective state, see right panels in Fig. <ref>b.
Finally, we show the effect of an additional non-radiative decay channel or homogeneous broadening by varying γ_0 in Fig. <ref>c. This additional contribution affects all individual dipoles in exactly the same way and leads to an increase in the FWHM of the collective eigenmodes. The cooperative frequency shift Δ_p = d_ge^2 Re(g_p), on the other hand, depends only on the transition dipole d_ge and the geometry of the lattice via Re(g_p). It is independent of γ_0 as is confirmed by the constant peak position in Fig. <ref>c.
To summarize, we simulated a 2D molecular monolayer by a lattice of interacting transition dipoles in real space. The interaction between the molecules leads to collective eigenstates that give rise to a strong red shift of the predicted excitation frequencies. This process is robust against spatial and energetic disorder, because of the strong dipole-dipole coupling in the tightly packed molecular layers. We also find that an increase in non-radiative decay will not affect the collective frequency that for a perfect lattice only depends on the individual transition frequency, the strength of the transition dipole, the 2D lattice type, its lattice constant and the screening by the environment. We now realize the proposed lattices experimentally to study the collective states by optical spectroscopy.
§.§ Growth and structure of MePTCDI monolayers
Monolayers of MePTCDI were grown on multi-layers of hBN as an insulating and graphene as a conductive substrate, see Methods. MePTCDI is a planar dye molecule that belongs to the perylene family with a conjugated π-system, see molecular structure in Fig. <ref>d. It was previously shown to form micron-sized monolayers on atomically flat hBN.<cit.> We initially characterize the MePTCDI structure on hBN with fluorescence microscopy. Figure <ref>a shows an hBN flake with MePTCDI molecules on top that were deposited by physical vapor deposition, see Methods. The green luminescence is emitted from an MePTCDI monolayer that arranged non-covalently on the hBN substrate.<cit.> It has an emission maximum at 2.25 eV (551 nm) as we will discuss in detail in Sect. <ref>. The monolayer areas dominate the sample in Fig. <ref>a. They account for 84% of the hBN area. The red areas (16%) are molecular aggregates where the molecules stacked in 3D structures and interact via π-π coupling in addition to the Coulomb interaction.<cit.> The agglomerate fluorescence spectrum peaks at 1.75 eV (709 nm), Supplementary Fig. <ref>, resulting in the red appearance in the fluorescence microscope image.
We determined the structure of the MePTCDI monolayers using high-resolution AFM, see Fig. <ref>b. The image shows an almost square lattice with lattice constants a=b=(11.8±0.3) Å and an angle ∠𝐚,𝐛=(84±2)^∘ as determined from a fast Fourier transform (FFT) of the AFM image, see inset. Figure <ref>d sketches the obtained monolayer structure that corresponds to the so-called brick stone lattice.<cit.> It results from the orientation of the transient molecular dipole moments plus the repulsion of the positively charged oxygen atoms. Dipole-dipole coupling aligns the molecules in a line along their long axis. The next row of molecules is placed in parallel but shifted to maximize the distance between the oxygen atoms on neighboring MePTCDI molecules. Multilayers of MePTCDI form a herringbone structure,<cit.> but the brick stone lattice in Fig. <ref>b has a denser packing and is therefore favored in the monolayer. The MePTCDI monolayer structure on multi-layer graphene, Fig. <ref>c, is identical to the one on hBN, Fig. <ref>b. This shows that the monolayer structure is determined by intermolecular interactions via Coulomb coupling and oxygen repulsion; the two-dimensional crystals only ensure the flat arrangement. Due to their structural similarity, the two MePTCDI monolayers are excellent candidates to study how the interaction with the substrates affects the molecular transitions and the formation of collective states.
§.§ Collective MePTCDI Exciton
Light absorption and emission from the MePTCDI monolayer, Fig. <ref>, has a strong peak at 2.25 eV, which originates from collective molecular states. It is shifted by 60 meV to smaller energies compared to the monomer (2.31 eV), Fig. <ref>a. This is in good agreement with Zhao et al.<cit.>. The emission is polarized, inset in Fig. <ref>a, which confirms optically that the molecules form a highly orientated lattice. In addition to the red shift of the emission, we find a vanishing Stokes shift in the monolayer absorption and emission, Fig. <ref>b. The Stokes shift of MePTCDI is already quite small in solution (60 meV), but the lower flanks of the absorption and emission peak are identical in the monolayer.<cit.> Other signatures of the collective state are the increase in the dominant zero-phonon-line and the much narrower line width of the monolayer emission compared to the molecules in solution, Table <ref>.<cit.>
We calculated the expected frequency shift for a 2D lattice of MePTCDI monomers with an excitation ħω_0=2.31eV, ħγ_0=5meV, d_ge=8.8 D, and ϵ_m=2.7, Fig. <ref>c. The dipoles were oriented at 45^∘ in a square lattice as dictated by the lattice structure, Fig. <ref>d. We obtain a calculated red shift of 40 meV for the collective MePTCDI state, which is smaller than the experimental shift. However, the simulation has a number of uncertainties: First, the molecules are large compared to their distance, which means that the point dipole approximation is not strictly valid for this configuration. Second, ħω_0 was measured in solution and might actually differ on a solid substrate. Also, we assumed the screening by the hBN substrate to yield an effective background refractive index of n=1.64 (ϵ_m=2.7), which corresponds to half space filling by the substrate and may overestimate the screening within the layer, see Supplementary Information.
The emission spectrum of the MePTCDI monolayer has a narrow line width (FWHM = 36 meV) with a slightly larger width (45 meV) in absorption. The absorption is broader because all states with finite dipole moment contribute to excitation, but light emission occurs predominantly from the lowest-lying optically active states. Despite its narrow appearance, the line width of the collective MePTCDI state remains dominated by inhomogeneous broadening with little contribution from spatial disorder. The FWHM of the monolayer amounts to 40% of the molecular transition, Table <ref>, in reasonable agreement with the predicted narrowing (30%, Fig. <ref>b). The dominance of inhomogeneous broadening is also confirmed by time-resolved measurements that reported a lifetime of 30 ps and nearly 100% quantum yield for the monolayer,<cit.> which yields a lifetime limited FWHM ≈ 30 μeV, i.e., three orders of magnitude below the observed width.
On graphene the characteristic MePTCDI monolayer emission vanishes. Instead we observe the Raman spectrum of the monolayer, Fig. <ref>a. The peaks at 1309 cm^-1 and 1392 cm^-1 correspond to ring stretch modes of the perylene core while the mode at 1588 cm^-1 belongs to C=C stretching of the carbon rings.<cit.> We attribute the quenching of the luminescence to a Förster resonant energy transfer that is very efficient, because of the small distance (0.3nm) between a planar dye and graphene, the parallel alignment of the transition dipoles, and the broadband absorption of multi-layer graphene in the visible and near IR. Experimentally, the intensity loss is at least five orders of magnitude or γ_0/γ_m→ G≈10^-5, where γ_m→ G is the rate of excitation transfer from the molecule into graphene and γ_0 the intrinsic molecular decay rate.
To verify that the collective state of MePTCDI exists in the presence of graphene as a strong quenching agent, we measure the excitonic transition by absorption and resonant Raman scattering, Fig. <ref>. The MePTCDI absorption on graphene has a main peak at essentially the same energy as on hBN but twice its line width, see Table <ref> and Fig. <ref>b. The small shift in collective frequency between hBN and graphene is explained by the stronger screening. The increase in FWHM implies an additional broadening by a non-radiative channel with γ_nr≈ 42meV. Assuming γ_nr = γ_m → G and a lifetime of the MePTCDI transition on the order of 1 ns (Refs. <cit.>) we estimate a relative transfer γ_0/γ_m→ G≈ 10^-5, which agrees with the intensity loss of the MePTCDI monolayer on graphene.
In resonant Raman scattering we determined the integrated intensity of the ring stretch mode at 1309 cm^-1 as a function of excitation energy, Fig. <ref>c. The profile has a maximum at 2.26 eV with a FWHM of 140 meV that corresponds to the incoming resonance with the collective MePTCDI state. We did not observe an outgoing resonance in the profile expected at the incoming resonance plus the phonon frequency.<cit.> Nevertheless, the Raman resonance clearly demonstrates that the collective dipole state exists for MePTCDI on graphene, although its emission is suppressed. Independent of the dielectric environment and excitation transfer, the 2D molecular monolayer forms a collective state on flat surfaces. This remains true although in our system the broadening exceeded the coupling-induced red shift of the molecular excitonic transition of the MePTCDI monolayer.
§ CONCLUSION
In conclusion, we studied the transition of a localized excitation in a molecular monomer to a delocalized collective state in a molecular monolayer considering homogeneous and inhomogeneous broadening and spatial disorder. We presented a model based on the point-dipole approximation to calculate the eigenstates of 2D lattices and their extinction spectra. The interaction between the transition dipoles leads to collective states in the lattice and a red-shifted extinction spectrum. The shift depends on the radiative lifetime of the monomer (or the transition dipole) and the packing density of the molecular lattice. We found that spatial and energetic disorder as well as homogeneous non-radiative broadening lead to an increase in the linewidth of the collective state but hardly affect its transition frequency. Due to the strong dipole interaction and the small distance of molecules in a typical 2D lattice, the collective state is very robust. To study the predicted behavior experimentally, we grew monolayers of MePTCDI on hBN and graphene substrates. We observed a brick stone lattice in the monolayer on 2D materials with a single molecule in the 2D unit cell. The collective monolayer state was shifted by 60 meV compared to the monomer transition; this shift occurred on both hBN and graphene. On hBN we observed strong luminescence with characteristic signatures of superradiance like a narrowing of the transition and a vanishing Stokes shift. On graphene, the energy transfer from the MePTCDI molecules to the substrate led to an additional homogeneous broadening of 85 meV, but the collective state remained at the same energy. Our study shows that collective states in 2D molecular lattices are robust against disorder that results in additional homogeneous and inhomogeneous broadening. This paves the way for superradiant devices using soft materials with scalable fabrication techniques and solution-based processing.
§ METHODS
§.§ Microscopic Model of Collective Dipoles
We describe the excitons of each molecule as point dipoles d = α E_0 that are excited by an external electric field E_0.
α = d_ge^2/ħ(ω_0 - ω - iγ_0)
is the polarizability of each individual dipole, with ω the driving frequency, ω_0 the exciton frequency, 2 γ_0 its spectral broadening, and d_ge the transition dipole moment.
In a 2D lattice, the individual dipole moments are modified by the coupling to the electric fields of other dipoles (Fig. <ref>a), which changes the dipole moment at lattice site i to
d_i = α E_0(𝐫_i) + α∑_j ≠ i G_ij d_j,
where G_ij≡G(𝐫_ij) is a Green function that accounts for near-field and far-field coupling, with
G(𝐫_ij)𝐝_j = k^3/4πϵ_0ϵ_me^ik r_ij[( 1/k r_ij + i/(k r_ij)^2 - 1/(k r_ij)^3)𝐝_j
- ( 1/k r_ij + 3i/(k r_ij)^2 - 3/(k r_ij)^3) (𝐫̂_ij·𝐝_j)𝐫̂_ij].
Here 𝐫_ij = 𝐫_i - 𝐫_j, r_ij=|𝐫_ij|, and ϵ_m the dielectric screening by the substrate.
We assume that all dipoles oscillate along the same axis and therefore omit vector notation. Following Ref. <cit.> we write Eq. (<ref>) as 𝐄_0 = M𝐝, with a coupling matrix M that has the general form M_ij = δ_ijα^-1 - (1-δ_ij)G_ij. The entries of the vectors 𝐄_0 and d stand for each lattice site. To understand the collective behavior, we use an eigenmode expansion
M𝐦_p = μ_p 𝐦_p,
with the complex eigenvectors 𝐦_p and eigenvalues μ_p of M. The collective polarizability
α_coll = ∑_p α_p = ∑_p 1/μ_p= ∑_p d_ge^2/ħ (ω_0 - ω + Δ_p) - iħ (γ_0 + γ_p)
is given by the polarizability α_p of each mode. The dipole-dipole interaction leads to a frequency shift Δ_p = d_ge^2 Re(g_p) and broadening γ_p = d_ge^2 Im(g_p) of each mode, where g_p are the complex eigenvalues of G. The response of the dipole lattice to an external electric field is described by the collective dipole moment
𝐝 = α_coll𝐄_0 = ∑_p b_p α_p 𝐦_p,
with expansion coefficients b_p defined by a vector decomposition of the electric field 𝐄_0 = ∑_p b_p 𝐦_p. The collective response can be measured experimentally by the extinction of the dipole lattice
σ_ext = k/ϵ_0 E_0^2Im(𝐄_0^†·𝐝) ≈k/ϵ_0 E_0^2∑_p | b_p |^2 Im(α_p).
For the simulations of the perfect lattice we assumed dipoles with ħω_0=2eV, ħγ_0=5meV, and d_ge=10D. The dipole polarization was along the x axis (horizontal in the eigenvector plots of Fig. <ref>). They were placed in an N× N square lattice with a lattice constant a=1nm. We calculated the eigenenergies, extinction spectra, and eigenvectors for N=1-10. The simulations for the experimental MePTCDI monolayer used ħω_0=2.31eV and d_ge=8.8D as measured in solution. The broadening parameter was set to ħγ_0=5meV. We simulated a 20×20 lattice with a=1.2nm and the dipoles oriented at 45^∘ to the lattice vectors. To describe the dielectric screening by the hBN and graphene substrates we averaged the dielectric contribution by vacuum on top (ϵ_m=1) and by a half space filled with the van der Waals material underneath the MePTCDI layer. With a direction-averaged dielectric constant of ϵ_hBN=4.3 we obtain a background dielectric constant ϵ_m(hBN)=2.7; for graphene, ϵ_G=5.03 yields ϵ_m(G)=3.0.<cit.> This approach is an upper bound for the screening by the environment. A smaller screening would increase the predicted frequency shift due to the formation of the collective state.
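The coupled-dipole model described above is straightforward to implement numerically. The following Python sketch is an illustrative implementation (not the authors' original code): it builds the off-diagonal coupling matrix for an N×N square lattice of x-polarized point dipoles from the Green function above, extracts the collective eigenmodes g_p, and evaluates the extinction spectrum for homogeneous illumination. Parameter values follow the perfect-lattice simulations described in this paragraph; ϵ_m = 1 (vacuum) and the overall prefactors and sign conventions are assumptions that should be checked against the original implementation.

```python
import numpy as np

# physical constants (SI)
hbar, eV = 1.054571817e-34, 1.602176634e-19
eps0, c0 = 8.8541878128e-12, 2.99792458e8
debye = 3.33564e-30

# parameters of the perfect-lattice simulation (see text)
N, a = 10, 1.0e-9              # N x N square lattice, lattice constant (m)
hw0, hgamma = 2.0, 5e-3        # hbar*omega_0 and hbar*gamma_0 (eV)
d_ge = 10 * debye              # transition dipole moment (C m)
eps_m = 1.0                    # background screening (assumed: vacuum)

ix, iy = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
pos = np.stack([ix.ravel() * a, iy.ravel() * a], axis=1)   # dipole positions (m)
n_dip = len(pos)
k = np.sqrt(eps_m) * hw0 * eV / (hbar * c0)                # wavevector at hbar*omega_0

def green_xx(r_vec):
    """x-x component of the dipole-dipole Green function (all dipoles polarized along x)."""
    r = np.linalg.norm(r_vec)
    kr = k * r
    cos2 = (r_vec[0] / r) ** 2                             # (x_hat . r_hat)^2
    pref = k**3 / (4 * np.pi * eps0 * eps_m) * np.exp(1j * kr)
    t1 = 1 / kr + 1j / kr**2 - 1 / kr**3                   # transverse terms
    t2 = 1 / kr + 3j / kr**2 - 3 / kr**3                   # longitudinal terms
    return pref * (t1 - t2 * cos2)

# off-diagonal coupling matrix and its eigenmodes, G m_p = g_p m_p
G = np.zeros((n_dip, n_dip), dtype=complex)
for i in range(n_dip):
    for j in range(n_dip):
        if i != j:
            G[i, j] = green_xx(pos[i] - pos[j])
g_p, m_p = np.linalg.eig(G)

# collective polarizabilities alpha_p(omega) = 1 / (alpha(omega)^-1 - g_p)
hw = np.linspace(1.6, 2.3, 1400)                           # photon energy axis (eV)
alpha_p = d_ge**2 / (eV * (hw0 - hw[:, None] - 1j * hgamma) - d_ge**2 * g_p)

# extinction for homogeneous illumination: sigma_ext ~ sum_p |b_p|^2 Im(alpha_p)
b_p = np.linalg.solve(m_p, np.ones(n_dip))                 # expansion coefficients of E_0
sigma_ext = (k / eps0) * np.sum(np.abs(b_p)**2 * alpha_p.imag, axis=1)

peak_shift = hw[np.argmax(sigma_ext)] - hw0
print(f"collective extinction peak shifted by {1e3 * peak_shift:.0f} meV from the monomer")
```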
For the lattices with inhomogeneously broadened dipoles we modeled the eigenstates and the extinction spectra for 2D lattices in which the individual dipole frequencies vary randomly ω_0 + δω_i; δω_i is obtained through random sampling from a Gaussian distribution with standard deviation σ. As for spatial disorder we report an average over many simulation runs. Average spectra are calculated from 5, 10, 15, and 20 random lattices where the frequencies of individual dipoles vary by σ = 1%, 2%, 3%, and 4% with respect to ω_0 = 2 eV.
Spatial disorder was simulated by varying the dipole positions 𝐫_i + δ𝐫_i in the xy plane around the sites 𝐫_i of the perfect lattice, where δ𝐫_i is obtained through random sampling from a multivariate Gaussian distribution and again averaged over several hundred runs. We performed similar simulations for variations in the dipole orientation (data not shown) with very similar results as for spatial disorder.
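Disorder-averaged spectra follow by repeating this construction for randomly perturbed lattices; since the individual transition energies then differ from site to site, it is simplest to solve the coupled-dipole equations 𝐄_0 = M𝐝 directly at every frequency instead of using the eigenmode expansion. A minimal sketch, reusing the variables and the green_xx helper from the previous snippet (the disorder strengths and the number of averaging runs are illustrative, not the values used for the published figures):

```python
rng = np.random.default_rng(0)

def extinction(positions, hw0_i):
    """Extinction spectrum from a direct solve of the coupled-dipole equations,
    allowing site-dependent positions and transition energies hw0_i (eV)."""
    n = len(positions)
    G = np.zeros((n, n), dtype=complex)
    for i in range(n):
        for j in range(n):
            if i != j:
                G[i, j] = green_xx(positions[i] - positions[j])
    E0, spec = np.ones(n), np.empty(len(hw))
    for s, hw_s in enumerate(hw):
        inv_alpha = eV * (hw0_i - hw_s - 1j * hgamma) / d_ge**2    # 1/alpha_i(omega)
        d = np.linalg.solve(np.diag(inv_alpha) - G, E0)            # M d = E_0
        spec[s] = (k / eps0) * np.imag(E0 @ d)
    return spec

runs = 20                                                           # illustrative averaging

# spatial disorder: Gaussian displacements with standard deviation 10% of the lattice constant
spec_spatial = np.mean([extinction(pos + rng.normal(0.0, 0.10 * a, pos.shape),
                                   np.full(n_dip, hw0)) for _ in range(runs)], axis=0)

# inhomogeneous broadening: 2% Gaussian spread of the individual transition energies
spec_inhom = np.mean([extinction(pos, rng.normal(hw0, 0.02 * hw0, n_dip))
                      for _ in range(runs)], axis=0)
```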
§.§ Exfoliation & Physical Vapor Deposition
As substrates we used the freshly cleaved van der Waals materials hBN and graphite. They were exfoliated by the standard scotch tape approach onto a quartz substrate, resulting in few- and multi-layers with up to a few micrometer thickness. The out-baking process and the monolayer growth were performed in a tube furnace from Heraeus. After baking out the substrate with the 2D material at 400^∘C for an hour to remove dirt and water residues on the substrate surface, the substrate with the van der Waals material was placed at the end of the furnace where the furnace temperature starts to decrease and the N,N'-Dimethyl-3,4,9,10-Perylentetracarboxylicdiimide (MePTCDI) molecules from TCI were placed in an evaporation boat in the middle of the furnace. Argon flow (100 sccm) and vacuum (50 mbar) ensure that the molecules are transported to the growth substrate. The monolayer growth process took 90 min.
§.§ Fluorescence & Raman measurements
The fluorescence microscopy images are recorded with a Nikon Eclipse LV100 microscope. The microscope is equipped with a Nikon DS-Ri2 camera, a Thorlabs bandpass filter (FLH532-4) and a Thorlabs longpass filter (FELH0550) to filter out the desired wavelengths.
For the detection of the fluorescence and Raman spectra an XploRA (Horiba) Raman spectrometer was used. All fluorescence measurements were performed at 532 nm laser excitation with 25 μW laser power and an integration time of 0.1 s. The resonant Raman profile was recorded by tuning the excitation wavelength in steps of 10 nm. For this, we used an Ar-Kr ion-laser (Innova - Coherent) and a continuous ring laser from Radiant Dyes that can be operated with different fluorescent dyes (R6G and DCM) as lasing medium. A 100x objective (NA = 0.9) focused the laser beam on the sample. The Raman scattered light was detected in backscattering configuration by a Jobin-Yvon T64000 spectrometer in single mode (direct path) configuration. To detect the Raman scattered light the spectrometer is equipped with an Andor iDus CCD camera. All Raman spectra were fitted by a Lorentzian line shape. The intensity (integrated area under the peak) was plotted as a function of the excitation wavelength. To account for wavelength-dependent changes in the sensitivity of the Raman setup the Raman intensity was calibrated on benzonitrile that has a constant Raman cross section.<cit.>
§.§ Microabsorbance
The absorption spectra are recorded with a home built micro-absorbance setup <cit.> that is equipped with a broadband light source (NKT - FIU 15). The light is guided to an inverse microscope from Olympus where the light is focused by an 100x objective (NA = 0.9) onto the sample. The transmitted light (T) is collected by a second 100x objective (NA = 0.8) and afterwards guided through an optical fiber to an Avantes spectrometer. The reflected light (R) goes through a beam splitter, is collected by a collimator lens that couples the light into a fiber that guides the light to the spectrometer. The absorbance (A) is calculated from the reflection and transmission spectra as A = 100 % - R - T.
§.§ AFM Measurements
The AFM images were taken with a Cypher ES atomic force microscope from Asylum Research Oxford Instruments Inc. To remove possible organic contamination of the tip, the AFM cantilevers were cleaned with argon plasma in a Zepto instrument (Diener electronics Inc.) at 50% power for one minute. AFM imaging was performed in amplitude modulation (also called tapping) mode. The cantilever (qp-fast, 15 N/m, Nanosensors) was excited in resonance with its third eigenmode around 4.5 MHz with an amplitude of roughly one nanometer or less. Further imaging details are described in Ref. <cit.>. The AFM cell was continuously purged with dry nitrogen.
This work was supported by the European Research Council (ERC) under grant DarkSERS-772 108, the German Science Foundation (DFG) under grant Re2654/13 and Re2644/10, and the SupraFAB Research Center at Freie Universität Berlin. N.S.M. acknowledges support from the German National Academy of Sciences Leopoldina through the Leopoldina Postdoc Scholarship.
§.§ Spectrum of agglomerates of MePTCDI on hexagonal BN
§.§ Transition Dipole Moment
To calculate the transition dipole moment (μ)
μ = √(3ħ e^2/2m_eω_0f) = 8.7 D,
the oscillator strength (f) was estimated by
f = 4m_ecϵ_0/N_Ae^2ln(10)∫ϵ(ν)dν = 0.68,
where ħ is the reduced Planck constant, e the elementary charge, m_e the electron mass, ω_0 the transition frequency, c the speed of light, ϵ_0 the vacuum permittivity, N_A the Avogadro constant, and ϵ(ν) the molar absorption coefficient. The ϵ(ν) in Supplementary Fig. <ref> was determined from the absorption of the MePTCDI molecules dissolved in chloroform.
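The quoted value can be reproduced directly from these two relations; a short numerical sketch (SI constants, with f = 0.68 and ħω_0 = 2.31 eV taken from the text):

```python
import numpy as np

hbar, e, m_e = 1.054571817e-34, 1.602176634e-19, 9.1093837015e-31   # SI
debye = 3.33564e-30            # C m per Debye
f, hw0 = 0.68, 2.31            # oscillator strength and transition energy (eV)

omega0 = hw0 * e / hbar                                   # transition frequency (rad/s)
mu = np.sqrt(3 * hbar * e**2 / (2 * m_e * omega0) * f)    # transition dipole moment (C m)
print(f"mu = {mu / debye:.1f} D")                         # ~8.8 D, consistent with the quoted 8.7 D
```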
|
http://arxiv.org/abs/2306.09836v1
|
20230616133238
|
Distributionally Robust Airport Ground Holding Problem under Wasserstein Ambiguity Sets
|
[
"Haochen Wu",
"Max Z. Li"
] |
math.OC
|
[
"math.OC"
] |
Distributionally Robust Airport Ground Holding Problem under Wasserstein Ambiguity Sets
Haochen Wu and Max Z. Li
July 31, 2023
=========================================================================================
The airport ground holding problem seeks to minimize flight delay costs due to reductions in the capacity of airports. However, the critical input of future airport capacities is often difficult to predict, presenting a challenging yet realistic setting. Even when capacity predictions provide a distribution of possible capacity scenarios, such distributions may themselves be uncertain (e.g., distribution shifts).
To address the problem of designing airport ground holding policies under distributional uncertainty, we formulate and solve the airport ground holding problem using distributionally robust optimization (DRO).
We address the uncertainty in the airport capacity distribution by defining ambiguity sets based on the Wasserstein distance metric. We propose reformulations which integrate the ambiguity sets into the airport ground holding problem structure, and discuss dicretization properties of the proposed model.
We discuss comparisons (via numerical experiments) between ground holding policies and optimized costs derived through the deterministic, stochastic, and distributionally robust airport ground holding problems.
Our experiments show that the DRO model outperforms the stochastic models when there is a significant difference between the empirical airport capacity distribution and the realized airport capacity distribution. We note that DRO can be a valuable tool for decision-makers seeking to design airport ground holding policies, particularly when the available data regarding future airport capacities are highly uncertain.
Airport capacity; Airport ground holding problems; Air traffic management; Distributional uncertainty; Distributionally robust optimization; Wasserstein ambiguity set
§ INTRODUCTION
Demand-capacity imbalances in the air transportation system are a major problem that has significant negative impacts on airlines, passengers, and the environment. There are a variety of potential bottlenecks within the air transportation system, with airport arrival and departure capacities being major components <cit.>. Typically, if airport capacity constraints can be identified proactively, air traffic managers prefer to delay flights before they are airborne, as airborne delay costs (e.g., vectoring, holding) drastically outweigh delay costs incurred on the ground <cit.>. As a real-world example, such demand-capacity balancing actions take the form of Traffic Management Initiatives (TMIs) known as Ground Delay Programs (GDPs) within the US National Airspace System (NAS) <cit.>.
Given known parameters such as the airport capacities, nominal flight times between airports, and flight schedule information, the family of optimization models known as Ground Holding Problems (GHPs) can be solved to obtain optimal rescheduling decisions to minimize incurred airborne and ground delay costs <cit.>. GHPs are an effective approach to reduce the impact of congestion, delaying aircraft on the ground to alleviate en route sectors and terminal airspace, and avoid airborne vectoring or holding. Intuitively, the arrival and departure capacities of airports – which are directly influenced by probabilistic factors such as weather – are critical inputs into GHPs. This is true regardless if the scope of the GHP is at a single arrival airport (the Single Airport Ground Holding Problem, or SAGHP <cit.>), or encompasses a number of different airports (the Multi-Airport Ground Holding Problem, or MAGHP <cit.>).
Previous works have addressed the deterministic formulation of the SAGHP and MAGHP. In the deterministic SAGHP and MAGHP (d-SAGHP and d-MAGHP, respectively), the airport capacity or capacities are assumed to be known with certainty <cit.>. However, in practice, airport capacity is often uncertain and can vary significantly over time, indicating that deterministic GHPs may not be representative of realistic operations. Recognizing the role that uncertainty plays in designing realistic ground holding policies, stochastic versions of the SAGHP and MAGHP (s-SAGHP and s-MAGHP, respectively) have also been examined in previous work: techniques such as two-stage stochastic programming <cit.> and chance-constrained programming
<cit.> have been used to model uncertainty in airport capacities. The results of these models demonstrate that the stochastic models can balance robustness against the potential cost of the derived ground holding policy, and provide a lower airborne cost compared with the deterministic ground holding program.
Even though stochastic GHPs represent significant advancements in the modeling and optimization of ground holding policies, a key component of stochastic GHPs is the process through which the probabilistic airport capacities are estimated. Forecasting and prediction models for weather conditions and runway configurations, two critical factors in determining airport capacities <cit.>, are commonly used to extract the probability distribution of possible airport capacity scenarios <cit.>. Focusing on previous work that proposes airport capacity prediction models, <cit.> states that the performance of the model is impacted by weather forecast uncertainties. Similarly, prediction models from both <cit.> and <cit.> contain model estimation errors. Due to upstream uncertainties in factors such as weather conditions and runway configurations, the predicted airport capacity distributions may not be accurate. Thus, deterministic or stochastic GHPs may produce ground holding policies that are sub-optimal in practice. Moreover, suppose that stochastic GHPs can be accessed by air traffic managers (e.g., through a decision support system interface): In practice, these decision support systems may only have limited information to arrive at a probabilistic set of scenarios for the airport capacity inputs. Due to both upstream uncertainty and incomplete information, even probabilistic airport capacities have an added layer of uncertainty that should be taken into account.
The goal of this paper is to deal with the uncertainty-of-uncertainties challenges discussed above, formulating and solving a stochastic GHP that is robust to inaccuracies in the probabilistic airport capacity scenarios. Specifically, we propose an approach to formulating and solving the GHP via distributionally robust optimization (DRO). In the following sections, we first consider the reformulation of the SAGHP via DRO to illustrate the problem, before extending to the multi-airport case. We conduct several numerical experiments to demonstrate the out-of-sample performance of dr-SAGHP and dr-MAGHP compared with ground holding policies derived from the analogous d-SAGHP, d-MAGHP, s-SAGHP, and s-MAGHP.
§ LITERATURE REVIEW
§.§ Stochastic Air Traffic Flow Management
Several previous works have studied stochastic air traffic flow management (SATFM)<cit.>. <cit.> considers airport capacity and sector capacity in ATFM as random variables and takes ground holding delay, airborne holding delay, and route selection as control variables. The proposed models are formulated as two-stage stochastic integer programs and solved by a stage-wise progressive binary heuristic algorithm. <cit.> formulates the air traffic flow management problem with rerouting as a multi-stage mixed 0-1 problem, assuming uncertainty in airport arrival and departure capacities, air sector capacities, and flight demand. A deterministic equivalent model is derived from a tree-based scheme and solved by a branch-and-cut method. <cit.> applies chance-constrained programming to model SATFM under severe weather conditions. The chance constraints enforce that the probability of each sector capacity being exceeded is less than or equal to α, and the model is solved by a polynomial approximation-based approach. Faced with uncertainty in storm arrival time and duration and their impact on airport and sector capacities, <cit.> proposes a weather-front-based method that introduces a low-dimensional ambiguity set to capture the possible dynamics of the weather front and the resulting capacity drops. The ambiguity set is then incorporated into the deterministic ATFM model, and the deterministic equivalent formulations of the robust and adaptive optimization models are derived.
§.§ Stochastic Airport Ground Holding Problem
In the stochastic airport ground holding problem, weather conditions and airport capacity are generally taken as the random variables, with adverse weather being the root cause of airport capacity drops. <cit.> introduces a close correspondence between a network-flow model for inventory and the stochastic ground holding problem. The formulation of the network-flow model requires estimates of future demand scenarios with corresponding probabilities, which can be modelled as a stochastic program. For the stochastic GHP, <cit.> is concerned with probabilistic Airport Acceptance Rates (AARs), which represent the number of arrival flights an airport is able to receive in a specific period of time. The stochastic GHP is formulated as a network-flow model and solved by linear programming techniques. <cit.> considers uncertain weather clearance times, where airport capacity is reduced until the time period in which the weather clears. A stochastic IP is proposed and solved based on seven possible weather clearance times and different distributions over these scenarios. <cit.> applies a scenario tree to capture possible airport capacity scenarios as the weather condition changes. The authors present a dynamic stochastic IP based on the scenario tree, whose performance degrades as the size of the tree grows. Similar to <cit.>, <cit.> applies chance-constrained techniques to the stochastic MAGHP with uncertain airport capacity. The chance constraints ensure that the probability of the capacity constraints being violated is less than a small value α. For the empirical distributions of the airports in the metroplex, the authors assume the airport landing-capacity distribution is log-concave and convert the historical landing-capacity data into a continuous distribution using a kernel density estimation approach.
§.§ Distributionally Robust Optimization
Compared with previous works in stochastic ATFM and stochastic GHP, distributionally robust models consider all distributions within the ambiguity set instead of only the empirical distribution derived from historical data. This makes DR models more robust when the true distribution of weather conditions or airport capacity deviates from the empirical distribution.
In previous works on distributionally robust optimization methods, two types of ambiguity sets have commonly been used. The first type is the moment-based ambiguity set<cit.>, where the mean of the delay distribution falls in an ellipsoid of size γ_1 and the second-moment matrix of ξ lies in a positive semi-definite cone constrained by γ_2. The mean and the covariance matrix of the predicted distribution are taken as the nominal values upon which the moment-based ambiguity set is built. However, previous work indicates that moment-based ambiguity sets can be overly conservative<cit.>. The second type is the distance-based ambiguity set<cit.>, which offers strong out-of-sample performance. One commonly used distance-based ambiguity set is the Wasserstein ambiguity set. The Wasserstein distance d_p(Q_1,Q_2) measures the distance between an arbitrary distribution and the nominal distribution. Given a user-defined parameter ϵ, all distributions satisfying the constraint d_p(Q_1,Q_2) ≤ ϵ constitute the ambiguity set. After quantifying the ambiguity set, the original problem can be reformulated into a tractable convex counterpart and solved<cit.><cit.>.
On the application side of DRO methods, <cit.> considers the DR appointment scheduling problem, recasting the non-convex part of the generalized form of Wasserstein-ambiguity-set-based models as copositive programs. The copositive program amounts to a tractable semi-definite program, and the deterministic equivalent formulations for l_p-norm (p=1 and p>1) based Wasserstein ambiguity sets are also developed. <cit.> solves the vaccine allocation problem using both SP and moment-based DRO methods. The deterministic equivalent formulation of the proposed DRO model is derived by dual transformation and strong duality, and the experimental results show that the DRO approach yields the least amount of unsatisfied demand. <cit.> proposes a general formulation of the two-stage distributionally robust mixed-integer programming (DRMIP) problem under the Wasserstein ambiguity set and develops a dual-decomposition-based solution method. The discretization properties of DRMIP are also discussed, which proves the existence of the worst-case distribution for DRMIP. In this paper, we assume the airport capacity distribution lies in a Wasserstein ambiguity set and derive the deterministic equivalent formulations for both dr-SAGHP and dr-MAGHP.
§ MODEL DEVELOPMENT
DRO is a variation of stochastic optimization techniques that can handle uncertainty-of-uncertainties by defining an ambiguity set that represents a set of possible distributions for stochastic input parameters. Given this ambiguity set, we are able to derive the deterministic equivalent formulation of the distributionally robust GHP (dr-SAGHP and dr-MAGHP for the single and multi-airport cases, respectively), which is computationally tractable. In this section, we introduce the reformulation methods that derive the deterministic equivalent form of the distributionally robust SAGHP under the Wasserstein ambiguity set, and related properties of the distributionally robust model. The dr-SAGHP is considered first, and the corresponding multi-airport models are formulated based on the single-airport case.
§.§ Deterministic formulation of SAGHP
The notation for the proposed models is as follows: Let T be the set of time periods and t the generic time period, F the set of arrival flights and f the generic flight, Z the set of airports and z the generic airport, and let F(z) be the set of arrival flights of airport z ∈ Z. 𝒞 represents the set of connecting flights, which includes connecting flight pairs (f_1,f_2) ∈𝒞 where f_1 is the preceding flight and f_2 is the successive flight. r_f denotes the scheduled arrival time and T_f the set of available actual arrival times that can be assigned to each flight. Note that the assigned actual arrival time must be no earlier than the scheduled arrival time and no later than the planning horizon, and thus T_f = {r_f,...,T}. x_f,t and y_t are the first-stage and second-stage decision variables, where x_f,t is a binary variable indicating whether flight f lands at time t and y_t (y_z,t in the multi-airport case) denotes the number of holding flights at airport z at time t. C_f and C_h are the unit ground holding cost and unit airborne holding cost, respectively. S_f_1,f_2 denotes the maximum delay that f_1 can suffer without causing any delay to the successive connected flight f_2.
We take the deterministic formulation of the single airport ground holding problem (SAGHP) proposed in <cit.> (the P_2 VBO model) as a reference, where the assumptions are that congestion originates from insufficient arrival capacity and that departure capacity is unlimited. The objective function is the ground holding delay, and the constraints are the capacity, assignment, and coupling constraints.
min_x ∑_f ∈F C_f (∑_t ∈T_f t x_ft - r_f)
s.t. ∑_f ∈Fx_ft ≤K, ∀t ∈T,
∑_t ∈T_fx_ft = 1, ∀f ∈F,
∑_t ∈T_f_1 tx_f_1,t - r_f_1 - S_f_1,f_2 ≤∑_t ∈T_f_2 tx_f_2,t - r_f_2, f_1,f_2 ∈𝒞,
x_ft ∈{0,1}, f ∈F, t ∈T_f.
The d-SAGHP aims to minimize the ground holding cost for all flights (<ref>), which is calculated as the unit ground holding cost times the difference between the actual assigned arrival time and the scheduled arrival time for each flight. The capacity constraint (<ref>) ensures that the total number of arrival flights at time t cannot exceed the airport capacity K. The assignment constraint (<ref>) enforces that for each flight f, there is only one time slot t ∈ T_f assigned to it. The coupling constraint (<ref>) makes sure the ground holding delay placed on the preceding flight f_1 will not cause any delay for the successive connected flight f_2. If the assigned ground holding delay is so long that the minimum turnaround time is violated, then the departure time of the successive flight has to be delayed. The maximum delay is defined by the scheduled departure time of the successive flight minus the sum of the scheduled arrival time and the turnaround time of the preceding flight.
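To make the model concrete, the following Python sketch sets up the d-SAGHP above for a small synthetic instance. All flight data, costs, the capacity, and the connected pair are invented for illustration, and the PuLP modeling library (with its bundled CBC solver) is assumed to be available; any MILP interface could be used instead.

```python
import pulp

# --- illustrative instance (all numbers invented) ---
T = list(range(1, 9))                                     # time periods
flights = {"F1": 1, "F2": 1, "F3": 2, "F4": 3, "F5": 3}   # scheduled arrival r_f
C_f = {f: 1.0 for f in flights}                           # unit ground-holding cost
K = 1                                                     # arrival capacity per period
connections = {("F1", "F4"): 2}                           # (f1, f2) with slack S_{f1,f2}

Tf = {f: [t for t in T if t >= r] for f, r in flights.items()}   # feasible slots T_f

model = pulp.LpProblem("d_SAGHP", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", [(f, t) for f in flights for t in Tf[f]], cat="Binary")

# objective: total ground-holding delay cost
model += pulp.lpSum(C_f[f] * (pulp.lpSum(t * x[(f, t)] for t in Tf[f]) - flights[f])
                    for f in flights)

# capacity constraint: at most K arrivals per period
for t in T:
    model += pulp.lpSum(x[(f, t)] for f in flights if t in Tf[f]) <= K

# assignment constraint: exactly one arrival slot per flight
for f in flights:
    model += pulp.lpSum(x[(f, t)] for t in Tf[f]) == 1

# coupling constraint: delay on f1 must not propagate to the connected flight f2
for (f1, f2), slack in connections.items():
    model += (pulp.lpSum(t * x[(f1, t)] for t in Tf[f1]) - flights[f1] - slack
              <= pulp.lpSum(t * x[(f2, t)] for t in Tf[f2]) - flights[f2])

model.solve(pulp.PULP_CBC_CMD(msg=False))
assigned = {f: next(t for t in Tf[f] if x[(f, t)].value() > 0.5) for f in flights}
print(assigned, "total delay cost:", pulp.value(model.objective))
```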
§.§ Wasserstein Ambiguity Set
In this paper, we consider the Wasserstein distance-based ambiguity set and denote M(Ξ) as the space of all probability distributions ℚ with support Ξ. The Wasserstein distance d_w : M(Ξ) × M(Ξ) →ℝ_≥ 0 is the minimum transportation cost between distributions ℚ_1 ∈ M(Ξ) and ℚ_2 ∈ M(Ξ) <cit.>, and is given explicitly as:
d_w(ℚ_1,ℚ_2) = inf_Π∈𝒟_Π(ξ_1,ξ_2 )∫_Ξ^2ξ_1 - ξ_2 Π(dξ_1,dξ_2),
where Π is a joint distribution of probability measures ξ_1 and ξ_2 with marginals ℚ_1 and ℚ_2, respectively. Π can also be viewed as a transportation plan for moving mass from ℚ_1 to ℚ_2. We denote 𝒟_Π(ξ_1,ξ_2) as the set of all joint distributions on ξ_1 and ξ_2 with marginals ℚ_1 and ℚ_2, and · is an arbitrary norm. In this paper, we use l_2 norm to construct the Wasserstein ambiguity set. Based on the Wasserstein distance, the ambiguity set as defined in <cit.> centered around an empirical distribution P with radius ϵ > 0, denoted as 𝒫_ϵ(P), is given by
𝒫_ϵ(P) := {ℚ∈ M(Ξ) : d_w(P,ℚ) ≤ϵ}.
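When both distributions are supported on finitely many capacity scenarios, the Wasserstein distance above is the optimal value of a small transportation linear program. The sketch below (illustrative scenario values; SciPy assumed to be available) computes d_w between an empirical capacity distribution and a candidate distribution, which can then be compared against the radius ϵ to decide membership in 𝒫_ϵ(P).

```python
import numpy as np
from scipy.optimize import linprog

def wasserstein_discrete(xi_p, p, xi_q, q):
    """Type-1 Wasserstein distance between two discrete distributions
    sum_i p_i*delta(xi_p[i]) and sum_j q_j*delta(xi_q[j]) via the transport LP."""
    n, m = len(p), len(q)
    cost = np.abs(xi_p[:, None] - xi_q[None, :]).ravel()   # |xi_i - xi_j| (l2 norm in 1-D)
    # marginal constraints: rows of the transport plan sum to p, columns to q
    A_eq = np.zeros((n + m, n * m))
    for i in range(n):
        A_eq[i, i * m:(i + 1) * m] = 1.0
    for j in range(m):
        A_eq[n + j, j::m] = 1.0
    b_eq = np.concatenate([p, q])
    res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
    return res.fun

# empirical capacity distribution vs. a shifted candidate distribution (illustrative)
xi_hat = np.array([2.0, 3.0, 4.0]); p_hat = np.array([0.2, 0.5, 0.3])
xi_alt = np.array([1.0, 3.0, 4.0]); q_alt = np.array([0.3, 0.4, 0.3])
print("d_w =", wasserstein_discrete(xi_hat, p_hat, xi_alt, q_alt))
# the candidate belongs to the ambiguity set P_eps(P) iff d_w <= eps
```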
In the dr-SAGHP and the dr-MAGHP, let
{ξ_1, ξ_2, …, ξ_N} be the set of N airport capacity observations with the corresponding estimated probabilities of occurrence {p_1, p_2, …, p_N}.
Based on the empirical airport capacity distribution given by {ξ_1, ξ_2, …, ξ_N} and {p_1, p_2, …, p_N}, the Wasserstein ambiguity set can be built via (<ref>) with conditions (<ref>)-(<ref>) as follows:
∫_Ξ∑_s=1^N u_s(ξ) ‖ξ_s - ξ‖_2 dξ ≤ ϵ, ϵ > 0,
∫_Ξ u_s(ξ) dξ = p_s, ∀ s = 1,2,…,N,
∑_s=1^N u_s(ξ) = P(ξ), ∀ ξ ∈ Ξ,
u_s(ξ) ≥ 0, ∀ ξ ∈ Ξ, s = 1,2,…,N.
According to (<ref>) and (<ref>), the ambiguity set is
𝒫_ϵ(P) := {ℚ∈ M(Ξ) : ∫_Ξ^2 ‖ξ - ξ'‖_2 Π(dξ,dξ') ≤ ϵ},
where ξ and ξ' are probability measures with marginals of ℚ and the empirical distribution P. Π is assumed to be an optimal joint distribution of ℚ and P or the optimal transportation plan. In order to deal with the joint distribution Π(dξ,dξ'), we apply a similar approach proposed in <cit.>, introducing ℚ_s that represents for the conditional distribution of ξ given that ξ' = ξ_s. Therefore, we denote the joint distribution of ℚ and ℙ as Π=∑_s=1^N p(ξ' = ξ_s)ℚ_s,with ℚ_s=p(ξ|ξ' = ξ_s). We then reformulate the ambiguity set as:
𝒫_ϵ(P) := {ℚ∈ M(Ξ) : ∑_s=1^N p(ξ' = ξ_s) ∫_Ξ ‖ξ - ξ_s‖_2 ℚ_s dξ ≤ ϵ},
We introduce an auxiliary variable u_s(ξ) = p(ξ' = ξ_s)ℚ_s, and we can derive (<ref>) by replacing p(ξ' = ξ_s)ℚ_s with u_s(ξ), where ∫_Ξ ‖ξ - ξ_s‖_2 ∑_s=1^N p(ξ' = ξ_s)ℚ_s dξ = ∫_Ξ ∑_s=1^N u_s(ξ) ‖ξ_s - ξ‖_2 dξ. Moreover, with the introduced variable u_s(ξ), we can derive that:
∫_Ξu_s(ξ)dξ = ∫_Ξp(ξ' = ξ_s)ℚ_sdξ
= ∫_Ξp(ξ' = ξ_s)p(ξ|ξ' =ξ_s)dξ
=p(ξ'=ξ_s)∫_Ξp(ξ|ξ'=ξ_s)dξ
= p(ξ' = ξ_s) · 1 = p(ξ' = ξ_s),
and based on the law of total probability we can also derive that:
∑_s=1^Nu_s(ξ) = ∑_s=1^Np(ξ'=ξ_s)p(ξ|ξ'=ξ_s) = P(ξ).
Let p_s = p(ξ' = ξ_s) and P(ξ) be the probability measure of each ξ∈Ξ such that ∫_Ξ dP(ξ) = 1; then we can derive (<ref>) and (<ref>) from (<ref>) and (<ref>), respectively. (<ref>) ensures the introduced variable u_s(ξ) is not negative.
§.§ Deterministic Equivalent Reformulation of dr-GHPs
We begin with examining the distributionally-robust formulation for the single-airport case, i.e., dr-SAGHP. First, we assume that the airport capacity K is a non-negative random variable. We can write down the following two-stage formulation of dr-SAGHP as follows:
min_x { ∑_f ∈F C_f (∑_t ∈T_f t x_ft - r_f) + max_p ∈𝒫_ϵ(P) 𝔼_p[Q(x,ξ)] }
s.t. ∑_t ∈T_fx_ft = 1, ∀f ∈F,
∑_t ∈T_f_1 tx_f_1,t - r_f_1 - S_f_1,f_2 ≤∑_t ∈T_f_2 tx_f_2,t - r_f_2, x_ft ∈{0,1}, f_1,f_2 ∈𝒞,
where the value function Q(x, ξ) = min_y∑_t ∈ T C_hy_t is itself a minimization problem (i.e., the second stage):
min_y ∑_t ∈T C_hy_t (ξ)
s.t. ∑_f ∈Fx_ft ≤K(ξ) - y_t-1(ξ) + y_t(ξ), ∀t ∈T, ξ∈Ξ,
y_0(ξ) = 0, ∀ξ∈Ξ,
y_t(ξ) ≥0, ∀f ∈F, t ∈T, ξ∈Ξ.
The objective function for the first stage (<ref>) is the sum of ground holding delay costs for all flights. Note that the objective value of the inner maximization problem will be realized in the second stage. The assignment constraint (<ref>) and coupling constraint (<ref>) are the same as those discussed for the d-SAGHP, and they are taken as first-stage constraints in the two-stage formulation. This means both the assignment constraint and the coupling constraint must be satisfied in the first stage even if we have no information about the true airport capacity distribution.
In the second stage problem (<ref>)-(<ref>), we aim to minimize the airborne delay cost, which is given by the unit airborne holding cost times the total number of flights under airborne holds at time t. As for the constraints of the second stage problem, (<ref>) is the second-stage capacity constraint, which ensures that the total number of arrival flights at time t cannot exceed the realized airport capacity K(ξ) plus the total number of airborne flights at time t, minus the total number of airborne flights at time t-1. Intuitively, in the second-stage capacity constraint, all airborne holding flights at time t-1 should land at time t, and the maximum number of allowed arrival flights equals the realized airport capacity, minus the number of airborne flights at t-1, plus the number of airborne flights at t. Constraint (<ref>) ensures that the number of airborne flights is zero at t=0 for each scenario, and constraint (<ref>) ensures that the number of airborne flights at each time period cannot be negative.
The second-stage decision variable y_t represents the number of arrival flights joining the arrival queue or conducting airborne holding around the terminal area. Note that we assume the magnitude of the unit cost of airborne delays is greater than that for ground delays.
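Because the airborne-holding variables are non-negative and enter the second-stage objective with a positive cost C_h, the recourse problem above can be evaluated for a fixed first-stage schedule and a realized capacity by a simple forward recursion: in every period the airborne queue absorbs exactly the arrivals that exceed the available capacity. A small sketch of this evaluation (the schedule and costs are illustrative; in the full model the arrivals per period come from the first-stage variables x):

```python
def second_stage_cost(arrivals_per_period, capacity, C_h=5.0):
    """Q(x, xi): minimal airborne-holding cost for a fixed schedule and a realized capacity.
    arrivals_per_period[t] = sum_f x_ft in period t; capacity = K(xi)."""
    y_prev, cost = 0, 0.0
    for arrivals in arrivals_per_period:
        # y_t >= arrivals + y_{t-1} - K(xi), y_t >= 0, and C_h > 0 => take the smallest feasible y_t
        y_t = max(0, arrivals + y_prev - capacity)
        cost += C_h * y_t
        y_prev = y_t
    return cost

# example: 3 flights scheduled in period 2, evaluated for capacity scenarios K = 1, 2, 3
schedule = [0, 3, 0, 0]                                      # sum_f x_ft per period (illustrative)
print([second_stage_cost(schedule, K) for K in (1, 2, 3)])   # [15.0, 5.0, 0.0]
```

The distributionally robust objective then takes the worst-case expectation of this recourse cost over all distributions in the ambiguity set.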
To derive the deterministic equivalent form of dr-SAGHP, we need to deal with the inner maximization problem in (<ref>). We apply the Lagrangian duality technique to transform the inner maximization problem of (<ref>) into a minimization problem, which can be integrated into (<ref>) to derive the deterministic equivalent formulation.
The inner maximization problem of (<ref>) can be reformulated as a semi-infinite program:
min_α,β ϵα+ ∑_s=1^Np_sβ_s
s.t. α ‖ξ_s - ξ‖_2 + β_s ≥ ∑_t ∈T C_h y_t, ∀ ξ ∈ Ξ, s = 1,2,…,N.
By integrating all conditions of the Wasserstein ambiguity set 𝒫_ϵ(P) into the inner maximization problem of (<ref>), a semi-infinite linear programming can be represented as follow:
max_u_s(ξ) ∫_Ξ ∑_s=1^N ∑_t ∈T C_h y_t(ξ) u_s(ξ) dξ
s.t. ∫_Ξ ∑_s=1^N u_s(ξ) ‖ξ_s - ξ‖_2 dξ ≤ ϵ,
∫_Ξ u_s(ξ) dξ = p_s, ∀ s = 1,2,…,N,
∫_Ξ ∑_s=1^N u_s(ξ) dξ = 1.
The objective function (<ref>) is derived from
max_p ∈𝒫_ϵ(P) 𝔼_p[Q(x,ξ)] = max_p ∈𝒫_ϵ(P) ∫_Ξ ∑_t ∈ T C_h y_t(ξ) p dξ
= max_u_s(ξ) ∫_Ξ ∑_s=1^N ∑_t ∈ T C_h y_t(ξ) u_s(ξ) dξ,
where u_s(ξ) is the variable we introduced for constructing the ambiguity set (<ref>)-(<ref>). Constraints (<ref>), (<ref>), and (<ref>) are derived from (<ref>), (<ref>), and (<ref>), respectively.
By assigning Lagrangian dual variables α and β_s to (<ref>) and (<ref>), respectively, we construct the Lagrangian function L(u,α,β). Note that since we can derive (<ref>) from (<ref>) ∫_Ξ∑_s=1^N u_s(ξ) dξ = ∑_s=1^N∫_Ξu_s(ξ) dξ= ∑_s=1^Np_s = 1, then (<ref>) can be eliminated from the Lagrangian function. The Lagrangian function L(u,α,β) can be written as:
L(u,α,β) = max_u_s(ξ) { ∫_Ξ ∑_s=1^N ∑_t ∈ T C_h y_t(ξ) u_s(ξ) dξ - α ( ∫_Ξ ∑_s=1^N u_s(ξ) ‖ξ_s - ξ‖_2 dξ - ϵ ) - ∑_s=1^N β_s ( ∫_Ξ u_s(ξ) dξ - p_s ) }
= αϵ + ∑_s=1^N p_s β_s + max_u_s(ξ) ∫_Ξ ( ∑_s=1^N ∑_t ∈ T C_h y_t(ξ) u_s(ξ) - α ∑_s=1^N u_s(ξ) ‖ξ_s - ξ‖_2 - ∑_s=1^N β_s u_s(ξ) ) dξ.
For the Lagrangian function L(u,α,β), based on the fact that the maximization problem can be decomposed for each ξ∈Ξ, we can write down the Lagrangian dual g(α,β) as follows:
g(α,β) = min_α,β { αϵ + ∑_s=1^N p_s β_s + max_u_s(ξ) { u_s(ξ) ( ∑_s=1^N ∑_t ∈ T C_h y_t(ξ) - α ∑_s=1^N ‖ξ_s - ξ‖_2 - ∑_s=1^N β_s ) } }.
The Lagrangian dual g(α,β) is a minimization problem with α and β_s as decision variables. To ensure that g(α,β) is bounded, the value of the inner maximization problem at the optimal u_s(ξ) must be zero, which requires ∑_s=1^N ∑_t ∈ T C_h y_t(ξ) - α ‖ξ_s - ξ‖_2 - ∑_s=1^N β_s to be non-positive. Therefore, by introducing α ‖ξ_s - ξ‖_2 + ∑_s=1^N β_s ≥ ∑_s=1^N ∑_t ∈ T C_h y_t(ξ), ∀ s = 1,2,…,N as constraints for g(α,β), the Lagrangian dual problem can be written as follows:
g(α,β) = min_α,β αϵ+ ∑_s=1^N p_sβ_s
s.t. α ∑_s=1^N ‖ξ_s - ξ‖_2 + ∑_s=1^N β_s ≥ N ∑_t ∈T C_h y_t(ξ).
Finally, by expanding the summation over the N scenarios, the Lagrangian dual takes the same form as in (<ref>).
The optimal value of the Lagrangian dual problem is the best upper bound on the (maximization) primal problem, and thus the optimal value we obtain from the dual problem is greater than or equal to the optimal value of the primal problem. Only when strong duality holds can one conclude that the optimal values of the dual and primal problems are equal. In order to show that strong duality holds for the dual problem in (<ref>), we need to introduce the definitions of the relative interior and Slater's condition.
The relative interior of a set C is denoted as
𝐫𝐞𝐥𝐢𝐧𝐭(C) = { x ∈ C | B(x,r) ∩𝐚𝐟𝐟(C) ⊆ C for some r > 0},
where 𝐚𝐟𝐟(C) is the affine hull of set C and B(x,r) is a ball with radius r centered at x.
Slater's condition states that for a primal problem of the form
min f_0(x)
s.t. f_i(x) ≤ 0, ∀ i = 1,2,…,m,
Ax = b,
with the domain 𝒟 of f_i(x) given by 𝒟 = ∩_i=1^m dom(f_i), there exists an x ∈𝐫𝐞𝐥𝐢𝐧𝐭(𝒟) such that f_i(x) < 0, i = 1,2,…,m and Ax = b.
Strong duality holds for the semi-infinite program in (<ref>).
According to <cit.>, if the primal problem is convex and satisfies Slater's condition, then strong duality holds for the primal problem. First, the objective function of (<ref>) is an affine function of the continuous decision variables α and β_s, with the radius ϵ of the ambiguity set and the probabilities p_s entering as parameters. The constraints of (<ref>) are also convex, since both the norm term and the sum of airborne delay costs are convex. Therefore, (<ref>) is a convex optimization problem.
Moreover, Slater's condition requires a strictly feasible x ∈ 𝐫𝐞𝐥𝐢𝐧𝐭(𝒟). In (<ref>), α ≥ 0 and β_s ∈ ℝ for all scenarios s = 1,2,…,N. Therefore, there exist sufficiently large α, β_s ∈ 𝐫𝐞𝐥𝐢𝐧𝐭(𝒟), ∀ s = 1,2,…,N such that α ‖ξ_s - ξ‖_2 + β_s > f(ξ), ∀ s = 1,2,…,N, and thus Slater's condition holds for (<ref>). Since this problem is convex and Slater's condition holds, strong duality follows for (<ref>).
Since strong duality holds for the inner maximization problem, we can incorporate its dual, which is a minimization problem, into the two-stage model of dr-SAGHP.
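Before doing so, the following minimal sketch illustrates this primal–dual equivalence numerically on a toy instance with scalar capacity scenarios, a discretized support and a fixed second-stage cost. It is not part of the model above: all data values are arbitrary illustrations, and scipy is assumed to be available.

```python
import numpy as np
from scipy.optimize import linprog

# Toy instance: N empirical scenarios, K discretized support points.
rng = np.random.default_rng(0)
N, K, eps = 3, 5, 0.6
xi_emp = np.array([2.0, 4.0, 6.0])             # empirical capacity scenarios xi_s
p = np.array([0.5, 0.3, 0.2])                  # their probabilities p_s
xi_sup = np.linspace(1.0, 7.0, K)              # discretized support points xi^k
f = rng.uniform(1.0, 10.0, size=K)             # fixed second-stage cost f(xi^k) = sum_t C_h y_t(xi^k)
d = np.abs(xi_emp[:, None] - xi_sup[None, :])  # transport distances ||xi_s - xi^k||

# Primal: max sum_{s,k} f_k u_{s,k}  s.t.  sum_{s,k} d_{s,k} u_{s,k} <= eps,
#         sum_k u_{s,k} = p_s,  u >= 0.  (linprog minimizes, so negate f.)
c_primal = -np.tile(f, N)                      # variables ordered as u_{s,k}, s-major
A_ub = d.reshape(1, -1)
A_eq = np.kron(np.eye(N), np.ones((1, K)))     # one marginal constraint per scenario s
primal = linprog(c_primal, A_ub=A_ub, b_ub=[eps], A_eq=A_eq, b_eq=p,
                 bounds=(0, None), method="highs")

# Dual: min eps*alpha + sum_s p_s beta_s  s.t.  alpha*d_{s,k} + beta_s >= f_k,  alpha >= 0.
c_dual = np.concatenate(([eps], p))            # variables [alpha, beta_1, ..., beta_N]
rows, rhs = [], []
for s in range(N):
    for k in range(K):
        row = np.zeros(1 + N)
        row[0] = -d[s, k]                      # -alpha*d_{s,k} - beta_s <= -f_k
        row[1 + s] = -1.0
        rows.append(row)
        rhs.append(-f[k])
dual = linprog(c_dual, A_ub=np.array(rows), b_ub=np.array(rhs),
               bounds=[(0, None)] + [(None, None)] * N, method="highs")

print("primal optimum:", -primal.fun)          # worst-case expected airborne cost
print("dual optimum:  ", dual.fun)             # should coincide (strong duality)
```

Since both problems are finite linear programs, their optimal values coincide, mirroring the strong duality argument above.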
The deterministic equivalent form of (<ref>) with the Wasserstein ambiguity set (<ref>) can be reformulated as follows:
min_x,y,α,β ∑_f ∈ F ( C_f ∑_t ∈ T_f (t x_ft - r_f) ) + ϵα + ∑_s=1^N p_s β_s
s. t. α ‖ξ_s - ξ‖_2 + β_s ≥ ∑_t ∈ T C_h y_t(ξ), ∀ ξ ∈ Ξ, s = 1,2,…,N,
∑_t ∈ T_f x_ft = 1, ∀ f ∈ F,
∑_f ∈ F x_ft ≤ K(ξ) - y_t-1(ξ) + y_t(ξ), ∀ t ∈ T, ξ ∈ Ξ,
∑_t ∈ T_f_1 t x_f_1,t - r_f_1 - s_f_1,f_2 ≤ ∑_t ∈ T_f_2 t x_f_2,t - r_f_2, x_ft ∈ {0,1}, ∀ f ∈ F, t ∈ T_f,
y_0(ξ) = 0, y_t(ξ) ≥ 0, α ≥ 0, ∀ t ∈ T, ξ ∈ Ξ.
The semi-infinite program (<ref>) can be formulated with the epigraph variable θ as:
min_x,y,θ∈ℝ ∑_f ∈F ( C_f ∑_t ∈T_f (tx_ft - r_f ) ) + θ
s. t. θ ≥max_ℙ ∈𝒫𝔼_ℙ[Q(x,ξ)],
∑_t ∈T_f x_ft = 1, ∀f ∈F,
∑_t ∈ T_f_1 t x_f_1,t - r_f_1 - s_f_1,f_2 ≤ ∑_t ∈ T_f_2 t x_f_2,t - r_f_2, ∀ t ∈ T_f,
x_ft ∈{0,1}, ∀f ∈F, t ∈T_f.
According to Lemmas <ref> and <ref>, constraint (<ref>) is equivalent to:
θ≥𝔼_ℙ[Q(x,ξ)] , ∀ℙ ∈𝒫_ϵ(P)
α ‖ξ_s - ξ‖_2 + β_s ≥ ∑_t ∈ T C_h y_t(ξ), ∀ ξ ∈ Ξ, s = 1,2,…,N
By integrating (<ref>) and (<ref>) into (<ref>), the derived deterministic equivalent formulation matches (<ref>).
Note that since the equivalent form (<ref>) is a semi-infinite program, it is very difficult to solve due to the fact that there is a large amount of constraints resulting from the generic support Ξ(the infinite support). However, specific to the airport ground holding problem, the support would be all possible values that the airport capacity could attain. Typically in real-world operations, these capacity values are non-negative integers representing the total number of flights allowed to land during a given time. In terms of the optimization model, this allows us to discretize the support, and reduce the amount of constraints, thus resulting in a tractable DR model. Therefore, being able to reduce and discretize the deterministic equivalent formulation is important to solving the dr-SAGHP and dr-MAGHP problems.
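To make this concrete, the following minimal sketch builds a small, discretized toy instance of the deterministic equivalent with the PuLP modelling library (assumed available together with its default CBC solver). The flights, costs, scenario probabilities and capacity profiles are invented illustrative values, and the flight-connection coupling constraint is omitted for brevity.

```python
import pulp
import numpy as np

# --- toy data (illustrative only) ------------------------------------------
T = list(range(1, 7))                       # time slots
flights = {"f1": 1, "f2": 2, "f3": 2}       # flight -> scheduled arrival slot r_f
T_f = {f: [t for t in T if t >= r] for f, r in flights.items()}
C_f, C_h = 1.0, 3.0                         # ground and airborne delay costs
eps = 0.5                                   # Wasserstein radius
xi_emp = np.array([[2, 2, 2, 2, 2, 2],      # empirical capacity scenarios xi_s
                   [1, 1, 2, 2, 2, 2]], dtype=float)
p = [0.6, 0.4]                              # scenario probabilities p_s
Xi_k = [np.full(6, 1.0), np.full(6, 2.0), np.full(6, 3.0)]  # discretized support

# --- model -------------------------------------------------------------------
m = pulp.LpProblem("dr_SAGHP", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", [(f, t) for f in flights for t in T_f[f]], cat="Binary")
y = pulp.LpVariable.dicts("y", [(t, k) for t in T for k in range(len(Xi_k))], lowBound=0)
alpha = pulp.LpVariable("alpha", lowBound=0)
beta = pulp.LpVariable.dicts("beta", range(len(p)))

ground_delay = pulp.lpSum(C_f * (pulp.lpSum(t * x[(f, t)] for t in T_f[f]) - r)
                          for f, r in flights.items())
m += ground_delay + eps * alpha + pulp.lpSum(p[s] * beta[s] for s in range(len(p)))

for f in flights:                                           # one arrival slot per flight
    m += pulp.lpSum(x[(f, t)] for t in T_f[f]) == 1
for k, xi in enumerate(Xi_k):                               # capacity / airborne balance
    for t in T:
        arrivals = pulp.lpSum(x[(f, t)] for f in flights if t in T_f[f])
        y_prev = y[(t - 1, k)] if t > 1 else 0
        m += arrivals <= xi[t - 1] - y_prev + y[(t, k)]
for s in range(len(p)):                                     # dual-feasibility cuts
    for k, xi in enumerate(Xi_k):
        dist = float(np.linalg.norm(xi_emp[s] - xi))
        m += alpha * dist + beta[s] >= pulp.lpSum(C_h * y[(t, k)] for t in T)

m.solve(pulp.PULP_CBC_CMD(msg=False))
print("status:", pulp.LpStatus[m.status], " objective:", pulp.value(m.objective))
print({f: next(t for t in T_f[f] if x[(f, t)].value() > 0.5) for f in flights})
```

The dual-feasibility cuts in the last loop are exactly the constraints α ‖ξ_s - ξ‖_2 + β_s ≥ ∑_t C_h y_t(ξ) instantiated for every point of the discretized support, which is what makes the model finite.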
§.§ Discretization Properties of dr-GHPs
The main difficulty in the deterministic equivalent formulations of dr-GHPs is the infinite support Ξ, which induces infinitely many constraints and thus makes dr-GHPs intractable. Therefore, we first need to discretize and reduce the infinite support, and then derive the optimal ground holding policy. The deterministic equivalent formulation of dr-SAGHP is a semi-infinite program, hence we can establish its discretization properties with approaches for semi-infinite programs. To show the discretization properties of dr-SAGHP, we first need to introduce finite reducibility and weak discretizability of semi-infinite programs.
Denote by Ξ_k a finite subset of the support Ξ. A semi-infinite program is said to be finitely reducible if there exists a finite subset Ξ_k ⊂ Ξ such that v(Ξ_k) = v(Ξ), where v(Ξ) is the optimal value of the semi-infinite program with support Ξ.
A semi-infinite program is said to be weakly discretizable if there exists a sequence of Ξ_k's such that lim_k →∞v (Ξ_k ) = v(Ξ).
An optimization problem 𝒮 is said to be solvable if 𝒮 is feasible and bounded (i.e., 𝒮 has a finite optimal solution).
Then, before showing that (<ref>) is finitely reducible and weakly discretizable, we first need to establish these properties for (<ref>).
(<ref>) is finitely reducible and weakly discretizable.
From Theorem 7 in <cit.>, for a semi-infinite program 𝒮 with optimal value 𝒮^* and its dual problem 𝒟 with optimal value 𝒟^*, the following statements are equivalent:
* 𝒮 is finitely reducible;
* 𝒟 is solvable and 𝒟^* = 𝒮^*;
* 𝒟 is solvable and 𝒮 is weakly discretizable.
For (<ref>), as discussed in Lemma <ref>, there exist α and β_s for all s = 1,2, ,N such that (<ref>) is strictly feasible. This means that the feasible region of (<ref>) is non-empty and the objective value is bounded. Based on the duality theorem, the dual of (<ref>) is feasible. Therefore, applying Theorem 7 from <cit.>, we conclude that (<ref>) is finitely reducible and weakly discretizable.
The discretization properties of (<ref>) are not obvious. In order to show that (<ref>) is finitely reducible and weakly discretizable, we first re-construct a discretized Wasserstein ambiguity set, using the fact that (<ref>) is finitely reducible and weakly discretizable. We then derive the discretized deterministic equivalent formulation of the two-stage dr-SAGHP based on the discretized ambiguity set, and show that (<ref>) is finitely reducible and weakly discretizable.
(<ref>) is finitely reducible and weakly discretizable.
We showed through Lemma <ref> that (<ref>) is finitely reducible. Based on Definition <ref>, there exists a subset Ξ_k ⊂Ξ such that v(Ξ_k) = v(Ξ). Therefore, the Wasserstein ambiguity set can be discretized as:
∑_ξ ∈ Ξ_k ∑_s=1^N u_s(ξ) ‖ξ_s - ξ‖_2 ≤ ϵ, ϵ > 0,
∑_ξ ∈ Ξ_k u_s(ξ) = p_s, ∀ s = 1,2,…,N,
∑_s=1^N u_s(ξ) = P(ξ_k), ∀ ξ ∈ Ξ_k,
u_s(ξ) ≥ 0, ∀ ξ ∈ Ξ_k, s = 1,2,…,N.
According to Proposition <ref>, the deterministic equivalent form based on the discretized ambiguity set can be derived as follows:
min_x,y,α,β ∑_f ∈ F ( C_f ∑_t ∈ T_f (t x_ft - r_f) ) + ϵα + ∑_s=1^N p_s β_s
s. t. α ‖ξ_s - ξ‖_2 + β_s ≥ ∑_t ∈ T C_h y_t(ξ), ∀ ξ ∈ Ξ_k, s = 1,2,…,N,
∑_t ∈ T_f x_ft = 1, ∀ f ∈ F,
∑_f ∈ F x_ft ≤ K(ξ) - y_t-1(ξ) + y_t(ξ), ∀ t ∈ T, ξ ∈ Ξ_k,
∑_t ∈ T_f_1 t x_f_1,t - r_f_1 - s_f_1,f_2 ≤ ∑_t ∈ T_f_2 t x_f_2,t - r_f_2, ∀ f ∈ F, t ∈ T_f, x_ft ∈ {0,1},
y_0(ξ) = 0, y_t(ξ) ≥ 0, α ≥ 0, ∀ t ∈ T, ξ ∈ Ξ_k.
Since the deterministic equivalent form (<ref>) shares the same objective value as its discretized counterpart (<ref>), because (<ref>) is finitely reducible and weakly discretizable, we conclude that (<ref>) is also finitely reducible and thus weakly discretizable.
Proposition <ref> showed that for the deterministic equivalent formulation, there exists a subset Ξ_k of Ξ such that the objective value under the discretized ambiguity set would be the same as (<ref>). We can now show, via Corollary <ref>, the existence of a worst-case scenario (i.e., airport capacity distribution) for dr-SAGHP and dr-MAGHP.
Given an arbitrary subset of the infinite support Ξ_m ⊂Ξ, let z(Ξ_m ) be the objective value of the discretized dr-SAGHP and z(Ξ_k ) be the objective value of the dr-SAGHP with finitely reduced support Ξ_k ⊂Ξ. Then, z(Ξ_m ) ≤ z(Ξ_k ).
Suppose that Ξ_k is the subset of the infinite support that gives the same objective value as Ξ. Then, based on Proposition <ref>, we have that z(Ξ_k ) = z(Ξ). For z(Ξ_m ), there are three cases:
* Case 1: Ξ_k ⊂Ξ_m;
* Case 2: Ξ_m ⊂Ξ_k;
* Case 3: Ξ_m ⊄Ξ_k and Ξ_k ⊄Ξ_m.
For Case 1, since the dr-SAGHP is weakly discretizable, then lim_m →∞ z(Ξ_m ) = z(Ξ) = z(Ξ_k), i.e., for any superset Ξ_m of Ξ_k, we have that z(Ξ_m ) = z(Ξ_k ). For Case 2, since Ξ_m ⊂Ξ_k, the dr-SAGHP with Ξ_m is a relaxation problem of dr-SAGHP with Ξ_k. For the relaxation of a minimization problem, the objective value will be less than or equal to the original problem, and thus z(Ξ_m ) ≤ z(Ξ_k ). Finally, for Case 3, the dr-SAGHP with Ξ_m is also a relaxation of the dr-SAGHP with Ξ, then z(Ξ_m ) ≤ z(Ξ) = z(Ξ_k ). Therefore, given the arbitrary subset support Ξ_m, we have that z(Ξ_m)≤ z(Ξ_k), as desired.
Practically, we note that it is likely difficult to locate the exact subset support which gives the same objective value as the generic support. Hence, following from Corollary <ref>, if the chosen discretized support Ξ_m is a subset of Ξ_k, then a lower objective value will be obtained, and the objective value decreases as the subset Ξ_m shrinks in size. When Ξ_m is large, the objective value of dr-SAGHP with Ξ_m will be close to that of dr-SAGHP with infinite support, but at the cost of introducing more constraints. When a smaller subset support Ξ_m is chosen, the model is computationally easier to solve (due to the reduced number of constraints), but the derived ground holding policy may not be robust, e.g., if the realized airport capacity does not belong to the chosen subset. This provides an interesting trade-off which hinges on the choice of Ξ_m for dr-SAGHP and dr-MAGHP: a trade-off between the complexity and the robustness of the distributionally robust versions of the ground holding problem. An open question, to be explored in future work, revolves around the selection of an appropriate support for airport capacity distributions. Such a selection heuristic will likely be data-driven, based on historical observations of, e.g., airport capacity profile evolution, dynamics, and trends (see, e.g., <cit.>) or factors that directly impact airport capacities, such as the airport runway configuration (see, e.g., <cit.>).
§.§ Multi-Airport Ground Holding Problem
We now expand the problem to the case with multiple airports. In order to extend the dr-SAGHP to the dr-MAGHP, we assume that for each airport z ∈ Z within a network of airports, the airport capacity distribution is contained within a Wasserstein ambiguity set. Note that this does not change the reformulation methods from Proposition <ref>; the implementable version of the dr-MAGHP can be derived analogously, as was done for the dr-SAGHP. To simplify the multiple airport case, we also assume that the airport capacity distributions are independent from each other; we anticipate relaxing this assumption in future work by, e.g., estimating joint distributions through copula distributions, which account for heterogeneous marginal airport distributions and non-linear correlation structures. Under these assumptions, each airport capacity distribution has a corresponding discretized Wasserstein ambiguity set based on its empirical capacity distribution. The deterministic equivalent formulation of the dr-MAGHP is given as follows:
min_x,y,α,β ∑_z ∈ Z ∑_f ∈ F(z) C_f ( ∑_t ∈ T_f(z) t x_ft - r_f ) + ϵ ∑_z ∈ Z α_z + ∑_z ∈ Z ∑_s=1^N p_z,s β_z,s
s. t. α_z ‖ξ_z,s - ξ‖_2 + β_z,s ≥ ∑_t ∈ T C_h y_z,t(ξ), ∀ ξ ∈ Ξ_k, z ∈ Z, s = 1,2,…,N,
∑_t ∈ T_f(z) x_ft = 1, ∀ f ∈ F, z ∈ Z,
∑_f ∈ F(z) x_ft ≤ K_z(ξ) - y_z,t-1(ξ) + y_z,t(ξ), ∀ t ∈ T, ξ ∈ Ξ_k, z ∈ Z,
∑_t ∈ T_f_1 t x_f_1,t - r_f_1 - s_f_1,f_2 ≤ ∑_t ∈ T_f_2 t x_f_2,t - r_f_2, x_ft ∈ {0,1}, (f_1,f_2) ∈ 𝒞,
y_z,0(ξ) = 0, y_z,t(ξ) ≥ 0, α_z ≥ 0, ∀ t ∈ T, ξ ∈ Ξ_k, z ∈ Z.
The objective function (<ref>) is the sum of the ground holding delay over all airports and the objective of the dual of the inner maximization problem of the two-stage dr-MAGHP. α_z and β_z,s are the dual variables for the ambiguity set of each airport. Similar to what we showed in Lemma <ref>, (<ref>) ensures that the Lagrangian functions of the inner maximization problems for each airport are bounded. (<ref>) ensures that each flight at each airport is assigned exactly one arrival time slot. (<ref>) is the second-stage airborne constraint: for each airport, the number of arriving flights must be less than or equal to the realized airport capacity at time t plus the number of airborne flights at time t minus the number of airborne flights at time t-1. (<ref>) is the coupling constraint, ensuring that for each connected flight pair in 𝒞, the ground holding policy placed on the preceding flight f_1 does not cause any delay for the successive flight f_2. (<ref>) requires that the number of airborne flights at time t_0 at each airport is zero, and that the number of airborne flights in each time period and the dual variable α_z for each airport are non-negative.
§ COMPUTATIONAL EXPERIMENTS AND RESULTS
In this section, several experiments are carried out to demonstrate the performance of dr-SAGHP and dr-MAGHP when the realized distributions of airport capacities differ from the empirical distributions, i.e., the out-of-sample performance of the DR models.
§.§ Experiment setup
The scheduled and actual arrival times are collected from the Bureau of Transportation Statistics (BTS). When testing the performance of dr-SAGHP and dr-MAGHP, historical data of Hartsfield-Jackson Atlanta International Airport (ATL) is used for dr-SAGHP, and data of the 30 airports in the "FAA Core 30" is used for dr-MAGHP. To derive the empirical and realized distributions of airport capacity, we take the actual throughput as an approximation of airport capacity, and let {ξ_1,ξ_2,…,ξ_N} be all possible throughputs of a single time slot obtained from the operational data and {p_1,p_2,…,p_N} the corresponding probabilities. Moreover, the empirical airport capacity distribution is built from the operational data of a single day, while the realized capacity distribution is built from a whole month of operational data. This allows us to train the DR models on smaller data sets and inspect their out-of-sample performance on the testing data sets, which is aligned with practical airport operations, where only a limited amount of data can be used by traffic managers to make a decision. To calibrate the size of the Wasserstein ambiguity set, a discrete set Ω of candidate radii is constructed. For each ϵ ∈ Ω, samples of different sizes are generated to test the performance of the ground holding policies derived by the deterministic, stochastic and DR GHP models.
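The sketch below illustrates, under assumed placeholder column names rather than the actual BTS schema, how such an empirical capacity distribution and a discretized support could be derived from per-flight arrival records with pandas and NumPy.

```python
import pandas as pd
import numpy as np

# Hypothetical arrivals table with one row per flight; column names are placeholders.
arrivals = pd.read_csv("atl_arrivals.csv", parse_dates=["actual_arrival"])

# Approximate capacity by the realized throughput per time slot (here: per hour).
throughput = (arrivals
              .set_index("actual_arrival")
              .resample("1h")
              .size())

# Empirical distribution {(xi_s, p_s)}: distinct throughput values and their frequencies.
counts = throughput.value_counts().sort_index()
xi = counts.index.to_numpy(dtype=float)        # possible capacities xi_1..xi_N
p = (counts / counts.sum()).to_numpy()         # probabilities p_1..p_N

# Discretized support: integer capacities from the minimum to the maximum, step one flight.
support = np.arange(int(xi.min()), int(xi.max()) + 1)

print(list(zip(xi, p)))
print("support:", support)
```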
§.§ Results for dr-SAGHP
Based on the theory of stochastic programming, the SP model generates the best solution under the empirical distribution. Also, when ϵ takes a relatively small value, the radius of the Wasserstein ball is small, so the family of distributions in the ambiguity set is very close to the empirical distribution. When evaluating the total cost of the ground holding policy derived by d-SAGHP, we use the average airport capacity as the capacity of the deterministic model. The discretized support we take is an increasing array from the minimum capacity to the maximum capacity with a step of one flight. According to <ref>, when ϵ is less than 0.75, dr-SAGHP generates the same ground holding policy as s-SAGHP. As ϵ increases, the total cost of dr-SAGHP becomes larger than that of s-SAGHP and d-SAGHP, and s-SAGHP has the lowest cost on the empirical distribution. These results are aligned with the theory and are reasonable for the empirical distribution. When evaluating the out-of-sample performance of dr-SAGHP on the test set, evaluation functions are built to calculate the total cost of the ground holding policies provided by each model. We take samples with sample sizes N = {50,100,500,1000} to observe the performance of each model.
According to <ref> and <ref>, when ϵ is small, the total cost and standard deviation of dr-SAGHP and SP-SAGHP are the same. When ϵ is larger than 0.04, the total cost of dr-SAGHP decreases dramatically and becomes lower than that of SP-SAGHP and det-SAGHP. Also, for N = 50, 100, 500, 1000, the average cost of det-SAGHP is lower than that of SP-SAGHP. This means that, for the SAGHP, if there is a significant difference between the empirical and the true distribution of airport capacity, the ground holding policy derived by the deterministic model may incur a lower cost than the SP model.
§.§ Results for dr-MAGHP
Similar to the experiments for the SAGHP, we take the average airport capacity of each airport in the network as the deterministic capacity, and take the increasing array from the minimum to the maximum capacity as the support for each airport.
<ref> shows that, compared with dr-SAGHP, dr-MAGHP is more sensitive to the selection of ϵ. On the empirical distribution, the total cost of dr-MAGHP equals that of s-MAGHP when ϵ = 0, and the total cost keeps growing as ϵ increases. The performance of dr-MAGHP on the empirical distribution is aligned with the results for dr-SAGHP and with stochastic programming theory.
The performance of dr-MAGHP on the testing distributions differs from that of dr-SAGHP, where the total cost remains constant beyond a specific ϵ. For dr-MAGHP, as shown in <ref>, the total cost first decreases as ϵ grows and then increases again once the radius of the Wasserstein ball exceeds a specific value. We take two sets of airport capacity samples of size 100, and in both cases dr-MAGHP has the lowest cost when ϵ = 0.5. Compared with the standard deviation of dr-MAGHP shown in <ref>, the standard deviations of s-MAGHP in the two samples are 29580.37 and 25010, and those of d-MAGHP are 26312.05 and 23301.61.
In order to inspect the performance of dr-MAGHP when ϵ = 0.5, we take 10 samples with sizes ranging from 1 to 10 and use the first and third quartiles to demonstrate the variance in total cost. <ref> shows that, when ϵ = 0.5, the overall performance of dr-MAGHP on the testing distribution is better than that of s-MAGHP, and d-MAGHP has the highest total cost on the testing set. Also, for each sample size, dr-MAGHP with ϵ = 0.5 has the lowest variance. This means that if an appropriate ϵ is selected for dr-MAGHP and the realized distribution differs from the empirical distribution, dr-MAGHP can give decision makers a more robust ground holding policy that reduces the total cost.
§ CONCLUSION
In this paper, we have presented a novel approach for optimizing airport ground holding policies using distributionally robust optimization (DRO). Our work builds on previous work on the (stochastic) ground holding problem, which has shown that stochastic programming (SP) can provide effective solutions but requires detailed and accurate knowledge of the underlying distribution of airport capacity. In contrast, DRO can handle situations where this distribution is uncertain or rapidly changing by defining an ambiguity set of probability distributions and optimizing for the worst case within this set. We use a Wasserstein-ball ambiguity set to define the set of probability distributions that we consider plausible. By using this metric, we construct a distributionally robust optimization problem that accounts for the uncertainty in the airport capacity. The Lagrangian dual transformation is used to convert the DR-GHP under the Wasserstein set into an equivalent deterministic formulation. The discretization properties of semi-infinite programs ensure the existence of a worst-case distribution and enable us to solve the deterministic equivalent form of dr-SAGHP with a discretized support. The experimental results for both dr-SAGHP and dr-MAGHP demonstrate that when ϵ is very small, the performance of the DR models is very close to or even the same as that of the SP models. Moreover, the DR models outperform the SP and deterministic models out of sample when specific values of the Wasserstein radius ϵ are selected, as shown by sampling airport capacities from the testing set. Our findings have important implications for airport ground holding policies in practice. In real-world operations, the true distribution of airport capacity is often unknown or rapidly changing, making it difficult to apply the SP model. Our work shows that the DRO model can be more robust in these situations and can help reduce the total cost induced by ground holding delay and airborne delay. In future work, instead of approximating airport capacity solely from historical airport throughput, we will apply machine-learning-based methods to predict airport capacity distributions. Based on the predicted distribution, we will construct an ambiguity set around it, apply dr-MAGHP to assess its performance under practical airport operations, and investigate the importance of distributionally robust models in air transportation systems.
|
http://arxiv.org/abs/2306.02418v1
|
20230604175020
|
ContraBAR: Contrastive Bayes-Adaptive Deep RL
|
[
"Era Choshen",
"Aviv Tamar"
] |
cs.LG
|
[
"cs.LG",
"cs.AI",
"stat.ML"
] |
ContraBAR: Contrastive Bayes-Adaptive Deep RL
Era Choshen, Aviv Tamar
Technion, Haifa, Israel
Correspondence to: Era Choshen, [email protected]
In meta reinforcement learning (meta RL), an agent seeks a Bayes-optimal policy – the optimal policy when facing an unknown task that is sampled from some known task distribution. Previous approaches tackled this problem by inferring a belief over task parameters, using variational inference methods. Motivated by recent successes of contrastive learning approaches in RL, such as contrastive predictive coding (CPC), we investigate whether contrastive methods can be used for learning Bayes-optimal behavior. We begin by proving that representations learned by CPC are indeed sufficient for Bayes optimality.
Based on this observation, we propose a simple meta RL algorithm that uses CPC in lieu of variational belief inference.
Our method, ContraBAR, achieves comparable performance to state-of-the-art in domains with state-based observation and circumvents the computational toll of future observation reconstruction, enabling learning in domains with image-based observations. It can also be combined with image augmentations for domain randomization and used seamlessly in both online and offline meta RL settings.
§ INTRODUCTION
In meta reinforcement learning (meta RL), an agent learns from a set of training tasks how to quickly solve a new task, sampled from a similar distribution as the training set <cit.>. A formal setting for meta RL is based on the Bayesian RL formulation, where a task corresponds to a particular Markov decision process (MDP), and there exists some prior distribution over MDPs <cit.>. Under this setting, the optimal meta RL policy is well defined, and is often referred to as a Bayes-optimal policy <cit.>.
In contrast to the single MDP setting, where an optimal policy can be Markovian – taking as input the current state and outputting the next action, the Bayes-optimal policy must take as input the whole history of past states, actions, and rewards, or some sufficient statistic of it <cit.>. A popular sufficient statistic is the belief – the posterior probability of the MDP parameters given the observed history. For small MDPs, the belief may be inferred by directly applying Bayes rule, and approximate dynamic programming can be used to calculate an approximately Bayes-optimal policy <cit.>. However, this approach quickly becomes intractable for large or continuous MDPs.
Recently, several studies proposed to scale up belief inference using deep learning, where the key idea is to leverage a variational autoencoder (VAE, ) formulation of the problem, in which the posterior is approximated using a recurrent neural network <cit.>. While this approach has demonstrated impressive results on continuous control benchmarks <cit.>, it also has some limitations. Training a VAE is based on a reconstruction loss, in this case, predicting the future observations given the current history, which can be difficult to optimize for visually rich observations such as images.
Furthermore, variational algorithms such as VariBAD <cit.> reconstruct entire trajectories, restricting application to image-based domains due to memory limitations.
As an alternative to VAEs, contrastive learning has shown remarkable success in learning representations for various domains, including image recognition and speech processing <cit.>. Rather than using a reconstruction loss, these approaches learn features that discriminate between similar observations and dissimilar ones, using a contrastive loss such as the InfoNCE in contrastive predictive coding (CPC, ). Indeed, several recent studies showed that contrastive learning can learn useful representations for image based RL <cit.>, outperforming representations learned using VAEs. Furthermore, <cit.> showed empirically that in partially observed MDPs, representations learned using CPC <cit.> are correlated with the belief. In this work, we further investigate contrastive learning for meta RL, henceforth termed CL meta RL, and aim to establish it as a principled and advantageous alternative to the variational approach.
Our first contribution is a proof that, given certain assumptions on data collection and the optimization process of CPC, representations learned using a variant of CPC are indeed a sufficient statistic for control, and therefore suffice as input for a Bayes-optimal policy. Our second contribution is a bound on the suboptimality of a policy that uses an approximate sufficient statistic, learned by CPC, in an iterative policy improvement scheme where policies between iterations are constrained to be similar. This result relaxes the assumptions on the optimization and data collection in the first proof. Building on this result, we propose a simple meta RL algorithm that uses a CPC based representation to learn a sufficient statistic.
Our third contribution is an empirical evaluation of our method that exposes several advantages of the contrastive learning approach. In particular, we show that:
* For state-based observations, CL meta RL is on par with the state-of-the-art VariBAD <cit.>
* For image-based observations, CL meta RL significantly outperforms the variational approach, and is competitive with RNN based methods <cit.>
* In contrast to the variational approach, CL Meta RL is compatible with image augmentations and domain randomization.
* Our method works well in the online and offline meta RL setting.
Overall, our results establish CL meta RL as a versatile and competitive approach to meta RL.
§ BACKGROUND AND PROBLEM FORMULATION
In this section we present our problem formulation and relevant background material.
§.§ Meta RL and POMDPs
We define a Markov Decision Process (MDP) <cit.> as a tuple ℳ=(𝒮,𝒜,𝒫,ℛ), where 𝒮 is the state space, 𝒜 is the action space, 𝒫 is the transition kernel and ℛ is the reward function. In meta RL, we assume a distribution over tasks, where each task is an MDP ℳ_i=(𝒮,𝒜,𝒫_i,ℛ_i), where the state and action spaces are shared across tasks, and 𝒫_i,ℛ_i are task specific and drawn from a task distribution, which we denote 𝒟(𝒫,ℛ). At a given time t, we denote by (s_0, a_0,r_0, s_1,a_1,r_1,…,s_t)=h_t∈ℋ_t the current history, where ℋ_t is the space of all state-action-reward histories until time t. Our aim in meta RL is to find a policy π = {π_0, π_1,…}, where π_t:ℋ_t →𝒜, which maximizes the following objective:
𝔼_π[∑_t=0^∞γ^t r_t],
where the expectation 𝔼_π is taken over the transitions s_t+1∼𝒫(·|s_t,a_t), the reward r_t = ℛ(s_t, a_t), the actions a_t ∼π(·|h_t) and the uncertainty over the MDP parameters 𝒫,ℛ∼𝒟(𝒫,ℛ). We assume a bounded reward r_t ∈ [-R_max, R_max], R_max > 0 with probability one.
Meta RL is a special case of the more general Partially Observed Markov-Decision Process (POMDP), which is an extension of MDPs to partially observed states.
In the POMDP for meta RL, the unobserved variables are 𝒫,ℛ, and they do not change over time. We define ℋ_t for POMDPs as above, except that states are replaced by observations distributed according to o_t+1∼ U(o_t+1|s_t+1,a_t). As shown in <cit.>, the optimal policy for a POMDP can be calculated using backwards dynamic programming for every possible h_t ∈ℋ_t. However, as explained in <cit.>, this method is computationally intractable in most cases, as ℋ_t grows exponentially with t.
§.§ Information States and BAMDPs
Instead of the intractable space of histories, sufficient statistics can succinctly summarize all the necessary information for optimal control. One popular sufficient statistic is the posterior state distribution or belief P(s_t|h_t). Conditions for a function to be a sufficient statistic, also termed information state, were presented by <cit.> and are reiterated here for completeness:
Let {𝒵_t}_t=1^T be a pre-specified collection of Banach spaces. A collection {σ_t: ℋ_t →𝒵_t }_t=1^T of history compression functions is called an information generator if the process {z_t }_t=1^T satisfies the following properties, where h_t ∈ℋ_t, and σ_t(h_t)=z_t ∈𝒵_t:
P1 For any time t and for any h_t ∈ℋ_t, a_t ∈𝒜 we have:
𝔼[r_t | h_t, a_t] = 𝔼[r_t | z_t=σ_t(h_t), a_t ].
P2 For any time t, and for any h_t ∈ℋ_t, a_t ∈𝒜, and any Borel subset B of 𝒵_t+1 we have:
P( z_t+1∈ B | h_t,a_t ) = P ( z_t+1∈ B | z_t=σ_t(h_t),a_t ).
Intuitively, information states compress the history without losing predictive power about the next reward, or the next information state.
To solve a POMDP, one can define a Bayes-Adaptive MDP (BAMDP)– an MDP over the augmented state space of 𝒮×ℬ, where ℬ={𝒵_t }_t=1^T is the space of the information state. This idea was introduced by <cit.> for the belief. Here, we use the term BAMDP more generally, referring to any information state. The optimal policies for BAMDPs are termed Bayes-optimal and optimally trade-off between exploration and exploitation, which is essential for maximizing online return during learning. Unfortunately, in most cases computing the Bayes-optimal policy is intractable because the augmented space is continuous and high-dimensional. <cit.> proposed to approximate the Bayes-optimal policy by using deep neural networks to learn an information state (belief), and conditioning an RL agent on the learned augmented space; here we follow this approach.
§ RELATED WORK
Our focus in this work is learning a Bayes-optimal policy for meta RL. We recapitulate the current approaches to meta RL with a focus on approaches that potentially yield Bayes-optimal policies.
The methods in <cit.> learn neural network policies that can quickly be fine-tuned to new tasks at test time via gradient updates. These methods do not optimize for Bayes-optimal behavior, and typically exhibit significantly suboptimal test-time adaptation.
A different approach is to learn an agent that directly infers the task at test time, and conditions the policy based on the inferred task. Typically, past interactions of the agent with the environment are aggregated to a latent representation of the task.
<cit.> follow a posterior-sampling approach, which is not Bayes-optimal <cit.>; in this work we focus on methods that can achieve Bayes-optimality.
<cit.> propose memory-based approaches, which <cit.> proves to approximate Bayes-optimal agents. <cit.> also approximate Bayes-optimal agents with a history-based representation, using a variational approach.
<cit.> learn an approximately Bayes-optimal agent, where privileged information – a task descriptor – is used to learn a sufficient statistic.
We explore an alternative approach that lies at the intersection of meta RL and contrastive learning. Different from memory-based methods such as RL^2 <cit.>, and similarly to VariBAD, we learn a history based embedding separately from the policy.
However, unlike variational methods, we learn the task representation using contrastive learning.
Contrastive learning has been used to learn representations for input to a meta RL policy. FOCAL <cit.> uses distance metric learning to learn a deterministic encoder of transition tuples to perform offline RL. They operate under the relatively restrictive assumption that each transition tuple (s, a, s', r) is uniquely identified by a task. The authors followed up with FOCAL++, in which batches of transition tuples (not necessarily from the same trajectory) are encoded to a representation that is optimized with MoCo <cit.>, a variant of CPC, alongside an intra-task attention mechanism meant to robustify task inference <cit.>. The MBML method in <cit.> proposes an offline meta RL method that uses the triplet loss to learn embeddings of batches of transition tuples from the same task, with the same probabilistic and permutation-invariant architecture of <cit.>. <cit.> propose embedding windows of transition tuples as probabilistic latent variables, where the windows are cropped from different trajectories. The embeddings are learned with MoCO <cit.> by contrasting them in probabilistic metric space, where positive pairs are transition windows that come from the same batch. The algorithm is presented as a general method to learn representations for context-based meta RL algorithms, but in practice all results are shown with PEARL <cit.>. In a similar line of work, <cit.> encode batches of transitions as a product of Gaussian factors and contrast the embeddings with MoCO <cit.>, with positive pairs being embedded transition batches from the same task, as opposed to the same trajectory as in <cit.>. As in <cit.>, results are shown with a posterior sampling meta RL algorithm. While we also investigate contrastive learning for meta RL, we make an important distinction: all of the works above embed transition tuples and not histories, and therefore cannot represent information states, and cannot obtain Bayes-optimal behavior. In contrast, in our work, we draw inspiration from <cit.>, who used a glass-box approach to empirically show that contrastive learning can be used to learn the belief in a POMDP. We cast this idea in the Bayesian-RL formalism, and show both theoretically and empirically, that contrastive learning can be used to learn Bayes-optimal meta RL policies.
§ METHOD
In this section we show how to use contrastive learning to learn an information state representation of the history, and use it as input to an RL agent. We give a brief description of CPC <cit.> followed by our meta RL algorithm. We then prove that our method does indeed learn an information state.
§.§ Contrastive Predictive Coding
CPC <cit.> is a contrastive learning method that uses noise contrastive estimation
<cit.> to discriminate between positive future observations o_t+k^+, where t is the current time step, and negative observations o_t+k^-. First, an encoder g generates an embedding for each observation in a sequence of observations from a trajectory τ until time t, {z_i=g(o_i)}_i=1^t. Second, an autoregressive model g_AR summarizes z_≤ t, the past t observations in latent space, and outputs a latent c_t. The model is trained to discriminate between future observations o_t+k^+ and K negative observations {o_t+k^-,i}_i=1^K given c_t. Given a set X={o_t+k^+,o_t+k^-,1,…,o_t+k^-,K} containing one positive future observation sampled according to P(o_t+k^+|c_t) and K negative observations sampled from a proposal distribution P(o_t+k^-), the InfoNCE loss is:
ℒ_InfoNCE =
-𝔼_X [ log ( exp(f(c_t,o_t+k^+)) / ( exp(f(c_t,o_t+k^+)) + ∑_i=1^K exp(f(c_t,o_t+k^-,i)) ) ) ],
where f is a learnable function that outputs a similarity score. The model components f, g, g_AR are learned by optimizing the loss ℒ_InfoNCE.
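For concreteness, a minimal PyTorch sketch of this objective is given below. It assumes the scores f(c_t,·) have already been computed and arranged so that, for every anchor, the positive candidate sits at index 0 followed by the K negatives; the tensor shapes and names are illustrative.

```python
import torch
import torch.nn.functional as F

def info_nce(scores: torch.Tensor) -> torch.Tensor:
    """scores: (batch, 1 + K) similarity scores f(c_t, o), with the positive
    candidate at index 0 and K negatives after it."""
    targets = torch.zeros(scores.shape[0], dtype=torch.long, device=scores.device)
    # Cross-entropy over the candidates equals -log softmax at the positive index,
    # i.e. the InfoNCE loss above.
    return F.cross_entropy(scores, targets)

# Example: batch of 8 anchors, 16 negatives each.
print(info_nce(torch.randn(8, 17)).item())
```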
§.§ ContraBAR Algorithm
We will now introduce our CPC based meta RL algorithm, which is depicted in <ref>, and explain how CPC is used to learn a latent representation of the history.
We begin by noting that we use the term observation throughout the text, in line with <cit.>; however, in our case it means state, reward and action, o_t = {s_t,r_t-1,a_t-1}, when talking about “observation history", and state and reward, o_t+k={s_t+k,r_t+k-1}, when talking about “future observations".
The CPC formulation described above is based on predicting future states in an uncontrolled system without rewards. We now modify it to learn a sufficient statistic for meta RL. We assume that data is collected at each training iteration m by some data collection policy π_m and added to a replay buffer 𝒟={τ_i}_i=1^N containing trajectories from previous data collection policies {π_1,…,π_m-1}; we note that length of the trajectories may vary. At each learning iteration, a batch of M trajectories is sampled, and for each trajectory and time t the negative observations are sampled from the remaining M-1 trajectories in the batch. As in CPC, we define c_t to be a function of the observation history until time t, but we add to f as input the future k-1 actions, as in a controlled system the future observation o_t+k=(s_t+k,r_t+k-1) depends on the controls. Our f can therefore now be written as
f(c_t, o_t+k,a_t:t+k-1). We implement this modification as in <cit.> by means of an additional autoregressive component, a GRU g_action that receives actions as input and takes c_t as its initial hidden state.
Given the adjustments described above, each batch B used as input to our algorithm contains the following:
* The observation history until time t in some trajectory τ
* Future observations from time t+k
* Observations o_t+k^- from the remaining M-1 trajectories sampled from 𝒟.
We rewrite the InfoNCE loss for the meta RL setting explicitly; for ease of notation we write f(c_t, o_t+k^+,a_t:t+k-1) as f^+ and f(c_t, o_t+k^-,i,a_t:t+k-1) as f^-,i:
ℒ_M =
-𝔼_B [ log ( exp(f^+) / ( exp(f^+) + ∑_i=1^M-1 exp(f^-,i) ) ) ],
where the expectation is over the batches of positive and negative observations sampled from 𝒟, as described above.
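To make the batch construction concrete, the sketch below shows one way the candidates could be assembled: for each of the M trajectories, the positive is its own future observation, and the negatives are the observations at the same offset taken from the other M-1 trajectories. Here score_fn is a placeholder for the learned f (history embedding plus action GRU), not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def contrabar_loss(c_t, future_obs, future_actions, score_fn):
    """c_t:            (M, d_c)    history embeddings, one per trajectory
       future_obs:     (M, d_o)    true future observation o_{t+k} of each trajectory
       future_actions: (M, k, d_a) actions a_{t:t+k-1} of each trajectory
       score_fn: batched callable (c_t, candidate_obs, actions) -> (M,) scores
    """
    M = c_t.shape[0]
    # Column 0 holds each trajectory's own (positive) observation; columns 1..M-1
    # hold the observations of the other trajectories in the batch (negatives).
    order = torch.stack([(torch.arange(M) + shift) % M for shift in range(M)], dim=1)
    candidates = future_obs[order]                                    # (M, M, d_o)
    scores = torch.stack(
        [score_fn(c_t, candidates[:, j], future_actions) for j in range(M)], dim=1
    )                                                                 # (M, M)
    targets = torch.zeros(M, dtype=torch.long, device=scores.device)  # positive at index 0
    return F.cross_entropy(scores, targets)
```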
§.§ Learning Information States with CPC
We now show that integrating contrastive learning with meta RL is a fundamentally sound idea. We shall prove that our algorithm presented in <ref> learns a representation of the history that is an information state,
by showing that the latent encoding satisfies the properties of an information state P1, P2 as defined by <cit.> and reiterated above in <ref>.
We first define the notion of a “possible” history, which we use in Assumption <ref>.
Let P_M denote a probability distribution over MDPs and let P_m,π(h_t) be the probability of observing history h_t under policy π in MDP m. We say that h_t is a possible history if there exists a policy π and an MDP m such that P_M(m) > 0 and P_m,π(h_t) > 0.
Next, we make the following assumption, which states that the policy collecting the data covers the state, reward and action space.
Let the length of the longest possible history be T. Let h_t, where t ≤ T, be a history and let P_𝒟(h_t) denote the probability of observing a history in the data 𝒟. If h_t is a possible history, then P_𝒟(h_t)>0.
Assumption <ref> is necessary to claim that the learned CPC representation is a sufficient statistic for every possible history. In Section <ref> we discuss a relaxation of this assumption, using approximate information states.
Let Assumption <ref> hold. Let g^*,g^*_AR,f^* jointly minimize ℒ_M(g,g_AR,f). Then the context latent representation
c_t=g^*_AR(z_≤ t) satisfies conditions P1, P2 and is therefore an information state.
The full proof is provided in <ref>; we next provide a sketch. The main challenge in our proof lies in proving the following equality:
P(s_t+1,r_t|h_t,a_t) = P(s_t+1,r_t|c_t,a_t).
Given the equality in <ref>, proving P1,P2 is relatively straightforward. We prove <ref> by expanding the proof in <cit.>, which shows that the InfoNCE loss upper bounds the negative mutual information between o_t+k^+ and c_t (in the CPC setting). In our case, we show that
ℒ_M≥log(M-1) - I(s_t+1,r_t;c_t|a_t),
where I(·; ·) denotes mutual information. Thus, by minimizing the loss in <ref>, we maximize the mutual information I(s_t+1,r_t;c_t|a_t). Due to the Markov property of the process, the mutual information in (<ref>) cannot be greater than I(s_t+1,r_t;h_t|a_t), which leads to Equation <ref>.
§.§ Learning Approximate Information States with CPC
We next investigate a more practical setting, where there may be errors in the CPC learning, and the data does not necessarily satisfy Assumption <ref>. We aim to relate the CPC error to a bound on the suboptimality of the resulting policy.
In this section, we consider an iterative policy improvement algorithm with a similarity constraint on consecutive policies, similar to the PPO algorithm we use in practice <cit.>. We shall bound the suboptimality of policy improvement, when data for training CPC is collected using the previous policy, denoted π_k.
In light of Eq. <ref>, we assume the following error due to an imperfect CPC representation:
There exists an ϵ such that for every t≤ T,
I(s_t+1,r_t;c_t|a_t) ≥ I(s_t+1,r_t;h_t|a_t) - ϵ, where the histories are distributed according to policy π_k.
The next theorem provides our main result.
Let Assumption <ref> hold for some representation c_t. Consider the distance function between two distributions D(P_1(x),P_2(x)) = max_x | P_1(x)/P_2(x) |.
We let r̂(c_t,a_t)=𝔼[r_t|c_t,a_t] and P̂(c'|c_t,a_t)=𝔼[1(c_t+1=c')|c_t,a_t] denote an approximate reward and transition kernel, respectively. Define the value functions
Q̂_t(c_t, a_t) = r̂(c_t, a_t) + ∑_c_t+1P̂(c_t+1|c_t,a_t) V̂_t+1(c_t+1)
V̂_t(c_t) = max_π: D(π(c_t), π_k(c_t)) ≤β∑_aπ(a) Q̂_t(c_t, a),
for t≤ T, and V̂_T(c_T) = 0, and
the approximate optimal policy
π̂(c_t)∈_π: D(π, π_k(c_t)) ≤β∑_aπ(a) Q̂_t(c_t, a).
Let the optimal policy π^*(h_t) be defined similarly, but with h_t replacing c_t in (<ref>) and (<ref>).
Then we have that
𝔼^π^*[ ∑_t=0^T-1 r(s_t,a_t) ] - 𝔼^π̂[ ∑_t=0^T-1 r(s_t,a_t) ] ≤
ϵ^1/3 R_max T^2 (√(2) + 4β^T).
The dynamic programming recurrence in Equation (<ref>) defines the optimal policy that is conditioned on c_t (and not h_t), and is restricted to be β-similar to the previous policy π_k. The theorem bounds the loss in performance of such a policy compared to a policy that is conditioned on the full history (yet still restricted to be β-similar to π_k). The proof of Theorem <ref> builds on the idea of an approximate information state <cit.> and is detailed in Appendix <ref>.
§.§ ContraBAR Architecture
We now describe several design choices in our ContraBAR implementation.
History Embedding
We now describe the specific architecture used to implement our algorithm, also depicted in <ref>. We use a non-linear encoder to embed a history of actions, rewards and states and run it through a GRU to generate the hidden state for the current time-step c_t. The latent c_t is then used to initialize the action-gru g_action, which is fed future actions as input – the resulting hidden state is then concatenated with either a positive observation-reward pair, or a negative one and used as input to a projection head that outputs a score used in <ref>.
We note that given a random sampling of negative observations, the probability of sampling a positive and negative observation that share the same state is low. Consequently, for environments where s_t+k can be estimated via s_t,a_t,…, a_t+k without h_t, c_t need only encode information regarding s_t to allow the action-gru to learn to distinguish between positive and negative observations. This renders c_t uninformative about the reward and transition functions and thus unhelpful for optimal control. An example of this is a set of deterministic environments that differ only in reward functions. The action-gru can learn to predict s_t+k via s_t,a_t,…, a_t+k, only requiring c_t to encode information regarding s_t and not the reward function. One way to circumvent this is hard negative mining, i.e using negative samples that are difficult to distinguish from the positive ones. Another solution, relevant for the case of varying reward functions, is to generate a negative observation by taking the state and action from the positive observation and recalculating the reward with a reward function sampled from the prior. In practice, we found that a simple alternative is to omit the action-gru. This prevents the easy estimation of s_t+k and requires c_t to encode information regarding the reward and transition function. We found this worked well in practice for the environments we ran experiments on, including those with varying transitions. We expand on these considerations in Appendix <ref>.
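A minimal PyTorch sketch of the history-embedding components described above is given below: an observation-reward-action encoder, a GRU producing c_t, an optional action GRU initialized with c_t, and a projection head that scores candidate future observations. Layer sizes and module names are illustrative guesses rather than the exact architecture used in the experiments.

```python
import torch
import torch.nn as nn

class HistoryEncoder(nn.Module):
    def __init__(self, obs_dim, act_dim, emb_dim=64, hidden_dim=128):
        super().__init__()
        # Encodes (s_t, r_{t-1}, a_{t-1}) tuples into embeddings z_t.
        self.obs_enc = nn.Sequential(nn.Linear(obs_dim + 1 + act_dim, emb_dim), nn.ReLU())
        self.gru = nn.GRU(emb_dim, hidden_dim, batch_first=True)         # produces c_t
        self.action_gru = nn.GRU(act_dim, hidden_dim, batch_first=True)  # optional future-action roll-out
        # Scores a candidate (s_{t+k}, r_{t+k-1}) against the (action-rolled) context.
        self.score_head = nn.Sequential(
            nn.Linear(hidden_dim + obs_dim + 1, hidden_dim), nn.ReLU(), nn.Linear(hidden_dim, 1)
        )

    def embed_history(self, hist):            # hist: (B, t, obs_dim + 1 + act_dim)
        z = self.obs_enc(hist)
        _, c_t = self.gru(z)                  # c_t: (1, B, hidden_dim)
        return c_t.squeeze(0)

    def score(self, c_t, future_actions, candidate):
        # future_actions: (B, k-1, act_dim); candidate: (B, obs_dim + 1)
        if future_actions.shape[1] > 0:       # roll the context forward through the actions
            _, h = self.action_gru(future_actions, c_t.unsqueeze(0))
            ctx = h.squeeze(0)
        else:                                 # action GRU omitted, as discussed above
            ctx = c_t
        return self.score_head(torch.cat([ctx, candidate], dim=-1)).squeeze(-1)

# Example: batch of 4 histories of length 10 in a toy task with obs_dim=3, act_dim=2.
enc = HistoryEncoder(obs_dim=3, act_dim=2)
c_t = enc.embed_history(torch.randn(4, 10, 3 + 1 + 2))
scores = enc.score(c_t, torch.randn(4, 2, 2), torch.randn(4, 3 + 1))
```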
RL Policy
The history embedding portion of the algorithm described above is learned separately from the policy and can be done online or offline. The policy, which can be trained with an RL algorithm of the user's choice, is now conditioned on the current state s_t as well as c_t – the learned embedding of h_t. We chose to use PPO <cit.> for the online experiments and SAC <cit.> for the offline experiment – in line with VariBAD and BOReL <cit.>.
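The policy network itself only needs to treat the learned embedding as additional input. The sketch below shows a generic Gaussian actor conditioned on the concatenation of s_t and c_t; it illustrates the interface, not the exact PPO/SAC architecture used in the experiments.

```python
import torch
import torch.nn as nn

class BeliefConditionedActor(nn.Module):
    def __init__(self, obs_dim, belief_dim, act_dim, hidden_dim=128):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(obs_dim + belief_dim, hidden_dim), nn.Tanh(),
            nn.Linear(hidden_dim, hidden_dim), nn.Tanh(),
        )
        self.mu = nn.Linear(hidden_dim, act_dim)
        self.log_std = nn.Parameter(torch.zeros(act_dim))

    def forward(self, s_t, c_t):
        h = self.body(torch.cat([s_t, c_t], dim=-1))   # augmented BAMDP state (s_t, c_t)
        return torch.distributions.Normal(self.mu(h), self.log_std.exp())

actor = BeliefConditionedActor(obs_dim=3, belief_dim=128, act_dim=2)
action = actor(torch.randn(4, 3), torch.randn(4, 128)).sample()
```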
§ EXPERIMENTS
In our experiments, we shall demonstrate that
* ContraBAR learns approximately Bayes-optimal policies
* ContraBAR is on par with SOTA for environments with state inputs
* ContraBAR scales to image-based environments
* Augmentations can be naturally incorporated into ContraBAR and
* ContraBAR can work in the offline setting.
We compare ContraBAR to state-of-the-art approximately Bayes-optimal meta RL methods. In the online setting, we compare against VariBAD <cit.>, RL^2 <cit.>, and the recent modification of RL^2 by <cit.> which we refer to as RMF (recurrent model-free).
In the offline setting, we compare with BOReL <cit.>.
<cit.> and <cit.> already outperform posterior sampling based methods such as PEARL <cit.>, therefore we do not include such methods in our comparison.
Finally, we note that using VariBAD <cit.> with image-based inputs is currently computationally infeasible due to memory constraints, and as such we did not use it as a baseline – we explain this issue further in <ref>. Other variational approaches, which require a reconstruction of the future observations, are subject to similar memory constraints. Instead, we compared our algorithm against RL^2 <cit.>, which works with images. We evaluate performance similarly to <cit.>, by evaluating per episode return for 5 consecutive episodes with the exception of the offline setting where we adapted our evaluation to that of BOReL.
§.§ Qualitative Near Bayes-Optimal Behavior
We begin with a qualitative demonstration that ContraBAR can learn near Bayes-optimal policies. As calculating the exact Bayes-optimal policy is mostly intractable, we adopt the approach of <cit.>: for deterministic domains with a single sparse reward, the Bayes-optimal solution is essentially to search all possible reward locations so as to maximally reduce uncertainty, and then go directly to the goal in subsequent episodes. Thus, we can identify whether a policy is approximately Bayes-optimal by inspecting its trajectory.
Figure <ref> displays rollouts from a trained policy in the Gridworld and Semi-Circle domains, demonstrating near Bayes-optimal behavior similar to VariBAD <cit.>.
§.§ Results for Problems with State Observations
We compare ContraBAR with VariBAD and RMF, the current state-of-the-art on MuJoCo locomotion tasks <cit.>, commonly used in meta RL literature. We use the environments considered in <cit.>, namely the Ant-Dir, AntGoal, HalfCheetahDir, HalfCheetahVel, Humanoid and Walker environments. <ref> shows competitive performance with the current SOTA on all domains. Note that rewards in
these environments are dense, so in principle, the agent only needs a few exploratory
actions to infer the task by observing the rewards it receives. Indeed, we see that ContraBAR is able to quickly adapt within the first episode, with similar performance in subsequent episodes.
§.§ Scaling Belief to Image-Based Inputs
We show that ContraBAR can scale to image domains, which are computationally expensive, by running our algorithm on three image-based domains with varying levels of difficulty and sources of uncertainty: [label=(*)]
* Reacher-Image – a two-link
robot reaching an unseen target located somewhere on the diagonal of a rectangle, with sparse rewards
* Panda Reacher – a Franka Panda robot tasked with placing the end effector at a goal on a 2d semi-circle, where the vertical position of the goal (z coordinate) is fixed; adapted from the Reacher task in Panda Gym <cit.>
* Panda Wind – The same environment as Panda Reacher, except that the transitions are perturbed with Gaussian noise sampled separately for each task.
For a more detailed description of each environment see <ref>.
Image-Based Reward: For our image-based experiments, we found that learning in image-based domains with sparse reward was difficult when the reward was embedded separately (as in the state observation domains), and concatenated with the image embedding. We hypothesized that this might be an issue of differing scales between the scalar rewards and image inputs, but we observed that standard normalization techniques such as layer norm <cit.> did not help. Instead, we opted for a different approach that embeds the reward as an explicit part of the image. To implement this idea, we exploited the fact that in all our domains, the reward is sparse and binary, and we add a colored strip to a fixed place in the image when non-zero reward is received. Extending this idea to non-binary reward is possible, for example, by controlling the color of the strip.
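A minimal sketch of this reward-to-pixels trick: when a non-zero (binary) reward is received, a fixed strip of the frame is overwritten with a constant colour before the frame is passed to the encoder. Image size, strip location and colour are arbitrary illustrative choices.

```python
import numpy as np

def embed_reward_in_image(frame: np.ndarray, reward: float) -> np.ndarray:
    """frame: (H, W, 3) uint8 image; returns a copy with a reward strip added."""
    out = frame.copy()
    if reward != 0:                      # binary sparse reward in our domains
        out[:4, :, :] = np.array([255, 255, 255], dtype=np.uint8)  # white strip on top rows
    return out

obs = embed_reward_in_image(np.zeros((64, 64, 3), dtype=np.uint8), reward=1.0)
```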
Our results are displayed in <ref>. For the Reacher environment, ContraBAR is slightly outperformed by RL^2, whereas in Panda Reacher and Panda Reacher Wind, ContraBAR outperforms RL^2 by a large margin. Notice that in contrast to the dense-reward domains of Section <ref>, in these sparse-reward tasks the agent gains by exploring for the goal in the first episode. Evidently, the plots show significantly higher reward from the second episode onward.
Glass-box Approach
To further validate that our algorithm learns a sound belief representation, we follow a glass-box approach similar to that of <cit.>.
First, we used ContraBAR to learn an information state for the Panda Reacher environment. Second, we use the trained agent to create a dataset of trajectories, including the agent's belief at each time step of every trajectory. We then trained an MLP-based binary classifier, which takes (x,y) and the information state c_t as input and predicts whether the goal in the trajectory is indeed (x,y). In <ref> we see the visualization of the classifier's prediction at different points along the trajectory; We see that the predictions coincide with the belief we expect the agent to hold at each step, thus validating the soundness of our belief representation.
§.§ ContraBAR with Domain Randomization
Despite the high fidelity of modern simulators, when deployed in the real world, image-based algorithms learned in simulation can only be accurate
up to the differences between simulation and reality – the
sim-to-real gap. This motivates us to learn a belief representation that is robust to such differences, and in the following we will show that our algorithm can indeed learn such an information state. Robustification to irrelevant
visual properties via random modifications is termed domain randomization <cit.>. We employ domain randomization in a similar fashion to <cit.> wherein we modify the past and future observations (without the rewards) in the trajectories with a mapping 𝒯: 𝒮→𝒮 that randomly shifts the RGB channels of the images. These modified trajectories are used to learn the history embedding c_t, with the hope that it will be invariant to different color schemes in the environment. We show the strength of such modifications by training two agents with ContraBAR on the Panda Reacher environment – one receives images modified by 𝒯 and the other does not. We then evaluate each agent's performance on different color schemes, which are kept static for evaluation. The results as well as the environments can be seen in <ref>. Note that while the belief may be robustified separately with augmentations, the policy must be robust to such changes as well. To do so, we used the data-regularized actor-critic method from <cit.> where the policy π_θ and value function V_ϕ are regularized via two additional loss terms,
G_π = KL [ π_θ(a|s) ‖ π_θ(a|T(s)) ],
G_V = ( V_ϕ(s) - V_ϕ(T(s)) )^2,
where T: 𝒮→𝒮 randomly modifies the image.
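The sketch below illustrates both ingredients: a random per-channel colour shift playing the role of 𝒯, and the two regularization terms G_π and G_V. The policy and value modules are placeholders assumed to return a torch distribution and a scalar per state, respectively.

```python
import torch

def channel_shift(images: torch.Tensor, max_shift: float = 0.2) -> torch.Tensor:
    """Randomly shift each RGB channel of a (B, 3, H, W) batch, clamped to [0, 1]."""
    shift = (torch.rand(images.shape[0], 3, 1, 1, device=images.device) - 0.5) * 2 * max_shift
    return (images + shift).clamp(0.0, 1.0)

def drac_regularizers(policy, value, states):
    """G_pi = KL[pi(.|s) || pi(.|T(s))],  G_V = (V(s) - V(T(s)))^2  (batch means)."""
    aug = channel_shift(states)
    with torch.no_grad():
        pi_s = policy(states)            # distribution under the clean observation
        v_s = value(states)
    pi_aug = policy(aug)
    g_pi = torch.distributions.kl_divergence(pi_s, pi_aug).mean()
    g_v = ((v_s - value(aug)) ** 2).mean()
    return g_pi, g_v
```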
We emphasize that domain randomization, as applied here, is not naturally compatible with variational belief inference methods. The reason is that when the loss targets reconstruction of the modified observation, the learned embedding cannot be trained to be invariant to the modification 𝒯.
§.§ Offline ContraBAR
We show that as in VariBAD <cit.>, the disentanglement of belief and control allows us to reframe the algorithm within the context of offline meta RL, as was done in <cit.>. First, we use ContraBAR to learn a history embedding c_t from an offline dataset. Note that no specific change is required to our algorithm – we simply treat the offline dataset as the replay buffer for ContraBAR. Second, we perform state relabeling as described in <cit.>: for each trajectory τ_i of length T, i.e (s_0^i,a_o^i,r_0^i,…,s_T^i), we embed each partial t-length history h_t as c_t, and transform each s_t^i to s_t^+,i=(s_t^i,c_t^i) as in the BAMDP formulation. We then learn a policy with SAC <cit.> on the transformed dataset. We show competitive results with BOReL <cit.> in <ref>. Unfortunately we were not able to find an offline adaptation of RMF to use as an additional baseline.
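A sketch of the state-relabeling step is given below: given a trained history encoder, every state of an offline trajectory is augmented with the embedding of the history up to that point, yielding the (s_t, c_t) inputs on which SAC is then trained. The encoder interface and the history packing are assumptions for illustration.

```python
import torch

def relabel_trajectory(states, actions, rewards, encoder, belief_dim):
    """states: (T+1, obs_dim), actions: (T, act_dim), rewards: (T, 1).
    Returns augmented states (T+1, obs_dim + belief_dim) for BAMDP-style training."""
    T = actions.shape[0]
    augmented = []
    for t in range(T + 1):
        if t == 0:
            c_t = torch.zeros(belief_dim)    # placeholder prior embedding before any data
        else:
            # One possible packing of h_t: tuples (s_{i+1}, r_i, a_i) for i < t;
            # the exact packing should follow the encoder's training convention.
            hist = torch.cat([states[1:t + 1], rewards[:t], actions[:t]], dim=-1).unsqueeze(0)
            c_t = encoder.embed_history(hist).squeeze(0)
        augmented.append(torch.cat([states[t], c_t], dim=-1))
    return torch.stack(augmented)
```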
§ CONCLUSIONS
We proved that ContraBAR learns a representation that is a sufficient statistic of the history. Building on this, we presented what is, to the best of our knowledge, the first approximately Bayes-optimal CL meta RL algorithm. We demonstrated results competitive with previous approaches on several challenging state-input domains. Furthermore, by using contrastive learning we were able to scale meta RL to image-based domains; we displayed results on par with RL^2, which is also able to scale to image inputs. Finally, we showed that our method is naturally amenable to domain randomization, which may be important for applications such as robotics.
§ ACKNOWLEDGEMENTS
We thank Tom Jurgenson, Ev Zisselman, Orr Krupnik and Gal Avineri for useful discussions and feedback, and Luisa Zintgraf for invaluable help with reproducing the graphs from the VariBAD paper. This work received funding from the European Union (ERC, Bayes-RL, Project Number 101041250). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Research Council Executive Agency. Neither the European Union nor the granting authority can be held responsible for them.
§ THEOREM PROOFS
Theorem <ref> (restated). Let Assumption <ref> hold. Let g^*,g^*_AR,f^* jointly minimize ℒ_M(g,g_AR,f). Then the context latent representation c_t=g^*_AR(z_≤ t) satisfies conditions P1, P2 and is therefore an information state.
We begin our proof by presenting the causal model for the variant of CPC used by ContraBAR, shown in <ref>.
From the causal model, we can infer that
P(c_t+1|c_t,s_t+1,r_t,a_t) =
P(c_t+1|c_t,s_t+1,r_t,a_t, h_t)
and P(s_t+1,r_t|h_t, a_t) = P(s_t+1,r_t|h_t, a_t, c_t). We shall also assume that c_t is a deterministic function of h_t, and therefore P(c_t+1|c_t,s_t+1,r_t,a_t, h_t) = P(c_t+1|s_t+1,r_t,a_t, h_t), and from the above, we have P(c_t+1|s_t+1,r_t, a_t, h_t) = P(c_t+1|s_t+1,r_t,a_t, c_t).
We now prove a mutual information bound similar to that of <cit.>, we show that by optimizing the meta RL InfoNCE loss defined in <ref> we maximize the mutual information between c_t and s_t+1,r_t given a_t.
We begin with a lemma similar to that of Section 2.3 in <cit.>:
Let c_t be a function of h_t, i.e., c_t=σ_t(h_t). We say that (s_t+1,r_t,c_t,a_t) is a possible sufficient statistic transition if (h_t, a_t, r_t, s_t+1) is a possible history as in Definition <ref> and c_t=σ_t(h_t).
Let Assumption <ref> and the loss in <ref> be jointly minimized by f, g, g_AR, then for any possible sufficient statistic transition s_t+1,r_t,c_t,a_t as in Definition <ref>, where c_t=g_AR(h_t), we have that
f(s_t+1,r_t,c_t,a_t) ∝P(s_t+1,r_t|c_t,a_t)/P(s_t+1,r_t|a_t)
.
The loss in Eq. <ref> is the categorical cross-entropy of classifying the positive example correctly, with f/∑_B f being the prediction of the model. We denote the j-th example in the batch B as s_j,r_j, where the subscript does not refer to time here. As in <cit.>, the optimal probability for this loss is P(d=i|B,c_t,a_t) (with [d=i] indicating the i-th example in B is the positive example) and can be derived as follows:
P(d=i|B,c_t,a_t) = ( P(s_i,r_i|c_t,a_t) Π_l≠i P(s_l,r_l|a_t) ) / ( ∑_j=1^M P(s_j,r_j|c_t,a_t) Π_l≠j P(s_l,r_l|a_t) )
= ( P(s_i,r_i|c_t,a_t) / P(s_i,r_i|a_t) ) / ( ∑_j=1^M P(s_j,r_j|c_t,a_t) / P(s_j,r_j|a_t) ).
Eq. <ref> means that for any s_t+1,r_t,c_t,a_t that are part of a batch B in the data, we have that f(s_t+1,r_t,c_t,a_t) ∝P(s_t+1,r_t|c_t,a_t)/P(s_t+1,r_t|a_t). From Assumption <ref>, for any sufficient statistic transition tuple s_t+1,r_t,c_t,a_t there exists a batch it is a part of.
Let Assumption <ref> hold, and let the loss in <ref> be jointly minimized by f,g,g_AR. Then
I(s_t+1,r_t;c_t|a_t) ≥log(M-1) - ℒ_opt.
Given the optimal value shown in Lemma <ref> for f(s_t+1, r_t, c_t, a_t), by inserting back into the loss we get:
ℒ_opt = -𝔼log[ (P(s_t+1,r_t|c_t,a_t)/P(s_t+1,r_t|a_t)) / (P(s_t+1,r_t|c_t,a_t)/P(s_t+1,r_t|a_t) + ∑_(s',r') ∈{o_j^-}_j=1^M-1 P(s',r'|c_t,a_t)/P(s',r'|a_t)) ]
= 𝔼log[ 1 + P(s_t+1,r_t|a_t)/P(s_t+1,r_t|c_t,a_t) ·∑_(s',r') ∈{o_j^-}_j=1^M-1 P(s',r'|c_t,a_t)/P(s',r'|a_t) ]
≈𝔼log[ 1 + P(s_t+1,r_t|a_t)/P(s_t+1,r_t|c_t,a_t) · (M-1) ·𝔼_𝒟(s',r'|a_t) P(s',r'|c_t,a_t)/P(s',r'|a_t) ]
= 𝔼log[ 1 + P(s_t+1,r_t|a_t)/P(s_t+1,r_t|c_t,a_t) · (M-1) ]
≥𝔼log[ P(s_t+1,r_t|a_t)/P(s_t+1,r_t|c_t,a_t) · (M-1) ]
= -I(s_t+1,r_t;c_t|a_t) + log(M-1).
We therefore get that
I(s_t+1,r_t;c_t|a_t)≥log(M-1) - ℒ_opt.
We conclude that the objective maximizes the mutual information between c_t and s_t+1,r_t given a_t.
Let Assumption <ref> hold, and let the loss in <ref> be jointly minimized by f,g,g_AR. Then I(c_t; s_t+1,r_t|a_t) = I(h_t; s_t+1,r_t|a_t), where I(·; ·) denotes mutual information.
Since s_t+1,r_t depend only on h_t (conditioned on a_t), and since c_t is a deterministic function of h_t, I(s_t+1,r_t;c_t|a_t) cannot be greater than I(s_t+1,r_t;h_t|a_t). From Lemma <ref> , we therefore have that
I(c_t; s_t+1,r_t|a_t) = I(h_t; s_t+1,r_t|a_t)
.
Note that Corollary <ref> states that given the causal model above, c_t is maximally informative about s_t+1,r_t (conditioned on a_t).
We use this result to prove a short lemma that will help us show that c_t is an information state.
Let the assumptions of Corollary <ref> hold, then for every a, P(s_t+1,r_t|h_t,a_t) = P(s_t+1,r_t|c_t,a_t).
We start with a result similar to the data processing inequality.
Consider I(s_t+1,r_t;h_t, c_t|a_t). We have that
I(s_t+1,r_t;h_t, c_t|a_t) =
I(s_t+1,r_t;c_t| h_t,a_t)
+ I(s_t+1,r_t ;h_t|a_t),
and on the other hand,
I(s_t+1,r_t;h_t, c_t|a_t) =
I(s_t+1,r_t;h_t| c_t,a_t) + I(s_t+1,r_t;c_t|a_t).
From the causal graph above, we have that I(s_t+1,r_t;c_t| h_t,a_t) = 0. Therefore, from Eq. (<ref>) and (<ref>) we have
I(s_t+1,r_t;h_t|a_t) = I(s_t+1,r_t;c_t|a_t) + I(s_t+1,r_t;h_t| c_t,a_t)
≥ I(s_t+1,r_t;c_t|a_t)
with equality only if I(s_t+1,r_t;h_t| c_t,a_t)=0, since the mutual information is non-negative. From Corollary <ref>, we therefore must have I(s_t+1,r_t;h_t| c_t,a_t)=0. This implies that s_t+1,r_t and h_t are independent conditioned on c_t,a_t <cit.>, and therefore
P(s_t+1,r_t|h_t,a_t) = P(s_t+1,r_t|c_t,a_t)
.
Let Assumption <ref> hold and let the loss in <ref> be jointly minimized by f,g,g_AR. Then c_t satisfies P1, i.e., 𝔼[r_t|h_t,a_t] = 𝔼[r_t|c_t,a_t].
𝔼[r_t|h_t,a_t] = ∫ r_t ∫ P(s_t+1, r_t|h_t,a_t) ds_t+1 dr_t
= ∫ r_t ∫ P(s_t+1,r_t|c_t,a_t) ds_t+1 dr_t
= ∫ r_t P(r_t | c_t, a_t) dr_t
= 𝔼[r_t|c_t, a_t].
Let Assumption <ref> hold and let the loss in <ref> be jointly minimized by f,g,g_AR. Then c_t satisfies P2, i.e., P(c_t+1|h_t) = P(c_t+1|c_t).
P(c_t+1|h_t,a_t)
=∫∫ P(s_t+1,r_t|h_t,a_t)P(c_t+1|h_t, s_t+1,r_t,a_t)ds_t+1 dr_t
= ∫∫ P(s_t+1,r_t|c_t,a_t)P(c_t+1|h_t, c_t, s_t+1,r_t,a_t)ds_t+1dr_t
= ∫∫ P(s_t+1,r_t|c_t,a_t)P(c_t+1|c_t, s_t+1,r_t,a_t)ds_t+1dr_t
= P(c_t+1|c_t,a_t).
where the second equality is due to Lemma <ref> and the penultimate equality is due to c_t+1 being a deterministic function of c_t, s_t+1, r_t and a_t.
We now provide the proofs for the setting described in Section <ref>, where there may be errors in the CPC learning, and the data does not necessarily satisfy Assumption <ref>.
We recapitulate that we consider an iterative policy improvement algorithm with a similarity constraint on consecutive policies, similar to the PPO algorithm we use in practice <cit.>. We shall bound the suboptimality of policy improvement, when data for training CPC is collected using the previous policy, denoted π_k. We will show optimal policy bounds when the information state is approximate, similar in spirit to <cit.>, but with additional technicalities. Under the setting above, we will bound the suboptimality in policy improvement in terms of an error in CPC training, which we denote ϵ.
In light of the bound from <ref>, we assume the following:
<ref>
There exists an ϵ such that for every t≤ T,
I(s_t+1,r_t;c_t|a_t) ≥ I(s_t+1,r_t;h_t|a_t) - ϵ, where the histories are distributed according to policy π_k.
We now define P_π(h_t) as the probability of seeing a history under a policy π. For the sake of simplicity, for the subsequent section we will refer to P_π(h_t) as P(h_t). Furthermore, when the information state is approximate, we denote the information state generator σ̂_t.
We begin with the following bound.
Let Assumption <ref> hold, then
𝔼_h_t ∼ P(h_t) [D_KL(P(s_t+1,r_t|h_t,a_t) || P(s_t+1,r_t|σ̂_t(h_t),a_t) ]≤ϵ
Let Assumption <ref> hold, then I(s_t+1,r_t;h_t|c_t,a_t) ≤ϵ
We start with a result similar to the data processing inequality.
We have that
I(s_t+1,r_t;h_t,c_t|a_t)=
I(s_t+1,r_t;c_t|h_t,a_t) + I(s_t+1,r_t;h_t|a_t)
and,
I(s_t+1,r_t;h_t,c_t|a_t)=
I(s_t+1,r_t;h_t|c_t,a_t) + I(s_t+1,r_t;c_t|a_t)
From the causal graph we have that I(s_t+1,r_t;c_t|h_t,a_t)=0, yielding
I(s_t+1,r_t;h_t|a_t) = I(s_t+1,r_t;h_t|c_t,a_t) + I(s_t+1,r_t;c_t|a_t) ⇒
I(s_t+1,r_t;h_t|a_t) - I(s_t+1,r_t;c_t|a_t) = I(s_t+1,r_t;h_t|c_t,a_t)
Combined with <ref> we get that I(s_t+1,r_t;h_t|c_t,a_t) ≤ϵ
We note that from here on out everything is conditioned on a_t, and omit it to avoid overly cumbersome notation.
For ease of notation we define:
z=s_t+1,r_t.
We note that given a specific h_t, we have:
D_KL (P_z|h_t || P_z|σ̂_t(h_t) )=∫_z P(z|h_t) ·log ( P(z|h_t)/P(z|c_t) )
I(z;h_t | c_t) = 𝔼_h_t ∼ P(h_t) [D_KL (P_z|h_t || P_z|σ̂_t(h_t) ) ]
I(z;h_t|c_t) =𝔼_P_σ_t(h_t)=c_t [D_KL ( P_z,h_t|c_t || P_z|c_t· P_h_t | c_t ) ]
=𝔼_P_σ_t(h_t)=c_t [∫_h_t∫_zP(z,h_t|c_t) log ( P(z,h_t|c_t)/P(z|c_t) · P(h_t | c_t) ) ]
=𝔼_P_σ_t(h_t)=c_t [∫_h_t∫_zP(z,h_t|c_t) log ( P(z,h_t,c_t)· P(c_t)/P(z,c_t) · P(h_t, c_t) ) ]
=𝔼_P_σ_t(h_t)=c_t [∫_h_t∫_zP(z,h_t,c_t)/P(c_t) log ( P(z|h_t)/P(z|c_t) ) ]
= ∫_c_t∫_h_t∫_zP(z,h_t,c_t) log ( P(z|h_t)/P(z|c_t) )
= ∫_c_t∫_h_t∫_zP(z|h_t)P(h_t,c_t) log ( P(z|h_t)/P(z|c_t) )
= ∫_c_t∫_h_t P(h_t,c_t)∫_zP(z|h_t) log ( P(z|h_t)/P(z|c_t) )
= ∫_h_t P(h_t)∫_c_tδ_c_t=σ_t(h_t)∫_zP(z|h_t) log ( P(z|h_t)/P(z|c_t) )
=𝔼_h_t ∼ P(h_t) [D_KL(P(s_t+1,r_t|h_t) || P(s_t+1,r_t|σ̂_t(h_t)) ]
We now complete the proof. Combining Proposition <ref> with Proposition <ref> we get that
𝔼_h_t ∼ P(h_t) [D_KL (P_s_t+1,r_t|h_t || P_s_t+1,r_t|σ̂_t(h_t) ) ]≤ϵ
as required.
Let π_k(σ̂_t(h_t)) denote the policy at iteration k, and note that it is defined on the information state. At iteration k+1, we first collect data using π_k. We denote P_π_k(h_t) the probability of observing a history in this data collection process. We then use CPC to learn an approximate information state. Let D(h_t) = D_KL (P_s_t+1,r_t|h_t,a_t || P_s_t+1,r_t|σ̂_t(h_t),a_t ).
Let Assumption <ref> hold, then
∑_h_t P_π_k(h_t) D(h_t) ≤ϵ.
Assumption <ref> holds, therefore the result is an immediate corollary from <ref> for every t∈ 0,1,…,T-1.
For some distance measure D, let Π_β = {π : D(π(h_t), π_k(σ̂_t(h_t))) ≤β ∀ h_t } denote the set of policies that are β-similar to π_k.
We next define the optimal next policy π^*
π^* ∈_π∈Π_β𝔼^π[ ∑_t=0^T-1 r(s_t,a_t) ].
Note that the value of this policy satisfies the following Bellman optimality equations:
Q_t(h_t, a_t) = r(h_t, a_t) + 𝔼[ V_t+1(h_t+1)]
V_t(h_t) = max_π: D(π(h_t), π_k(h_t)) ≤β∑_aπ(a) Q_t(h_t, a),
for t≤ T, and V_T(h_T) = 0.
We now present our main result, where we consider an iterative policy improvement scheme based on the approximate information state of ContraBAR and provide policy improvement bounds.
<ref>
Let Assumption <ref> hold for some representation c_t. Consider the distance function between two distributions D(P_1(x),P_2(x)) = max_x | P_1(x)/P_2(x) |.
We let r̂(c_t,a_t)=𝔼[r_t|c_t,a_t] and P̂(c'|c_t,a_t)=𝔼[1(c_t+1=c')|c_t,a_t] denote an approximate reward and transition kernel, respectively. Define the value functions
Q̂_t(c_t, a_t) = r̂(c_t, a_t) + ∑_c_t+1P̂(c_t+1|c_t,a_t) V̂_t+1(c_t+1)
V̂_t(c_t) = max_π: D(π(c_t), π_k(c_t)) ≤β∑_aπ(a) Q̂_t(c_t, a),
<ref>
for t≤ T, and V̂_T(c_T) = 0, and
the approximate optimal policy
π̂(c_t)∈_π: D(π, π_k(c_t)) ≤β∑_aπ(a) Q̂_t(c_t, a).
<ref>
Let the optimal policy π^*(h_t) be defined similarly, but with h_t replacing c_t in (<ref>) and (<ref>).
Then we have that
𝔼^π^*[ ∑_t=0^T-1 r(s_t,a_t) ] - 𝔼^π̂[ ∑_t=0^T-1 r(s_t,a_t) ] ≤ϵ^1/3 R_max T^2 (√(2) + 4β^T).
Since Assumption <ref> holds, Proposition <ref> does as well.
From the Markov inequality, we have P_π_k(D(h_t) ≥ n ϵ) ≤ϵ/n ϵ = 1/n.
We now define the “Good Set” H_G = { h_t : D(h_t) < n ϵ} and the “Bad Set” H_B = { h_t : D(h_t) ≥ n ϵ}.
Next, we define an auxiliary policy π̃ by π̃(h_t) = π^*(h_t) if h_t ∈ H_G, and worst-case behavior if h_t ∈ H_B.
We will assume that after observing h_t ∈ H_B, the policy performs as badly as possible for the rest of the episode.
Next, we bound the performance of π̃.
We have that 𝔼^π^*[ ∑_t=0^T-1 r(s_t,a_t) ] - 𝔼^π̃[ ∑_t=0^T-1 r(s_t,a_t) ] ≤ 2T^2 R_maxβ^T/n .
We will denote by r_t(h_t) the reward at the last state-action pair. That is, for h_t = s_0,a_0,r_0,…,s_t-1,a_t-1,r_t-1,s_t we set r_t(h_t) = r_t-1. We will denote R(h_t) the sum of rewards, that is, R(h_t) = ∑_t'=0^t-1 r_t'.
We also denote by P_π(h_t) the probability of observing history h_t under policy π.
Note that by definition ∑_t=0^T-1∑_h_t P_π(h_t) = 1. Also, note that by the definition of the set Π_β, for any two policies π_1,π_2 ∈Π_β we have P_π_1(h_t)/P_π_2(h_t) ≤β^t.
We now claim that
𝔼^π̃[ ∑_t=0^T-1 r_t ] ≥𝔼^π^*[ ∑_t=0^T-1 r_t ] - 2T^2 R_maxβ^T/n.
We first estimate the probability that policy π̃ encounters a history in H_B. Consider some t∈ 0,…,T-1. We have that under P_π_k, with probability at most 1/n, h_t ∈ H_B. Under P_π̃, with probability at most β^t/n, h_t ∈ H_B. From the union bound, with probability at most Tβ^T/n the policy visits at least one history in H_B.
Let H̅_B denote the set of T-length histories that visit a history in H_B, and let H̅_G be its complement set.
Now, note that
𝔼^π̃[ ∑_t=0^T-1 r_t ] = ∑_h_T P_π̃(h_T) R(h_T)
= ∑_h_T∈H̅_G P_π̃(h_T) R(h_T) + ∑_h_T∈H̅_B P_π̃(h_T) R(h_T)
= ∑_h_T∈H̅_G P_π^*(h_T) R(h_T) + ∑_h_T∈H̅_B P_π̃(h_T) R(h_T)
≥∑_h_T∈H̅_G P_π^*(h_T) R(h_T) + (Tβ^T/n) T (-R_max)
= ∑_h_T P_π^*(h_T) R(h_T) - ∑_h_T∈H̅_B P_π^*(h_T) R(h_T) + (Tβ^T/n) T (-R_max)
≥𝔼^π^*[ ∑_t=0^T-1 r_t ] -2T^2 R_maxβ^T/n
The third equality is from the definition of π̃, which coincides with π^* on histories in H̅_G. The fourth inequality relies on the reward function being bounded, i.e., R(h_T) ≥ T(-R_max); this, together with the fact that ∑_h_T ∈H̅_B P_π̃(h_T) ≤ Tβ^T/n, gives the inequality. The last inequality follows from the definition of π̃, wherein the probability of visiting at least one history in H_B is the same for π^* and π̃.
Next, we note that using Pinsker's inequality, we have
d_TV(P_s_t+1,r_t|h_t,a_t,P_s_t+1,r_t|c_t,a_t) ≤√(2 d_KL(P_s_t+1,r_t|h_t,a_t,P_s_t+1,r_t|c_t,a_t)), and that
|𝔼[r_t|h_t, a_t] - 𝔼[r_t|c_t, a_t]|
≤ R_max d_TV(P_s_t+1,r_t|h_t,a_t,P_s_t+1,r_t|c_t,a_t)
|𝔼[V_t+1|h_t, a_t] - 𝔼[V_t+1|c_t, a_t]|
≤ R_max(T-t) d_TV(P_s_t+1,r_t|h_t,a_t,P_s_t+1,r_t|c_t,a_t)
We next prove the following result.
We have that
Q̂_t(σ̂_t(h_t), a) ≥ Q^π̃(h_t,a) -α_t,
V̂_t(σ̂_t(h_t)) ≥ V^π̃(h_t) -α_t,
where α_t satisfies the following recursion: α_T = 0, and α_t = √(2 n ϵ)R_max (T-t+1) + α_t+1.
We prove by backward induction. The argument holds for T by definition. Assume that Equation (<ref>) holds at time t+1, and consider time t. If h_t ∈ H_B, then by definition Q̂_t(σ̂_t(h_t), a) ≥ Q^π̃(h_t,a), since π̃ will take the worst possible actions after observing h_t. Otherwise, h_t ∈ H_G and we have
Q^π̃(h_t,a) - Q̂_t(σ̂_t(h_t), a)
= 𝔼[r_t|h_t, a] + 𝔼[V^π̃_t+1(h_t+1)|h_t, a] - r̂(σ̂_t(h_t), a) - ∑_c_t+1P̂(c_t+1|σ̂_t(h_t),a)V̂_t+1(c_t+1)
= 𝔼[r_t|h_t, a] - 𝔼[r_t|c_t,a_t]
+ 𝔼[V^π̃_t+1(h_t+1)|h_t, a] - 𝔼[V̂_t+1(σ̂_t+1(h_t+1))|h_t, a]
+ 𝔼[V̂_t+1(σ̂_t+1(h_t+1))|h_t, a] - ∑_c_t+1P̂(c_t+1|σ̂_t(h_t),a)V̂_t+1(c_t+1)
≤ √(2 n ϵ)R_max + α_t+1 + √(2 n ϵ)R_max(T-t).
We note that for h_t ∈ H_G, D(h_t) ≤ nϵ, yielding the d_TV bounds.
For the second part, If h_t ∈ H_B, then by definition V̂_t(σ̂_t(h_t)) ≥ V^π̃(h_t). Otherwise, h_t ∈ H_G and we have
V^π̃_t(h_t) - V̂_t(σ̂_t(h_t)) = max_π: D(π(h_t), π_k(h_t)) ≤β∑_aπ(a) Q^π̃_t(h_t, a) - max_π: D(π(h_t), π_k(h_t)) ≤β∑_aπ(a) Q̂_t(σ̂_t(h_t), a)
≤max_π: D(π(h_t), π_k(h_t)) ≤β∑_aπ(a) (Q̂_t(σ̂_t(h_t), a)+ α_t) - max_π: D(π(h_t), π_k(h_t)) ≤β∑_aπ(a) Q̂_t(σ̂_t(h_t), a)
=α_t.
We next define another auxiliary policy π̃̂̃ by π̃̂̃(h_t) = π̂(h_t) if h_t ∈ H_G, and optimal behavior if h_t ∈ H_B.
We will assume that after observing h_t ∈ H_B, the policy performs optimally for the rest of the episode.
Therefore, V^π̃̂̃_t(h_t ∈ H_B) = V_t(h_t).
We have the following results, analogous to Propositions <ref> and <ref>.
We have that 𝔼^π̃̂̃[ ∑_t=0^T-1 r(s_t,a_t) ] - 𝔼^π̂[ ∑_t=0^T-1 r(s_t,a_t) ] ≤ 2T^2 R_maxβ^T/n .
Analogous to the proof of Proposition <ref>.
We have that
Q̂_t(σ̂_t(h_t), a) ≤ Q^π̃̂̃(h_t,a) + α_t,
V̂_t(σ̂_t(h_t)) ≤ V^π̃̂̃(h_t) +α_t,
where α_t satisfies the following recursion: α_T = 0, and α_t = √(2 n ϵ)R_max (T-t+1) + α_t+1.
The proof is similar to that of Proposition <ref>. The argument holds for T by definition. Assume that Equation (<ref>) holds at time t+1, and consider time t. If h_t ∈ H_B, then by definition Q̂_t(σ̂_t(h_t), a) ≤ Q^π̃̂̃(h_t,a), since π̃̂̃ will take the best possible actions after observing h_t. Otherwise, h_t ∈ H_G and we have
Q̂_t(σ̂_t(h_t), a) - Q^π̃̂̃(h_t,a)
= r̂(σ̂_t(h_t), a) + ∑_c_t+1P̂(c_t+1|σ̂_t(h_t),a)V̂_t+1(c_t+1) - 𝔼[r_t|h_t, a] - 𝔼[V^π̃̂̃_t+1(h_t+1)|h_t, a]
= 𝔼[r_t|σ̂_t(h_t),a_t] - 𝔼[r_t|h_t, a]
+ 𝔼[V̂_t+1(σ̂_t+1(h_t+1))|h_t, a] - 𝔼[V^π̃̂̃_t+1(h_t+1)|h_t, a]
+ ∑_c_t+1P̂(c_t+1|σ̂_t(h_t),a)V̂_t+1(c_t+1) - 𝔼[V̂_t+1(σ̂_t+1(h_t+1))|h_t, a]
≤ √(2 n ϵ)R_max + α_t+1 + √(2 n ϵ)R_max(T-t).
For the second part, If h_t ∈ H_B, then by definition V̂_t(σ̂_t(h_t)) ≤ V^π̃̂̃(h_t). Otherwise, h_t ∈ H_G and we have
V̂_t(σ̂_t(h_t)) - V^π̃̂̃_t(h_t) = max_π: D(π(h_t), π_k(h_t)) ≤β∑_aπ(a) Q̂_t(σ̂_t(h_t), a) - max_π: D(π(h_t), π_k(h_t)) ≤β∑_aπ(a) Q^π̃̂̃_t(h_t, a)
≤max_π: D(π(h_t), π_k(h_t)) ≤β∑_aπ(a) (Q^π̃̂̃_t(h_t, a) + α_t) - max_π: D(π(h_t), π_k(h_t)) ≤β∑_aπ(a) Q^π̃̂̃_t(h_t, a)
=α_t.
We have
𝔼^π^*[ ∑_t=0^T-1 r(s_t,a_t) ] - 𝔼^π̂[ ∑_t=0^T-1 r(s_t,a_t) ]
≤𝔼^π̃[ ∑_t=0^T-1 r(s_t,a_t) ] - 𝔼^π̂[ ∑_t=0^T-1 r(s_t,a_t) ] + 2T^2 R_maxβ^T/n
= ∑_h_0 P(h_0) ( V_0^π̃(h_0) - V_0^π̂(h_0)) + 2T^2 R_maxβ^T/n
≤∑_h_0 P(h_0) ( V_0^π̃(h_0) - V̂_0(σ̂_0(h_0)) + V̂_0(σ̂_0(h_0)) - V_0^π̂(h_0)) + 2T^2 R_maxβ^T/n
≤∑_h_0 P(h_0) ( V_0^π̃(h_0) - V̂_0(σ̂_0(h_0)) + V̂_0(σ̂_0(h_0)) - V_0^π̃̂̃(h_0)) + 4T^2 R_maxβ^T/n
≤ 2α_0 + 4T^2 R_maxβ^T/n
We note that the first inequality follows from Proposition <ref>. The second equality stems from the fact that P(h_0)=P(s_0), which is not affected by the choice of policy. The fourth transition follows from the addition and subtraction of V_0^π̃̂̃(h_0) and the use of Proposition <ref>. The final inequality follows from Propositions <ref> and <ref>.
Let us bound α_0. By the recursion α_t = √(2 n ϵ)R_max (T-t+1) + α_t+1 we have that α_0 = T^2/2√(2 n ϵ)R_max. Setting n = ϵ^-1/3 we obtain the desired result:
𝔼^π^*[ ∑_t=0^T-1 r(s_t,a_t) ] - 𝔼^π̂[ ∑_t=0^T-1 r(s_t,a_t) ] ≤ T^2√(2)ϵ^1/3 R_max + 4T^2 R_maxβ^T ϵ^1/3
= ϵ^1/3 R_max T^2 (√(2) + 4β^T).
§ ENVIRONMENTS
Reacher Image:
In this environment, a two-link planar robot needs to reach an unknown goal as in <cit.>, except that the goal is randomly chosen along a horizontal segment of length 0.48. For each task, the agent receives a reward of +1 if it is within a small radius r=0.05 of the goal, and 0 otherwise:
r_t = 1 if ‖ x_t - x_goal‖_2 ≤ 0.05, and 0 otherwise,
where x_t is the location of the robot’s end effector. The agent observes single-channel images of size 64 × 64 of the environment. The horizon is set to 50 and we aggregate k = 2 consecutive episodes to form a trajectory of length 100.
Panda Reacher:
A Franka Panda robot tasked with placing the end effector at a goal on a 2d semi-circle of radius 0.15 with fixed z=0.15/2 in 3d-space. The task is adapted from the Reacher task in Panda Gym <cit.>, with the goal occluded. For each task, the agent receives a reward of +1 if it is within a
small radius r = 0.05 of the goal, and 0 otherwise.
r_t = 1 if ‖ x_t - x_goal‖_2 ≤ 0.05, and 0 otherwise,
where x_t is the current location of the end effector. The action space is 3-dimensional and bounded [-1, 1]^3. The agent observes a 3-channel image of size 84 × 84 of the environment. We set the horizon to 50 and aggregate k=3 consecutive episodes to form a trajectory of length 150.
Panda Wind:
This environment is identical to Panda Reacher, except that the goal is fixed and for each task the agent experiences a different wind which shifts the transition function, such that for an MDP ℳ the transition function becomes
s_t+1 = s_t + a_t + w_ℳ
where w_ℳ is task specific and drawn randomly from a circle of radius 0.1.
To get to the goal and stay there, the agent must learn to quickly adapt in a way that cancels the effect of the wind.
§ IMPLEMENTATION DETAILS
In this section we outline our training process and implementation details; exact hyperparameters can be found in our code at <https://github.com/ec2604/ContraBAR>.
The CPC component termed g_AR consists of a recurrent encoder, which at time step t takes as input the tuple (a_t, r_t+1, s_t+1).
The state, reward and action are passed each through a different fc layer (or a
cnn feature-extractor for the states in image-based inputs). Our CPC projection head takes in (c_t^a, z_t+k) and passes it through one hidden layer of half the input size, with an ELU activation function.
§ ARCHITECTURE DETAILS
In this section we detail practical considerations regarding the CPC architecture.
In Section <ref> we described situations where c_t does not need to encode belief regarding the task in order to distinguish between positive and negative observations. This is detrimental to learning a sound sufficient statistic as we would like c_t to encode information regarding the reward and transition functions, as they are what set apart each task. In order to prevent this “shortcut” from being used, we can perform hard negative mining. We do this by using negative observations that cannot be distinguished from the positive observation without belief regarding the transition and reward functions. In the case where only the reward functions vary, we can do this by taking the state and action of the positive observation and sampling a new reward function. We then calculate the respective reward and embed it as a negative observation alongside the original state and action. By having the positive and negative observations share the same state and action, we ensure that c_t must be informative regarding the reward function in order to distinguish between positive and negative observations. We note that in this modified setup we use s_t as the initial hidden state for the action-gru and include the original c_t as input to the CPC projection head. This ensures that the gradient of the loss with respect to the action-gru does not affect c_t, which should encode information regarding the reward function. For the case where the environments only vary in reward functions, we propose a simpler solution which is to omit the action-gru, as the future actions except for a_t+k-1 do not affect r_t+k. We can simply use (c_t,z_t+k) as input to the CPC projection head – we note that in this case z_t+k is an embedding of the reward, state and action. We found in practice that this simplification also worked well for the environments we used where the transitions varied.
The modified architecture where the action-gru is omitted can be seen in Figure <ref>. In Figure <ref> we demonstrate on the Ant-Goal environment that omitting the action-gru and reward-relabeling with the action-gru yield similar results. Finally, we note that hard-negative mining can be done for varying transitions by sampling a random transition from the prior and simulating the transition to some s_t+k given s_t, a_t,…, a_t+k.
§ IMAGE-BASED INPUTS ARE COMPUTATIONALLY RESTRICTIVE FOR VARIBAD
To understand the computational restriction in VariBAD <cit.>, we look to the formulation of the VAE objective. For every timestep t, the past trajectory τ_:t is encoded to infer the posterior q(m|τ_:t), and used by the decoder to reconstruct the entire trajectory including the future. In our analysis we restrict ourselves to the memory required for reconstruction of the reward trajectory, in an image-based domain, under the following assumptions:
* images of dimension d× d × 3 are embedded to a representation of size 32 via 3 convolutions with 32 channels each and kernels of size, with strides 2,2,1 respectively.
* actions are of size 2 and embedded with a linear layer of size 16
* The trajectory is of length 120, which is average for the domains in meta RL
We draw attention to the fact that the reward decoder in VariBAD receives s_t, s_t+1 as input, requiring us to take into consideration the memory required for embedding the image trajectory. On top of this, we also consider three times the size of the parameters of the image encoder (parameters, gradients and gradient moments). We present the memory consumption as a function of the image dimensions in <ref>.
We note that in practice we often wish to decode multiple trajectories at once, and we also need to take into account the encoder portion of the model as well as its gradients.
| http://arxiv.org/abs/2306.06668v1 | 20230611123843 | A family of interpolation inequalities involving products of low-order derivatives | [ "Frédéric Marbach" ] | math.FA | [ "math.FA", "26D10, 46E35" ] |
A family of interpolation inequalities involving products of low-order derivatives
Frédéric Marbach
July 31, 2023
==================================================================================
Gagliardo–Nirenberg interpolation inequalities relate Lebesgue norms of iterated derivatives of a function.
We present a generalization of these inequalities in which the low-order term of the right-hand side is replaced by a Lebesgue norm of a pointwise product of derivatives of the function.
§ INTRODUCTION
The symbol ≲ denotes inequalities holding up to a constant, which can depend on the parameters of the statement, but not on the unknown function.
By convenience, we restrict the statements to smooth functions, although they persist in appropriate Sobolev spaces, by usual regularization arguments.
§.§ The classical Gagliardo–Nirenberg inequality
In their full generality, the following interpolation inequalities date back to Gagliardo's and Nirenberg's respective short communications at the 1958 International Congress of Mathematicians in Edinburgh, later published in <cit.>.
We refer to <cit.> for a recent proof with an historical perspective.
Let p,q,r ∈ [1,∞], 0 ≤ k ≤ j < m ∈ and θ∈ [θ^*,1] where
θ^*:= j - k/m - k.
Assume that
1/p-j = θ( 1/r - m )
+ (1-θ) (1/q - k).
Then, for u ∈ C^∞_c(;),
D^j u _L^p()≲ D^m u _L^r()^θ D^k u _L^q()^1-θ.
In this 1D setting, estimate (<ref>) had also been derived with optimal constants by Landau <cit.>, Kolmogorov <cit.> and Stein <cit.> for particular cases of the parameters and exponents.
Usually, such inequalities are stated with k = 0.
The less frequent case k > 0 can of course be reduced to the standard case k = 0 by applying the latter to the function D^k u.
We include it in the above statement to highlight the symmetry with our generalization in <ref>.
We restrict the statements in this note to the one-dimensional case.
Mutatis mutandis in (<ref>), <ref> remains valid in ^d, or sufficiently regular bounded domains <cit.>, or even exterior domains <cit.>, up to exceptional cases of the parameters.
In particular, the case p = ∞ is only valid when d = 1 (see <cit.>).
Although historical statements only involved integer orders of the derivatives, generalizations of <ref> to fractional Sobolev spaces <cit.> or Hölder spaces <cit.> are now correctly understood.
Let 0 ≤ k_0 ≤ k and s ∈ [1,∞].
For u ∈ C^∞([0,1];), which is not necessarily compactly supported in (0,1),
D^j u _L^p(0,1)≲ D^m u _L^r(0,1)^θ D^k u _L^q(0,1)^1-θ + D^k_0 u _L^s(0,1).
The supplementary (inhomogeneous, low-order) term is necessary, as one could have D^m u ≡ 0 on (0,1) but D^j u ≠ 0 (for instance u(x) = x^j, whose m-th derivative vanishes since j < m while D^j u = j!); see <ref>.
§.§ Statement of the main result
Motivated by applications to nonlinear control theory (see <ref>), our main result is the following one-dimensional generalization of <ref> in which the low-order term D^k u_L^q() of the right-hand side is replaced by a Lebesgue norm of a pointwise product of low-order derivatives of the function.
Let p,q,r ∈ [1,∞], κ∈^* and
0 ≤ k_1 ≤…≤ k_κ≤ j < m ∈.
Let k̅ := (k_1 + … + k_κ) / κ.
Let θ∈ [θ^*,1], where
θ^* := j - k̅/m - k̅.
Assume that
1/p-j = θ( 1/r - m )
+ (1-θ) (1/q κ - k̅).
Then, for u ∈ C^∞_c(;),
D^j u _L^p()≲ D^m u _L^r()^θ D^k_1 u … D^k_κ u _L^q()^(1-θ)/κ.
Heuristically, everything behaves as if the pointwise product term was replaced by D^k̅ u _L^q κ(), see also <ref>.
In the critical case θ = θ^*, relation (<ref>) is equivalent to
1/p = θ/r + 1-θ/q κ.
Let 0 ≤ k_0 ≤ k_1 and s ∈ [1,∞].
For u ∈ C^∞([0,1];), which is not necessarily compactly supported in (0,1),
D^j u _L^p(0,1)≲ D^m u _L^r(0,1)^θ D^k_1 u … D^k_κ u _L^q(0,1)^(1-θ)/κ + D^k_0 u _L^s(0,1).
Our proof is inspired by Nirenberg's historical one, as rewritten recently in <cit.>.
Compared with the usual case, we encounter two difficulties.
First, the additive version of (<ref>) now involves a compactness argument (see <ref>).
Second, and maybe more importantly, the pointwise product nature of the new term breaks the usual subdivision argument (see <ref>) since, within a small interval where u is a polynomial of low degree, this term could vanish identically.
To circumvent this difficulty, we introduce a notion of “nowhere-polynomial” function (see <ref>) and we prove that any smooth function can be approximated in this class.
An illustration of these difficulties is that pointwise multiplicative inequalities of the form |u'(x)|^2 ≲ |u(x) u”(x)| usually require to subtract from u a local polynomial approximation, and to formulate the estimate using the Hardy–Littlewood maximal functions Mu, Mu' and Mu” instead of the raw functions (see e.g. <cit.>).
§.§ Some examples
As illustrations of <ref> for small values of the parameter κ, and in view of <ref>, we state two particular cases.
Let k ∈^*.
For u ∈ W^2k,∞_0((0,1);),
D^k u _L^6(0,1)^6 ≲ D^2k u _L^∞(0,1)^2 u D^k u _L^2(0,1)^2.
This follows from <ref> with κ = 2, k̅ = k/2, θ = θ^* = k - k̅/2k - k̅ = 1/3, for which 1/6 = θ/∞ + 1-θ/2 · 2 so that (<ref>) holds.
The estimate for non-smooth u follows by standard regularization arguments.
For u ∈ W^3,∞_0((0,1);),
u”_L^12(0,1)^12≲ u”' _L^∞(0,1)^6 u u' u”_L^2(0,1)^2.
This follows from <ref> with κ = 3, k̅ = 1, θ = θ^* = 2-1/3-1 = 1/2, for which 1/12 = θ/∞+ 1-θ/2 · 3 so that (<ref>) holds.
The estimate for non-smooth u follows by standard regularization arguments.
Incidentally, this particular estimate can also be checked from the usual Gagliardo–Nirenberg estimate of <ref>
u”_L^12()≲ u”' _L^∞()^1/2 u' _L^6()^1/2
and the coercivity estimate (<ref>) proved below.
§.§ Some open problems
As mentioned in <ref>, the usual Gagliardo–Nirenberg inequalities admit generalizations in ^d.
It would be natural to investigate such generalizations of <ref>.
A difficulty in this direction might be that one has to determine the appropriate (symmetric?) generalizations of the product D^k_1 u … D^k_κ u with partial derivatives.
As mentioned in <ref>, the usual Gagliardo–Nirenberg inequalities admit generalizations in fractional Sobolev spaces. It would be natural to investigate such generalizations of <ref>, especially since, as noted in <ref>, even for integer values of the parameters, the product D^k_1 u … D^k_κ u already behaves as a fractional Sobolev norm when k̅∉.
Another particularly challenging problem concerns the possibility to relax the assumptions k_i ≤ j of <ref>.
A natural (weaker) assumption would be k̅≤ j.
In particular, one can wonder in which settings the following result holds (corresponding to j = k̅ and θ = θ^* = 0).
Let q ∈ [1,∞], κ∈^* and 0 ≤ k_1 ≤…≤ k_κ∈.
Let k̅ := (k_1 + … + k_κ) / κ.
When is it true that, for u ∈ C^∞_c(;),
D^k̅ u _L^q κ()≲ D^k_1 u … D^k_κ u _L^q()^1/κ,
where the left-hand side should be interpreted as the fractional Ẇ^k̅,qκ() semi-norm of u when k̅ is not an integer.
As noted in <ref>, positive answers to <ref> imply <ref> (up to exceptional cases) thanks to the (fractional) Gagliardo–Nirenberg inequality D^j u _L^p()≲ D^m u _L^r()^θ D^k̅ u _L^qκ()^1-θ (see e.g. <cit.> when k̅∉).
Unfortunately, the proofs of <ref> do rely on the assumptions k_i ≤ j.
In particular, estimate (<ref>) below is false if there exists i such that k_i > j.
Nevertheless, ad hoc arguments entail that (<ref>) holds for some examples, hinting that <ref> might have positive answers.
For u ∈ C^∞_c(;),
u _Ẇ^1/2, 4() ≲ u u' ^1/2_L^2(),
u' _L^4() ≲ u u”_L^2()^1/2,
u' _L^6() ≲ u u' u”_L^2()^1/3.
Estimate (<ref>) can be derived from the remark that (u^2)' = 2 u u'.
Hence
u _Ẇ^1/2, 4()≲ |u| _Ẇ^1/2, 4()≲ u^2 _Ḣ^1()^1/2
= 2 u u' _L^2()^1/2,
where the first estimate with the absolute value is derived in <cit.> and the second estimate in <cit.>.
Estimates (<ref>) and (<ref>) come from straightforward integrations by parts and the Cauchy–Schwarz inequality:
∫_ (u')^4 = - 3 ∫_ u (u')^2 u”≤ 3 ( ∫_ (u')^4 )^1/2( ∫_ (u u”)^2 )^1/2
and
∫_ (u')^6 = - 5 ∫_ u (u')^4 u”≤ 5 ( ∫_ (u')^6 )^1/2( ∫_ (u u' u”)^2 )^1/2,
which entail (<ref>) and (<ref>).
Estimate (<ref>) above is very classical, for example stated as Lemma 1 in <cit.>, which contains many interesting generalizations.
§ PROOFS ON THE REAL LINE
§.§ Sobolev inequalities with localized low-order terms
In this paragraph, we start by proving the natural statement that low-order terms in Sobolev inequalities can be localized in arbitrarily small subdomains.
Let 0 ≤ j < m ∈.
Let ω⊂ (0,1) be a non-empty open interval.
For u ∈ C^∞([0,1];),
D^j u _L^∞(ω)≲ D^m u _L^1(ω) + u _L^1(ω)
Let ω⊂ (0,1) be a non-empty open interval.
For u ∈ C^∞([0,1];),
u _L^∞(0,1)≲ u' _L^1(0,1) + u _L^1(ω).
We write ω = (x_1, x_2) with 0 ≤ x_1 < x_2 ≤ 1.
Let u ∈ C^∞([0,1];).
For any x ∈ [0,1] and x_0 ∈ (x_1,x_2),
u(x) = u(x_0) + ∫_x_0^x u'(y) y.
Hence, for any x ∈ [0,1], averaging over x_0 ∈ (x_1,x_2),
u(x) = 1/x_2-x_1∫_x_1^x_2 u(x_0) x_0
+ 1/x_2-x_1∫_x_1^x_2( ∫_x_0^x u'(y) y ) x_0,
which entails (<ref>).
Let p,q,r ∈ [1,∞].
Let 0 ≤ k ≤ j < m ∈.
Let ω⊂ (0,1) be a non-empty open interval.
For u ∈ C^∞([0,1];),
D^j u _L^p(0,1)≲ D^m u _L^r(0,1) + D^k u _L^q(ω).
By monotony of the Lebesgue spaces on the bounded domain (0,1), it is sufficient to prove the result for p = ∞ and q = r = 1.
By <ref>, D^j u _L^1(ω) = D^j-k D^k u_L^1(ω)≲D^m-k D^k u_L^1(ω) + D^k u_L^1(ω), so it is sufficient to prove the result with k = j.
Hence, up to working with D^j u instead of u, it is sufficient to prove the result with k = j = 0 and m ≥ 1. We will therefore prove
u _L^∞(0,1)≲ D^m u _L^1(0,1) + u _L^1(ω).
For m = 1, this corresponds to <ref>.
Take m > 1.
By <ref>, there exists C > 0 such that, for each i = 0, …, m-1,
D^i u _L^∞(0,1)≤ C ( D^i+1 u _L^1(0,1) + D^i u _L^1(ω)).
Multiplying these inequalities by C^i and summing over i yields
∑_i=0^m-1 C^i D^i u _L^∞(0,1)≤ C ∑_i=0^m-1 C^i ( D^i+1 u _L^1(0,1) + D^i u _L^1(ω)).
Thus, bounding the L^1 norms by L^∞ and cancelling the terms on both sides,
u _L^∞(0,1)≤ C^m D^m u _L^1(0,1) + ∑_i=0^m-1 C^i+1 D^i u _L^1(ω).
By <ref>, for each i = 0, …, m-1, D^i u_L^1(ω)≲u_L^1(ω) + D^m u_L^1(ω), which concludes the proof of (<ref>).
§.§ Sobolev inequality involving a product of derivatives
In this paragraph, we prove the following localized additive version of (<ref>), by induction on the length of the product and a compactness argument.
Let p,q,r ∈ [1,∞], κ∈^* and 0 ≤ k_1 ≤…≤ k_κ≤ j < m ∈.
Let ω⊂ (0,1) be a non-empty open interval.
For u ∈ C^∞([0,1];),
D^j u _L^p(0,1)≲ D^m u _L^r(0,1) + D^k_1u … D^k_κ u _L^q(ω)^1/κ.
Without loss of generality, up to working with D^k_1 u, one can assume that k_1 = 0.
When m = 1, k_1 = … = k_κ = j = 0, so that the statement follows from <ref>.
Hence, one can assume that m ≥ 2.
By monotony of the Lebesgue spaces on bounded domains, it is sufficient to prove the result with p = ∞ and q = r = 1.
By <ref>, D^j u _L^∞(0,1)≤ D^m u _L^1(0,1) + D^j u _L^1(0,1).
Hence, it is sufficient to prove the result with p = 1.
We proceed by induction on κ∈^*.
The case κ = 1 corresponds to <ref>.
Let κ > 1.
Assume by contradiction that the lemma holds for products of up to κ - 1 terms, but not for κ terms.
One could therefore find a sequence u_n ∈ C^∞([0,1];) such that
D^j u_n _L^1(0,1)
>
n ( D^m u_n _L^1(0,1) + u_n D^k_2 u_n … D^k_κ u_n _L^1(ω)^1/κ).
In particular, D^j u_n _L^1(0,1) > 0.
Since (<ref>) is linear in u, up to a rescaling, one can assume that
D^j u_n _L^1(0,1) + u_n _L^1(0,1) = 1.
By (<ref>), this entails that u_n is uniformly bounded in W^m,1(0,1).
Hence, by the Rellich–Kondrachov compact embedding theorem, there exists u̅∈ W^m-1,1(0,1) such that u_n →u̅ strongly in W^m-1,1(0,1).
Since the sequence converges strongly in W^j,1(0,1), the normalization (<ref>) implies
D^j u̅_L^1(0,1) + u̅_L^1(0,1) = 1
which ensures that u̅≠ 0.
By Morrey's inequality, since m-1≥ 1, u̅∈ C^0([0,1]) and u_n →u̅ in C^0([0,1]).
Thus, since u̅≠ 0, there exists a small non-empty open interval ω' ⊂ (0,1) and δ∈ (0,1) such that, for n large enough |u_n| ≥δ on ω'.
Hence,
u_n D^k_2 u_n … D^k_κ u_n _L^1(ω)^1/κ≥δ^1/κ D^k_2 u_n … D^k_κ u_n _L^1(ω')^1/κ.
Moreover, since u_n is uniformly bounded in W^m,1(0,1), the D^k_i u_n are uniformly bounded in L^∞(0,1) by <ref>.
Hence there exists 0 < c ≤ 1 such that
δ^1/κ D^k_2 u_n … D^k_κ u_n _L^1(ω')^1/κ
≥ c D^k_2 u_n … D^k_κ u_n _L^1(ω')^1/κ-1.
Hence, substituting in (<ref>), and applying the induction hypothesis, there exists C > 0 such that,
D^j u_n _L^1(0,1) > n ( D^m u_n _L^1(0,1) + c D^k_2 u_n … D^k_κ u_n _L^1(ω')^1/κ-1)
≥n c/C D^j u_n _L^1(0,1),
which yields a contradiction for n large enough since D^j u_n _L^1(0,1) > 0.
Let p,q,r ∈ [1,∞], κ∈^* and 0 ≤ k_1 ≤…≤ k_κ≤ j < m ∈.
For u ∈ C^∞([0,1];) and I ⊂ (0,1) a non-empty interval of length ℓ,
ℓ^j - 1/p D^j u _L^p(I)≲ℓ^m - 1/r D^m u _L^r(I)
+
ℓ^k̅ - 1/qκ D^k_1u … D^k_κ u _L^q(I)^1/κ,
where k̅ := (k_1 + … + k_κ) / κ.
This is a straightforward consequence of <ref> by a scaling argument.
Indeed, write I = (x_0, x_0 + ℓ) for some x_0 ∈ [0,1).
For u ∈ C^∞([0,1];), let v ∈ C^∞([0,1];) be defined by v(x) := u(x_0 + x ℓ), so that (<ref>) follows from (<ref>) with the same constant.
§.§ Nowhere-polynomial functions
In this paragraph, we introduce a notion of “nowhere-polynomial” function, as well as an approximation result by this subclass of smooth functions.
Our motivation is that we wish to interpret the pointwise product D^k_1 u … D^k_κ u as playing the role of the low-order term in the interpolation inequality and thus avoid that it vanishes on significant portions of the support of u.
Let I be a (closed or open) non-empty interval of .
We say that u ∈ C^∞(I;) is nowhere-polynomial when
μ({ u ≠ 0 }∩( ∪_i ∈^*{ D^i u = 0 }) ) = 0,
where μ denotes the Lebesgue measure.
Let I ⊂ (0,1) be a non-empty open interval with I̅⊂ (0,1).
There exists a nowhere-polynomial ψ∈ C^∞_c((0,1);) such that ψ > 0 on I.
Let χ(t) := e^- 1/t(1-t) for t ∈ (0,1), extended by 0 on .
It is classical that χ∈ C^∞(), supp χ = [0,1], χ > 0 on (0,1) and that, for every i ≥ 1, D^iχ(t) = R_i(t) χ(t), where R_i is a (non-zero) rational function.
In particular, R_i vanishes at most a finite number of times on [0,1].
Thus (0,1) ∩∪_i ∈^*{ D^i χ = 0 } is countable, so of zero Lebesgue measure.
Given I = (a,b) with 0 < a < b < 1, ψ(t) := χ ((t-a)/(b-a)) satisfies the conclusions of the lemma.
Let u ∈ C^∞_c((0,1);).
There exist nowhere-polynomial functions u_n ∈ C^∞_c((0,1);) such that, for every k ∈, u_n → u in C^k([0,1];).
Let u ∈ C^∞_c((0,1);).
Let 0 < a < b < 1 such that { u ≠ 0 }⊂ (a,b).
Let ψ∈ C^∞_c((0,1);) be a nowhere-polynomial function given by <ref> such that ψ > 0 on (a,b).
For ε > 0, set u_ε := u + εψ.
As ε→ 0, for every k ∈, u_ε→ u in C^k([0,1];).
We claim that there exists a sequence ε_n → 0 such that the u_ε_n are nowhere-polynomial.
Otherwise, by contradiction, one could find ε^* > 0 such that, for every ε∈ (0,ε^*), u_ε is not nowhere-polynomial.
Hence
J_ε := { u_ε≠ 0 }∩( ∪_i ∈^*{ D^i u_ε = 0 }) ⊂ (a,b)
satisfies μ(J_ε) > 0.
Let J^i_ε := { D^i u_ε = 0 }∩ (a,b).
Since μ(J_ε) > 0, there exists i_ε∈^* such that μ(J^i_ε_ε) > 0.
Hence, (0,ε^*) = ∪_i ∈^* M_i, where
M_i := {ε∈ (0,ε^*) ; μ(J^i_ε) > 0 }.
Let i ∈^*. Let ε≠ε' ∈ (0,ε^*).
Since J_ε^i ∩ J^i_ε'⊂{ D^i ψ = 0 }∩{ψ > 0 } and ψ is nowhere-polynomial, one has μ(J^i_ε∩ J^i_ε') = 0.
Hence, for every n ∈^*, {ε∈ (0,ε^*) ; μ(J^i_ε) ≥ 1/n } is finite.
Thus M_i is a countable union of finite sets, so is countable.
Hence ∪_i∈^* M_i = (0,ε^*) is also countable, which contradicts the fact that (0,ε^*) is not countable.
Let u ∈ C^∞([0,1];).
There exist nowhere-polynomial functions u_n ∈ C^∞([0,1];) such that, for every k ∈, u_n → u in C^k([0,1];).
Let u̅∈ C^∞_c((-1,2);) be a smooth compactly supported extension of u.
We apply <ref> to a rescaled version of u̅ to obtain a sequence u̅_n ∈ C^∞_c((-1,2);) of nowhere-polynomial functions such that u̅_n →u̅ in C^k([-1,2];) for every k ∈.
Then the sequence of restrictions u_n := (u̅_n)_| [0,1] satisfies the claimed properties.
§.§ Subdivision argument
In this paragraph, we prove that, given a nowhere-polynomial function, we can find a subdivision of its support such that, on each interval, both terms of the right-hand side of (<ref>) are equal.
The proof is inspired by <cit.> and relies on the following version of Besicovitch's covering theorem <cit.>.
Let E be a bounded subset of and r : E → (0,+∞).
For x ∈ E, consider the non-empty open interval I_x := (x-r_x,x+r_x).
There exists a countable (finite or countably infinite) collection of points x_n ∈ E such that E ⊂∪_n I_x_n and ∑_n 1_I_x_n≤ 4 on .
This statement corresponds to the one-dimensional case of <cit.> (see also <cit.>).
Let q,r ∈ [1,∞], κ∈^* and 0 ≤ k_1 ≤…≤ k_κ < m ∈.
Assume that k̅ < m - 1 where k̅ := (k_1 + … + k_κ) / κ.
Let u ∈ C^∞_c((0,1);) be a nowhere-polynomial function.
Then there exists a countable family (I_n)_n of non-empty open intervals I_n ⊂ such that
1 ≤∑_n 1_I_n μ a.e. on { u ≠ 0 },
∑1_I_n≤ 4
on ,
and, for every n, denoting by ℓ_n the length of I_n,
ℓ_n^m - 1/r D^m u _L^r(I_n)
=
ℓ_n^k̅ - 1/qκ D^k_1u … D^k_κ u _L^q(I_n)^1/κ.
Let u ∈ C^∞_c((0,1);) be nowhere-polynomial and
v := D^k_1 u … D^k_κ u (we implicitly consider their respective smooth extensions by 0 outside of (0,1)).
Let
E := { x ∈ (0,1) ; u(x) ≠ 0 and v(x) ≠ 0 }.
For x ∈ E and h > 0, we define
α_x(h) := h^k̅ - 1/qκ v ^1/κ_L^q(x-h,x+h),
β_x(h) := h^m-1/rD^m u _L^r(x-h,x+h).
As h → 0, α_x(h) ∼ |v(x)|^1/κ 2^1/qκ h^k̅ and β_x(h) ≤ h^mD^m u_L^∞(0,1).
Thus, since m > k̅, β_x(h) < α_x(h) for h small enough.
Conversely, for h ≥ 1, α_x(h) = h^k̅ - 1/qκ v ^1/κ_L^q(0,1) and β_x(h) = h^m-1/rD^m u_L^r(0,1).
Since m > k̅ + 1, m - 1/r > k̅ - 1/qκ and thus β_x(h) > α_x(h) for h large enough.
Hence, we can define
r_x := inf{ h > 0 ; α_x(h) ≤β_x(h) }∈ (0,+∞).
In particular, for every x ∈ E, α_x(r_x) = β_x(r_x).
By <ref>, there exists a countable collection of elements x_n ∈ E such that E ⊂∪_n I_n and ∑_n 1_I_n≤ 4 on , where I_n = (x_n - r_x_n, x_n + r_x_n).
These intervals satisfy (<ref>) by the definition of r_x.
Moreover, since 1 ≤∑_n 1_I_n on E, writing
{ u ≠ 0 } = ( { u ≠ 0 }∩{ v = 0 }) ∪ E
and using the fact that u is nowhere-polynomial, we obtain that 1 ≤∑_n 1_I_n almost everywhere on { u ≠ 0 }, which proves (<ref>).
§.§ Proof of the main result
We start with a classical result from measure theory.
Let 1 ≤ p < ∞ and j ∈.
For u ∈ C^∞([0,1];),
D^j u _L^p(0,1)^p = ∫_0^1 |D^j u|^p 1_u ≠ 0.
We write
D^j u _L^p(0,1)^p
= ∫_0^1 |D^j u|^p 1_u ≠ 0
+ ∫_0^1 |D^j u|^p 1_u = 01_D^j u ≠ 0.
Thus, it is sufficient to prove that E := { u = 0 }∩{ D^j u ≠ 0 } is of zero Lebesgue measure.
Let us show that E is a discrete subset of [0,1].
Let x ∈ E.
Then u(x) = 0.
Let 1 ≤ i ≤ j be the smallest integer such that D^i u(x) ≠ 0.
For h small enough u(x+h) = D^i u(x) h^i / i! + O(h^i+1).
In particular, there exists h small enough such that u(x + h) = 0 if and only if h = 0.
Thus x is isolated in E.
Hence E is discrete and μ(E) = 0, which concludes the proof.
We now prove <ref>.
Since estimate (<ref>) is invariant under translation and rescalings, one can assume that u ∈ C^∞_c((0,1);).
We start with the most important case: θ = θ^*.
We postpone the generalization to θ∈ (θ^*,1] to the end of this section.
Proof in the critical case θ = θ^*.
Assume moreover, temporarily, that k̅ < m-1 and p, q, r < ∞.
Let u ∈ C^∞_c((0,1);).
As a first step, assume that u is nowhere-polynomial.
Let v := D^k_1u … D^k_κ u.
Let (I_n)_n be a countable collection of non-empty open intervals such as in <ref>.
First, using <ref> and (<ref>),
D^j u _L^p(0,1)^p = ∫_0^1 |D^j u|^p 1_u ≠ 0≤∑_n ∫_0^1 |D^j u|^p 1_I_n
Second, using <ref> and (<ref>), there exists C > 0 (independent of u) such that, for each n,
D^j u _L^p(I_n)^p
≤ C^p ℓ_n^1-pj(
ℓ_n^m - 1/r D^m u _L^r(I_n)
+
ℓ_n^k̅ - 1/qκ v _L^q(I_n)^1/κ)^p
= C^p 2^p ℓ_n^1-pj(ℓ_n^m - 1/r D^m u _L^r(I_n))^θ p(ℓ_n^k̅ - 1/qκ v _L^q(I_n)^1/κ)^(1-θ)p
= (2C)^p D^m u _L^r(I_n)^θ p v _L^q(I_n)^p(1-θ)/κ
since the parameters are related by (<ref>).
Since θ = θ^*, the relation (<ref>) of <ref> implies that the exponents α = r/θ p and α' = q κ/p(1-θ) satisfy 1/α+1/α' = 1.
Thus, by Hölder's inequality,
∑_n D^j u _L^p(I_n)^p
≤ (2C)^p ∑_n D^m u _L^r(I_n)^θ p v _L^q(I_n)^p(1-θ)/κ
≤ (2C)^p ( ∑_n D^m u _L^r(I_n)^r)^θ p/r( ∑_n v _L^q(I_n)^q )^p(1-θ)/q κ
≤ (2C)^p 4^θ p/r4^p(1-θ)/q κ D^m u _L^r(0,1)^θ pv _L^q(0,1)^p(1-θ)/κ
using (<ref>).
Substituting this estimate in (<ref>) proves (<ref>).
If u is not nowhere-polynomial, then one applies (<ref>) to the approximation sequence u_n of nowhere-polynomial functions given by <ref>.
Since u_n → u in C^m([0,1];), the estimate passes to the limit.
When q = ∞ or r = ∞, it suffices to replace the Hölder estimate in (<ref>) involving a sum by the appropriate supremum over n.
When p = ∞, one writes
D^j u _L^∞(0,1)
= sup_n D^j u _L^∞(I_n),
where, similarly, using <ref> and (<ref>), there exists C > 0 (independent of u) such that,
D^j u _L^∞(I_n) ≤ C ℓ_n^-j(
ℓ_n^m - 1/r D^m u _L^r(I_n)
+
ℓ_n^k̅ - 1/qκ v _L^q(I_n)^1/κ)
= C D^m u _L^r(I_n)^θ v _L^q(I_n)^(1-θ)/κ.
Eventually, when k̅ = m-1, the assumption k_i ≤ j < m entail that k_1 = … = k_κ = j = m - 1.
Hence θ^* = 0 and p = q κ by (<ref>).
Thus (<ref>) reduces to D^m-1 u_L^p(0,1)≲ D^m-1 u … D^m-1 u _L^q(0,1)^1/κ = D^m-1 u_L^p(0,1).
Proof in the case θ∈ (θ^*,1].
When θ = 1, this simply corresponds to the embedding W^1,1(ℝ) ↪ L^∞() for compactly supported functions.
Now let θ∈ (θ^*,1) and define p^* ∈ [1,∞] by
1/p^* = θ^*/r + 1-θ^*/qκ.
Thanks to the critical case θ = θ^*, we know that
D^j u _L^p^*(0,1)≲ D^m u _L^r(0,1)^θ^* v _L^q(0,1)^1-θ^*/κ.
Define α∈ (0,1) by
α := θ-θ^*/1-θ^*.
We apply the usual Gagliardo–Nirenberg inequality of <ref> to obtain
D^j u _L^p(0,1)≲ D^m u _L^r(0,1)^α D^j u _L^p^*(0,1)^1-α.
Combining (<ref>) and (<ref>) proves (<ref>).
Thus, it only remains to check that the parameters satisfy (<ref>) so that we could indeed apply <ref>.
And, indeed, by (<ref>) and (<ref>),
α ( 1/r - m ) + (1-α) ( 1/p^* - j ) - ( 1/p - j )
= α( 1/r - m - 1/p^* + j )
+ 1/p^* - 1/p
= θ-θ^*/1-θ^*( 1/r - m - θ^*/r - 1-θ^*/qκ + j )
+ θ^*/r + 1-θ^*/qκ - 1/p
= θ( 1/r - 1/qκ - m-j/1-θ^*) + (θ^*/1-θ^* (m-j) + 1/qκ - 1/p) = 0,
since θ satisfies (<ref>).
In the last line we used that (m-j)/(1-θ^*) = m - k̅ and θ^* (m-j)/(1-θ^*) = j - k̅, by (<ref>).
§ THE CASE OF BOUNDED DOMAINS
In this paragraph, we consider the case u ∈ C^∞([0,1];), but not necessarily compactly supported in (0,1), by adding a low-order term to the estimates.
The proofs rely on the distinction between two cases, depending on whether u is mostly “low-frequency” or “high-frequency”.
§.§ A slight extension of the usual inequality
We prove <ref>.
Estimate (<ref>) is classical when k_0 = k (it follows by applying the usual inequality to D^k u, see e.g. <cit.>).
We build upon this case to give a short proof when 0 ≤ k_0 < k ≤ j.
Up to working with D^k_0 u, it is sufficient to treat the case k_0 = 0.
Case 0 < k < j.
Define α^* ∈ (0,1) and p_α^* ∈ [1,∞] by
α^* := k/j and 1/p_α^* := α^*/p + 1-α^*/s.
By <ref> (in the classical case k_0 = k), one has both
D^j u _L^p(0,1) ≤ C_1 D^m u _L^r(0,1)^θ D^k u _L^q(0,1)^1-θ
+ C_1 D^k u _L^p_α^*(0,1),
D^k u _L^p_α^*(0,1) ≤ C_2 D^j u _L^p(0,1)^α^* u _L^s(0,1)^1-α^* + C_2 u _L^s(0,1).
By Young's inequality for products, for ε > 0,
D^j u _L^p(0,1)^α^* u _L^s(0,1)^1-α^*≤α^* ε D^j u _L^p(0,1)
+ (1-α^*) ε^-α^*/1-α^* u _L^s(0,1).
Choosing ε < (C_1 C_2 α^*)^-1 and combining the three estimates proves (<ref>).
Low-frequency case when k = j.
Let u ∈ C^∞([0,1];).
Assume that
D^m u _L^r(0,1)≤ D^j u _L^p(0,1).
Define β^* ∈ (0,1) and p_β^* ∈ [1,∞] by
β^* := j/m and 1/p_β^* := β^*/r + 1-β^*/s.
By <ref> (in the classical case k_0 = k), one has both
D^j u _L^p(0,1) ≲ D^m u _L^r(0,1)^θ D^j u _L^q(0,1)^1-θ + D^j u _L^p_β^*(0,1),
D^j u _L^p_β^*(0,1) ≲ D^m u _L^r(0,1)^β^* u _L^s(0,1)^1-β^* + u _L^s(0,1).
Combining both estimates with assumption (<ref>) and using Young's inequality as above proves (<ref>).
High-frequency case when k=j.
Let u ∈ C^∞([0,1];).
Assume that
D^m u _L^r(0,1)≥ D^j u _L^p(0,1).
By <ref> (in the classical case k_0 = k),
D^j u _L^p(0,1)≲ D^m u _L^r(0,1)^θ D^j u _L^q(0,1)^1-θ + D^j u _L^1(0,1).
By Hölder's inequality and (<ref>),
D^j u _L^1(0,1)≤D^j u_L^p(0,1)^θ D^j u _L^q(0,1)^1-θ≤ D^m u _L^r(0,1)^θ D^j u _L^q(0,1)^1-θ.
Hence (<ref>) entails (<ref>).
§.§ Proof of the main result for bounded domains
We turn to the proof of <ref>.
We start with the following modification of <ref> (which removes the compact support assumption).
Let q,r ∈ [1,∞], κ∈^* and 0 ≤ k_1 ≤…≤ k_κ < m ∈.
Let k̅ := (k_1 + … + k_κ) / κ.
Let u ∈ C^∞([0,1];) be nowhere-polynomial such that
D^k_1 u … D^k_κ u_L^q(0,1)^1/κ≤ D^m u _L^r(0,1).
There exists a countable family (I_n)_n of non-empty open intervals I_n ⊂ (0,1) satisfying (<ref>), (<ref>) on [0,1] and (<ref>).
Let u ∈ C^∞([0,1];) be nowhere-polynomial and
v := D^k_1 u … D^k_κ u.
Let E as in (<ref>).
For x ∈ E and h > 0, we define
α_x(h) := |J_x(h)|^k̅ - 1/qκ v ^1/κ_L^q(J_x(h)),
β_x(h) := |J_x(h)|^m-1/rD^m u _L^r(J_x(h)),
where J_x(h) := (x-h,x+h) ∩ (0,1).
Since x ∈ (0,1), for h small enough J_x(h) = (x-h,x+h) and |J_x(h)| = 2h.
As h → 0, α_x(h) ∼ |v(x)|^1/κ 2^1/qκ h^k̅ and β_x(h) ≤ h^mD^m u_L^∞(0,1).
Thus, since m > k̅, β_x(h) < α_x(h) for h small enough.
Conversely, for h ≥max{ x, 1-x }, J_x(h) = (0,1) and α_x(h) = v ^1/κ_L^q(0,1) and β_x(h) = D^m u_L^r(0,1).
Thus, by (<ref>), α_x(h) ≤β_x(h).
Hence, for every x ∈ E, we can define r_x ∈ (0,∞) as in (<ref>), which satisfies α_x(r_x) = β_x(r_x).
By <ref>, there exists a countable collection of elements x_n ∈ E such that E ⊂∪_n I_n' and ∑_n 1_I_n'≤ 4 on , where I_n' = (x_n - r_x_n, x_n + r_x_n).
Let I_n := I_n' ∩ (0,1).
The intervals I_n satisfy (<ref>) by the definitions of r_x and of J_x(r_x).
Moreover, since 1 ≤∑_n 1_I_n' on E, writing
{ u ≠ 0 }∩ (0,1) = ( { u ≠ 0 }∩{ v = 0 }∩ (0,1) ) ∪ E
and using the fact that u is nowhere-polynomial, we obtain that 1 ≤∑_n 1_I_n almost everywhere on { u ≠ 0 }∩ (0,1), which proves (<ref>).
In the situation where D^m u is small compared with D^k_1 u … D^k_κ u, the construction of the family of intervals of the previous lemma fails.
We will rely on the following estimate instead.
Let p,q,r ∈ [1,∞], κ∈^* and
0 ≤ k_1 ≤…≤ k_κ≤ j < m ∈.
Let k̅ := (k_1 + … + k_κ) / κ.
Let 0 ≤ k_0 ≤ k_1 and s ∈ [1,∞].
For u ∈ C^∞([0,1];ℝ) such that
D^m u _L^r(0,1)≤ 2 D^k_1 u … D^k_κ u_L^q(0,1)^1/κ,
there holds,
D^j u _L^p(0,1)≲ D^k_0 u _L^s(0,1).
By monotony of the Lebesgue spaces on the bounded domain (0,1), it is sufficient to prove the result for p = ∞ and s = 1.
Moreover, since 0 ≤ k_0 ≤ k_1 ≤…≤ j < m, up to working with D^k_0 u instead of u, one can assume that k_0 = 0.
By the usual Gagliardo–Nirenberg inequality on bounded domains of <ref> (with k = k_0 = 0),
D^j u _L^∞(0,1)≲ D^m u ^α_L^r(0,1) u _L^1(0,1)^1-α
+ u _L^1(0,1)
with α := (j+1)/(m+1-1/r) ≥ j/m since j < m and r ≥ 1.
Moreover, for each 0 ≤ k_i ≤ j,
D^k_i u _L^∞(0,1)≲ D^j u _L^∞(0,1) + u _L^1(0,1),
which is immediate when k_i = j and follows from the usual Sobolev embedding <ref> when k_i < j.
Thus, by Hölder's inequality,
D^k_1 u … D^k_κ u _L^q(0,1)^1/κ ≤ D^k_1 u_L^∞(0,1)^1/κ… D^k_κ u_L^∞(0,1)^1/κ
≲ D^j u _L^∞(0,1) + u _L^1(0,1).
Using (<ref>) and substituting (<ref>) in (<ref>) proves that
D^j u _L^∞(0,1)≲ D^j u _L^∞(0,1)^α u _L^1(0,1)^1-α + u _L^1(0,1).
Since j <m, α < 1 and Young's weighted inequality for products entails that
D^j u _L^∞(0,1)≲ u _L^1(0,1),
which is indeed (<ref>) with p = ∞, s = 1 and k_0=0.
We are now ready to prove <ref>.
Reduction to the case D^m u large.
Let u ∈ C^∞([0,1];).
If u satisfies (<ref>), then estimate (<ref>) of <ref> implies (<ref>) since the low-order term by itself is sufficient to bound the left-hand side.
Hence, we can focus on the case where D^m u is large.
Proof in the critical case θ = θ^*.
Let u ∈ C^∞([0,1];) be a nowhere-polynomial function such that
D^m u _L^r(0,1)≥ 2 D^k_1 u … D^k_κ u_L^q(0,1)^1/κ.
In particular, assumption (<ref>) is satisfied, so <ref> applies.
Thus the same argument as in <ref> can be applied and proves that
D^j u _L^p(0,1)≲ D^m u _L^r(0,1)^θ D^k_1 u … D^k_κ u_L^q(0,1)^1/κ.
If u is not nowhere-polynomial, then one applies (<ref>) to the approximation sequence u_n of nowhere-polynomial functions given by <ref>.
Since u_n → u in C^m([0,1];) and u satisfies (<ref>), the u_n satisfy assumption (<ref>) for n large enough, and the estimate passes to the limit.
Proof in the case θ∈ (θ^*,1].
When θ = 1, this simply corresponds to the embedding W^1,1(0,1) ↪ L^∞(0,1).
Now let θ∈ (θ^*,1) and define p^* ∈ [1,∞] by (<ref>).
Thanks to the critical case θ = θ^*, we have the bound (<ref>) with v := D^k_1 u … D^k_κ u.
Define α∈ (0,1) by (<ref>).
Recalling the relation between the parameters verified in (<ref>), we apply the usual Gagliardo–Nirenberg inequality of <ref> (in its general setting proved in <ref>) to obtain
D^j u _L^p(0,1)≲ D^m u _L^r(0,1)^α D^j u _L^p^*(0,1)^1-α + D^k_0 u _L^s(0,1).
Combined with (<ref>), this concludes the proof of (<ref>).
§ AN APPLICATION TO CONTROL THEORY
Our initial motivation concerns obstructions to small-time local controllability for nonlinear finite-dimensional scalar-input control-affine systems.
It is known that such obstructions are linked with interpolation inequalities (see <cit.>).
As an example, given p ∈^*, consider the following system on ^4:
ẋ_1 = w
ẋ_2 = x_1
ẋ_3 = x_2
ẋ_4 = x_1^2 x_2^2 x_3^2 - x_1^p
with initial condition x(0) = 0 where w ∈ L^∞((0,T);) is the control to be chosen.
We are interested in the following local property.
We say that system (<ref>) is small-time locally controllable when, for every T,η > 0, there exists δ > 0 such that, for every x^* ∈^4 with |x^*| ≤δ, there exists w ∈ L^∞((0,T);) such that w_L^∞(0,T)≤η and the associated solution to (<ref>) with initial condition x(0) = 0 satisfies x(T) = x^*.
System (<ref>) is small-time locally controllable if and only if p ∈{ 3, 5, 7, 8, 9, 10, 11 }.
Let T > 0.
If w ∈ L^∞((0,T);) is a control such that x_1(T) = x_2(T) = x_3(T) = 0, then u := x_3 ∈ W^3,∞_0((0,T);) and
x_4(T) = ∫_0^T (u u' u”)^2 - ∫_0^T (u”)^p
so that the possibility to reach a target of the form (0,0,0,± 1) is linked with functional inequalities involving products of derivatives.
We study each case.
* Case p ≥ 12.
First,
u”_L^p(0,T)^p
≤ u”_L^∞(0,T)^p-12 u”_L^12(0,T)^12
Moreover, since u”' = x_3”' = w and u”(0) = x_1(0) = 0,
u”_L^∞(0,T)≤ T w _L^∞(0,T).
Thus, thanks to the interpolation inequality of <ref>,
u”_L^p(0,T)^p
≤ T^p-12 w _L^∞(0,T)^p-6∫_0^T (u u' u”)^2.
Substituting in (<ref>) proves that x_4(T) ≥ 0 when T^p-12 w _L^∞(0,T)^p-6≤ 1.
Thus, choosing 0 < η≪ 1 such that T^p-12η^p-6≤ 1 negates the definition of small-time local controllability.
* Case 7 ≤ p ≤ 11.
Let 0 ≠χ∈ C^∞_c((0,T);) and consider w(t) := εχ”'(t) for 0 < ε≪ 1.
As ε→ 0, w → 0 in L^∞((0,T);).
Moreover, by (<ref>),
x_4(T) = ε^6 ∫_0^T (χχ' χ”)^2 + O(ε^7).
So one can move in the direction (0,0,0,+1).
Conversely, set u(t) = ε^1+3aχ(tε^-a) or equivalently w(t) := εχ”'(t ε^-a) for a > 0 and 0 < ε≪ 1.
As ε→ 0, w → 0 in L^∞((0,T);).
By (<ref>),
x_4(T) = ε^7+12a∫_0^T (χχ' χ”)^2 - ε^p(1+a)+1∫_0^T (χ”)^p.
If ∫_0^T (χ”)^p > 0 and p(1+a) < 6+12a (which is possible when 7 ≤ p ≤ 11), one can move in the direction (0,0,0,-1).
From these elementary movements, it is classical to conclude that (<ref>) is small-time locally controllable.
* Case p = 1.
Then ẋ_2 + ẋ_4 = (x_1 x_2 x_3)^2 ≥ 0.
Hence, for every control, (x_2 + x_4)(T) ≥ 0 so targets with x_2^* + x_4^* < 0 are not reachable.
* Case p ∈{ 2, 4, 6 }.
The system does not satisfy Stefani's necessary condition for small-time local controllability (see <cit.>).
* Case p ∈{ 3, 5 }.
The system satisfies Hermes' sufficient condition for small-time local controllability (see <cit.>).
§ ACKNOWLEDGEMENTS
The author is deeply indebted to Karine Beauchard for numerous discussions and preliminary results on particular cases of <ref>, as well as for the underlying control-theoretic motivation.
The author also thanks Frédéric Bernicot and Cristina Benea for encouraging discussions concerning <ref>.
The author is supported by ANR-20-CE40-0009 and ANR-11-LABX-0020.
| http://arxiv.org/abs/2306.07795v2 | 20230613141707 | Efficient GPU Implementation of Affine Index Permutations on Arrays | [ "Mathis Bouverot-Dupuis", "Mary Sheeran" ] | cs.DC | [ "cs.DC" ] |
Optimal usage of the memory system is a key element of fast GPU algorithms. Unfortunately many common algorithms fail in this regard despite exhibiting great regularity in memory access patterns. In this paper we propose efficient kernels to permute the elements of an array. We handle a class of permutations known as Bit Matrix Multiply Complement (BMMC) permutations, for which we design kernels of speed comparable to that of a simple array copy. This is a first step towards implementing a set of array combinators based on these permutations.
Efficient GPU Implementation of Affine Index Permutations on Arrays
Mathis Bouverot-Dupuis
Mary Sheeran
July 31, 2023
====================================================================
§ INTRODUCTION
In GPU algorithms, memory access is often the performance bottleneck. Consider the following low-level GPU kernel that transforms an array of size 2^n :
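(The kernel listing itself does not survive in this text version. The following is a minimal CUDA sketch of what such a kernel might look like; the helper name reverse, the array names in and out, and the bounds check are illustrative choices of ours rather than code taken from the paper.)

// Sketch only: reverse the lowest n bits of index i.
__device__ unsigned int reverse(int n, unsigned int i)
{
    unsigned int r = 0;
    for (int b = 0; b < n; b++)
        r = (r << 1) | ((i >> b) & 1u);
    return r;
}

// One thread per element: coalesced write to out[i],
// uncoalesced (bit-reversed) read from in[reverse(n, i)].
__global__ void bit_reverse_copy(const float *in, float *out, int n)
{
    unsigned int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < (1u << n))
        out[i] = in[reverse(n, i)];
}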
Here, 2^n threads are launched that each read and write a single element. The read position is computed using a bit-reversal function (reverse in the sketch above) that reverses the index i viewed as a list of n bits, so that reverse(4, 7) transforms 7 = 00111 to 14 = 01110.
Behind this deceptively simple access pattern lies terrible performance; the read is typically an order of magnitude slower than the write on modern GPUs. Despite the great degree of regularity present in this memory access pattern, it yields uncoalesced memory accesses that force the reads from different threads to be serialized.
While bit reversal often has a hardware or low level implementation, many other transformations (such as those in sorting networks) exhibit a similar degree of regularity that is not fully exploited by GPUs. To this end, we use an alternative way of describing array indexing that allows many regular access patterns to be compiled to efficient GPU code. We view indices into an array of size 2^n as binary vectors of size n (vectors in F_2^n) and focus on affine transformations in F_2^n, the so-called Binary Matrix Multiply and Complement (BMMC) transformations <cit.>. We study the BMMC permutations because they enable reasoning about and implementation of the sets of combinators that we have earlier considered for both software and hardware design <cit.>.
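(To make the notion concrete, the sketch below shows one way such an affine index transformation can be evaluated: an n × n bit matrix A, stored as one bit mask per output row, and a complement vector c are applied to an index i over F_2. The row-mask representation and the name bmmc_apply are assumptions made for this illustration, not the data layout used by the kernels developed in this paper.)

// Sketch: compute j = A*i xor c over F_2, where row k of A is the bit mask A[k]
// and bit k of the output is the parity of (A[k] & i).
__host__ __device__ unsigned int bmmc_apply(const unsigned int *A,
                                            unsigned int c, int n, unsigned int i)
{
    unsigned int j = 0;
    for (int k = 0; k < n; k++) {
        unsigned int x = A[k] & i;                 // input bits that row k depends on
        unsigned int parity = 0;
        while (x) { parity ^= 1u; x &= x - 1u; }   // parity = row k · i over F_2
        j |= parity << k;
    }
    return j ^ c;                                  // the complement part of the affine map
}

For instance, bit reversal is the BMMC whose matrix is the identity with its rows reversed and whose complement vector is zero.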
The contributions of this paper are as follows :
* We show how to efficiently implement a specific class of array permutations where the mapping between indices is given by a BMMC[The code to generate and benchmark our CUDA kernels is publicly available at <https://github.com/MathisBD/bmmc-perms-gpu>.].
* We conduct an empirical evaluation of our kernels, both in the worst case and the average case.
* We show preliminary work on using BMMC permutations to compile high level array combinators.
More precisely, we show how to implement a subclass of BMMC permutations - namely tiled BMMC permutations - almost as fast as a simple array copy, and how to factorize any BMMC as the product of at most two tiled BMMCs.
§ BACKGROUND : GPU PROGRAMMING
§.§ A simple GPU model
This section presents the relevant parts of a simple GPU model that we will use to justify our optimizations. There are two key aspects to this model : the execution model and the memory hierarchy. For a more in depth discussion of a similar machine model we refer the reader to chapter 4 "Parallelism and Hardware Constraints" of Henriksen's thesis on Futhark <cit.>.
Regarding terminology, there are unfortunately two distinct sets of terms; we will be using the CUDA set, which differs from but also overlaps with the OpenCL set.
The execution model follows an SIMT (single instruction multiple threads) design; a large number of threads are launched concurrently, all executing the same code. Threads are uniquely identified by a thread identifier, which often dictates how they will behave. They are organized according to the following hierarchy :
* Kernels are the top-level scheduling unit : all threads in a kernel execute the same code. To obtain good performance it is necessary that a kernel have many threads (typically at least a 100 thousand), and in general there is no kernel-level synchronization possible between threads. A GPU program consists of one or several kernels that are run sequentially.
* Thread blocks are the unit at which thread synchronization - whether it be memory or execution synchronization - can happen. In kernels where the threads do not need synchronization (map-like kernels), the thread group is mostly irrelevant. Maximum thread block size is hardware dependent : typical sizes are 256 and 1024 threads for AMD and NVIDIA GPUs respectively.
* Warps form the basic unit for execution and scheduling. Threads inside a single warp execute instructions in lockstep, including memory access instructions, so that all memory transactions of a warp must have completed before it can advance to the next instruction. Warp size is hardware dependent, although 32 threads is typical.
Kernels usually launch many more threads than can be run concurrently. In this case, threads are launched one thread block at a time, with new thread blocks being swapped in as previous blocks finish execution. The order in which blocks are scheduled is by increasing thread identifier : this means that at any given time the threads currently in flight cover a contiguous subset of the thread identifiers.
The other side of the coin is the GPU memory hierarchy, which reflects the thread hierarchy :
* Global memory is large off-chip memory (typically on the order of several GiB). This is where the CPU copies data to and from, and is where the inputs and outputs to a kernel reside. If accessed properly global memory has a much larger bandwidth than usual CPU RAM.
* Shared memory is smaller and shared by all threads in a thread group. It usually functions as a cache used by thread blocks : however unlike traditional caches, the programmer is responsible for loading data in and out of shared memory.
* Registers are small bits of memory private to each thread. Although very fast, the number of registers per thread is limited. Kernels that require many registers per thread will cause fewer threads to run concurrently.
§.§ Optimizing memory access
In contrast to CPUs, GPU programmers must manually manage most of the memory hierarchy in order to get the best performance. Hardware managed caches, while also present on GPUs, are of less importance; most performance benefits come from mechanisms that allow certain memory transactions to be answered concurrently, known as coalesced and bank conflict free memory accesses.
Global memory is divided into contiguous segments - typically 32, 64 or 128 bytes - that form the basic unit for memory transactions (see Figure <ref>). The size of a segment is much larger than what can be accessed by a single thread in a given instruction, and in general the memory transactions needed for the individual threads in a warp are serialized. However modern GPUs ensure that the memory accesses from a given warp that fall in the same segment are coalesced into one transaction (the order of addresses within a segment does not matter). To obtain optimal memory performance the set of segments accessed by a warp must be as small as possible. Memory access patterns that fail to exploit coalescing can lead to over an order of magnitude decrease in bandwidth.
Shared memory is divided into banks (typically 32). Contrary to global memory segments, shared memory banks are not contiguous but rather interleaved at the 32-bit word granularity : see Figure <ref> for an illustration. Accesses by a warp that fall in the same memory bank must be serialized, but accesses to different banks can be answered concurrently. If threads within a warp access the memory banks in an imbalanced way, a bank conflict occurs, potentially causing a decrease in shared memory bandwidth of up to 32 times.
§.§ An example : matrix transposition
To help gain some intuition on GPU programming we walk through an example kernel. Let M be a two-dimensional matrix of size (N, N). We would like to write a kernel that performs matrix transposition on M : M[i, j] ← M[j, i] for all i and j, and M is stored in row major order in both the input and output.
If we assume that N is a power of two, we can write the following kernel (in CUDA-like pseudocode) :
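(The following is our own sketch of such a naive kernel, not the authors' exact code.)
__global__ void transpose_naive(const float *in, float *out, int N)
{
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    // read along a row of the input (coalesced), write along a column of the output (uncoalesced)
    out[col * N + row] = in[row * N + col];
}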
The variables blockIdx, blockDim and threadIdx are three-dimensional vectors that store, for each thread, the corresponding block index, block size and thread index within its block.
We invoke this kernel with a grid of (N/32)*(N/32) thread blocks with each thread block being of size 32*32. When using a two-dimensional indexing scheme for thread blocks (as is done here) the index of a thread within its block is given by threadIdx.y * blockDim.x + threadIdx.x, and warps correspond to bundles of 32 threads that have contiguous indices. In this case, each warp corresponds to a single value of threadIdx.y and 32 contiguous values of threadIdx.x. This means that the first memory access (reading the input) is fully coalesced, but the second memory access (writing the output) is not.
To ensure that both memory accesses are coalesced we can make use of shared memory. Each thread block will process a square tile of the input of size 32*32 (compare this to the naive kernel where each block processes a contiguous patch of the input, see figure <ref> for an illustration). When reading in the tile, each warp will process a single row of the tile, but when writing out the tile, each warp will process a single column of the tile :
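(Again a sketch of our own; padding the tile to 32 * 33 would additionally avoid shared memory bank conflicts, but we keep the plain 32 * 32 layout described in the text.)
__global__ void transpose_tiled(const float *in, float *out, int N)
{
    __shared__ float tile[32][32];
    int col = blockIdx.x * 32 + threadIdx.x;
    int row = blockIdx.y * 32 + threadIdx.y;
    // each warp reads one row of the tile: fully coalesced
    tile[threadIdx.y][threadIdx.x] = in[row * N + col];
    __syncthreads();
    int out_col = blockIdx.y * 32 + threadIdx.x;   // block coordinates are swapped
    int out_row = blockIdx.x * 32 + threadIdx.y;
    // each warp writes consecutive output elements but reads one column of the tile
    out[out_row * N + out_col] = tile[threadIdx.x][threadIdx.y];
}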
Each thread block uses an array of size 32 * 32 in shared memory. We have to manually synchronize the threads within a thread block (with a barrier such as __syncthreads()) so that the tile for this block is fully populated before we start writing out. The tile processed by a given block sits at the block's position in the input grid of tiles and is written to the transposed block position in the output.
We measured the performance of the above transpose kernels for matrices of size 2^15 * 2^15 on an NVIDIA RTX4090 GPU. The effective memory bandwidth achieved in each case is computed by comparing the running time to that of a simple copy kernel :
kernel            running time    effective bandwidth
copy              9.3 ms          100%
naive transpose   26.4 ms         35.2%
tiled transpose   12.2 ms         76.2%
The tiled version is over twice as fast as the naive version. Further optimizations can bring the running time even closer to the copy kernel : we refer the interested reader to the NVIDIA tutorial <cit.>.
§ KEY IDEAS
Viewing indices into arrays of size 2^n as binary vectors of length n allows us to restrict our attention to certain well-behaved transformations on indices. Arguably the simplest transformations according to this point of view are linear and affine mappings, i.e. mappings between source indices x and target indices y such that :
y = A x + c
where A is an (n, n) binary matrix, c is a binary vector of length n and all arithmetic is done modulo 2 (i.e. in F_2 the finite field with two elements).
If we expand this formula, each bit of y is given by :
y_i = ( ∑_0 ≤ j < n a_ij x_j ) + c_i
Many common transformations on indices can in fact be expressed in this way. For instance, transposing a matrix of size 4 * 4 can be expressed as follows :
y_i = x_(i+2 mod 4)
[ y_0; y_1; y_2; y_3 ] =
[ 0 0 1 0; 0 0 0 1; 1 0 0 0; 0 1 0 0 ][ x_0; x_1; x_2; x_3 ] +
[ 0; 0; 0; 0 ]
The above matrix has exactly one non-zero entry per row and per column. Invertible matrices of this form are called permutation matrices and simply permute the bits of the input index. In the above example, the index with bits [ x_0, x_1, x_2, x_3 ] is mapped to [ x_2, x_3, x_0, x_1 ], so that index 6 = 00110 is mapped to index 9 = 01001.
When the matrix A is a permutation matrix and the complement vector c is 0 we call (A, c) a Bit Permute (BP) transformation. Bit-reversal is thus a BP transformation :
y_i = x_(n-1-i)
[ y_0; y_1; y_2; y_3 ] =
[ 0 0 0 1; 0 0 1 0; 0 1 0 0; 1 0 0 0 ][ x_0; x_1; x_2; x_3 ] +
[ 0; 0; 0; 0 ]
The complement vector is also useful. Here is an example of using it to define a transformation that reverses an array of size 16 :
y_i = x_i + 1
[ y_0; y_1; y_2; y_3 ] =
[ 1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 1 ][ x_0; x_1; x_2; x_3 ] +
[ 1; 1; 1; 1 ]
In this case the matrix A is also a permutation matrix (corresponding to the identity permutation) but the complement vector c is non-zero : we call such a transformation a Bit Permute Complement (BPC).
In the most general case, A is any invertible matrix (over F_2) and c is any vector, giving a Bit Matrix Multiply Complement (BMMC) transformation. The invertibility requirement for A ensures that the transformation defines a permutation on arrays. While not all permutations on arrays can be expressed in such a way, the preceding examples should convince the reader that this class includes many of the common cases.
Note that permutations on an array whose size is not a power of 2 do not fall in the BMMC class. For instance transposing a matrix of size (7, 5) is not a BMMC permutation, whereas transposing a matrix of size (16, 8) is.
BMMCs were studied in the context of data-parallel programming in the 1990s by Cormen, Edelman and their co-authors. Both exploited the power of linear algebra, such as various matrix decompositions or Gaussian elimination, inspiring this work <cit.>. For example, Cormen proposed asymptotically optimal implementations for BMMC permutations on the disk I/O model <cit.>.
We aim to use BMMC permutations to provide an efficient implementation for high-level combinators that allow the programmer to describe data access patterns concisely. In this paper, we give one example of such a combinator, called parm. The expression parm mask f xs partitions the array xs of size 2^n into two equally sized subarrays according to the n-bit mask, applies f to each subarray and stitches the resulting arrays back together according to the mask. The element of xs at index i is assigned to the first or second subarray according to the dot product i * mask in F_2 : see Figure <ref> for examples.
In section <ref>, we show how to efficiently implement the combinator in terms of BMMC permutations. In fact, for any mask m we can find a matrix A such that :
parm m f = bmmc (A^-1, 0) ∘ two f ∘ bmmc (A, 0)
where two f applies f to the first and second halves (each of size 2^n-1) of the input array, and composition is from right to left. For instance, the BMMC corresponding to the mask 3 = 0011 is
[ y_0; y_1; y_2; y_3 ] =
[ 0 1 0 0; 0 0 1 0; 0 0 0 1; 1 1 0 0 ][ x_0; x_1; x_2; x_3 ] +
[ 0; 0; 0; 0 ]
We are currently working on implementing parm and related combinators in the Futhark programming language, a high-level functional language that can compile to efficient GPU code <cit.>. These combinators benefit greatly from fusion rules such as the following :
bmmc (A, c) ∘ bmmc (B, d) = bmmc (AB, Ad + c)
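A host-side sketch of this fusion rule is shown below, with each matrix stored as one n-bit mask per row; the function name and the packed-row representation are our own assumptions.
#include <utility>
#include <vector>

// Fuse bmmc (A, c) ∘ bmmc (B, d) into the single BMMC (A*B, A*d + c) over F_2.
std::pair<std::vector<unsigned int>, unsigned int>
fuse(const std::vector<unsigned int> &A, unsigned int c,
     const std::vector<unsigned int> &B, unsigned int d, int n)
{
    std::vector<unsigned int> AB(n, 0);
    unsigned int Ad = 0;
    for (int i = 0; i < n; ++i) {
        for (int j = 0; j < n; ++j)
            if ((A[i] >> j) & 1u)
                AB[i] ^= B[j];               // row i of A*B: XOR of the rows of B selected by row i of A
        Ad |= (unsigned int)(__builtin_popcount(A[i] & d) & 1) << i;   // bit i of A*d is a parity
    }
    return { AB, Ad ^ c };                   // complement vector A*d + c
}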
Some challenges arise when compiling uses of the bmmc combinator in nested parallel code. In fact for any BMMC (A, c) of size n and any mask m we can find another BMMC (A', c') of size n+1 such that :
parm m (bmmc (A, c)) = bmmc (A', c')
The main contribution of this paper is to give an efficient implementation for a class of BMMC permutations that we call tiled BMMC permutations. These include all BPC permutations, such as transpose and bit reverse. We show how to generalize the matrix transposition kernel to tiled BMMCs and compare the impact of different optimizations in sections <ref> and <ref>. The final kernels we obtain are fully coalesced and bank-conflict free, reaching on average 90% of the maximum effective memory bandwidth.
Finally, we show in section <ref> how to use linear algebra techniques to decompose any BMMC A as a product A = T_1 T_2 of two tiled BMMCs. The permutation defined by A can then be efficiently realized by first applying the kernel for T_2 followed by the kernel for T_1.
It should be noted that we assume an offline setting, i.e. that the BMMC matrix and complement vector are known in advance (before generating the CUDA code for the kernel). This is in accordance with our aim to implement the techniques described in this paper in the Futhark compiler.
§ IMPLEMENTING BPC PERMUTATIONS
In this section, we explain how to generalize the transpose kernel from section <ref> to arbitrary BPC permutations. We start by introducing simple tiling to enable coalesced memory access before gradually adding further optimizations. As a running example the reader can inspect the different kernels generated for the bit-reverse permutation in the appendix.
§.§ Ensuring coalesced accesses
The first step is to define the notion of tile for an arbitrary BPC (p, c), where p is a permutation on {0, …, n-1} and c is a complement vector. We start by partitioning the bits of input indices as follows :
* The tile column bits are the n_tile lower bits.
* The tile row bits are the n_tile bits such that
p(bit_index) < n_tile.
* The tile overlap bits are the n_over bits that are both tile column and tile row bits.
* The thread block bits are the n_TB remaining bits.
See Figure <ref> for an illustration. In our implementation, we choose n_tile to be equal to the logarithm of the warp size : n_tile = 5.
We also define some notation for dealing with indices :
* stitch_col(c, b, r) forms an index by using c for the n_tile tile column bits, b for the n_TB thread block bits and r for the n_tile - n_over remaining tile row bits.
* stitch_row(c, b, r) forms an index by using r for the n_tile tile row bits, b for the n_TB thread block bits and c for the n_tile - n_over remaining tile column bits.
* stitch_tile_col(c, r) forms an index as in stitch_col, but deletes the thread block bits.
* stitch_tile_row(c, r) forms an index as in stitch_row, but deletes the thread block bits.
We show some examples of using these functions for the cyclic shift permutation of Figure <ref>. This permutation shifts the bits of the input index by one position towards the LSB and moves the LSB to the MSB, so that :
(011001, 5) = 011100
In these examples n = 10 and n_tile = 5 (thus n_over = 4 and n_TB = 4), and we follow the usual convention for binary literals of writing the LSB to the right and MSB to the left :
stitch_col(11010, 1100, 1) = 1100111010
stitch_tile_col(11010, 1) = 111010
stitch_row(0, 1011, 00110) = 1011001100
stitch_tile_row(0, 00110) = 001100
Note that when n_over = 0, stitch_col and stitch_row are identical, and stitch_tile_col and stitch_tile_row are also identical. We refer the reader to the appendix for some intuition on how these stitching functions are translated to CUDA instructions.
Fixing the thread block bits and choosing every possible combination of tile bits defines a single tile : the input array is thus covered by 2^n_TB disjoint tiles. As in the transposition case, we launch one thread block per tile, each of size 2^n_tile * 2^n_tile - n_over.
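To make the structure concrete, here is a sketch of such a tiled kernel specialized to the bit-reversal permutation with n = 15, so that n_tile = 5, n_over = 0, the tile row bits are bits 10 to 14 and the thread block bits are bits 5 to 9; the helper rev5 and the explicit shifts are our assumptions rather than generated code.
__device__ unsigned int rev5(unsigned int v)
{
    unsigned int r = 0;
    for (int b = 0; b < 5; ++b) r |= ((v >> b) & 1u) << (4 - b);
    return r;
}

// launched with 2^5 = 32 blocks of 32 * 32 threads, one block per tile
__global__ void bit_reverse_tiled(const int *in, int *out)
{
    __shared__ int tile[32][32];
    unsigned int c = threadIdx.x;   // tile column bits (bits 0..4)
    unsigned int r = threadIdx.y;   // tile row bits (bits 10..14)
    unsigned int B = blockIdx.x;    // thread block bits (bits 5..9)
    // read: consecutive in c within a warp, hence fully coalesced
    tile[r][c] = in[(r << 10) | (B << 5) | c];
    __syncthreads();
    // write: consecutive in threadIdx.x within a warp, hence fully coalesced,
    // but the shared memory read walks a column of the tile (see the next subsection)
    out[(threadIdx.y << 10) | (rev5(B) << 5) | threadIdx.x]
        = tile[rev5(threadIdx.x)][rev5(threadIdx.y)];
}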
This kernel uses only coalesced memory accesses. We can easily see that when reading the input tile each warp reads 2^n_tile consecutive elements. This is less clear when writing the output tile. Notice that applying p to the bits of the input index moves the tile row bits to the n_tile lower bits of the output index : each warp thus writes 2^n_tile consecutive elements.
§.§ Avoiding bank conflicts
The previous kernel solved the coalescing problem, but unfortunately it introduced shared memory bank conflicts, specifically in the second access to the tile in shared memory.
At this point there are two natural ways of viewing the two-dimensional tile in shared memory : we could view it as a 2^n_tile * 2^n_tile - n_over matrix, or as a 2^n_tile-n_over * 2^n_tile matrix. We choose the latter option as it yields an easier analysis of bank conflicts. Note that the tile is in general not square : it can have fewer rows than columns.
We can now analyse both accesses to the tile using this new lens (see Figure <ref> for an illustration) :
* In the first access each warp writes a single row in the tile. Since we chose 2^n_tile=32 this is always bank conflict-free.
* In the second access each warp reads 2^n_over distinct columns from the tile : in particular when n_over = 0 each warp reads a single column. Note that the accessed columns are not necessarily evenly spaced.
Accessing a matrix column-wise in shared memory results in a bank conflict. In this case, the second access is serialized into 2^n_tile - n_over conflict-free reads, one for each row.
To fix this conflict we change slightly the way the tile matrix is stored in shared memory : we shift each row i by a given amount shift_i to the right. Elements that overflow the end of the row wrap around to the start of the row. More formally, the element at row i and column j is stored at index :
i * 2^n_tile + ((shift_i + j) mod 2^n_tile)
We choose the shift for each row depending on the permutation p, but note that no matter how we choose the shifts the first access to the tile will always remain conflict-free. We make the following choice :
shift_i = (i, 0)
For instance when n_over = 0 we have = i. We can now analyse the second access again. Each thread accesses the shared memory tile at position (i, j) where :
i = (, ) / 2^n_tile
= / 2^n_over
j = (, ) 2^n_tile
= (, 2^n_over)
This element is in the following bank (modulo 2^n_tile) :
(, )
= + j
= (i, 0) + j
= (/ 2^n_over, 0) + (, 2^n_over)
= (, 0) + (/ 2^n_over, 2^n_over)
The final call to is a bit permutation of . This means that in the second access each warp accesses every bank once. The resulting kernel is fully conflict-free.
§.§ Amortizing index computations
The running time of the transpose kernel shown in the introduction is almost completely spent on memory operations. This is not the case for more complex permutations (for instance when n_over > 0 or when the tile row bits are not contiguous); the scalar instructions performed by each thread to stitch the bits of input and output indices account for a non-negligible portion of the running time. We can reduce this overhead by having each thread process 2^n_iter input indices instead of only one (typically n_iter = 3).
We modify the partition of input index bits by splitting the thread block bits into two parts : the lower n_iter bits become the iteration bits and the upper bits become the new thread block bits (see Figure <ref> for an illustration). The stitching functions are modified accordingly.
Each thread block processes 2^n_iter tiles : it reads the tiles sequentially, synchronizes the threads, and writes the tiles sequentially. For instance the read step becomes :
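Continuing the bit-reversal sketch from the previous section (n = 15, n_tile = 5, n_over = 0) with n_iter = 3 and reusing the rev5 helper defined there, the kernel could take the following shape; the placement of the iteration bits is again our own assumption.
// launched with 2^2 = 4 blocks of 32 * 32 threads; each block handles 2^3 = 8 tiles
__global__ void bit_reverse_tiled_iter(const int *in, int *out)
{
    __shared__ int tile[8][32][32];
    unsigned int c = threadIdx.x, r = threadIdx.y, B = blockIdx.x;

    // read step: everything that does not depend on the iteration bits is computed once
    unsigned int read_base = (r << 10) | (B << 8) | c;
    for (unsigned int it = 0; it < 8; ++it)
        tile[it][r][c] = in[read_base | (it << 5)];   // only the iteration bits vary in the loop
    __syncthreads();

    // write step: again only the iteration-dependent part of the index stays inside the loop
    unsigned int rc = rev5(c), rr = rev5(r);
    unsigned int write_base = (r << 10) | c;
    for (unsigned int it = 0; it < 8; ++it)
        out[write_base | (rev5((B << 3) | it) << 5)] = tile[it][rc][rr];
}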
The advantage of writing the kernel this way is that most index computations can be pulled out of the loop. Only the parts that depend on the iteration bits need remain in the loop (see the appendix for an example). The average number of scalar instructions per input element is thus greatly reduced.
§ IMPLEMENTING BMMC PERMUTATIONS
§.§ Tiled BMMCs
It is straightforward to extend the kernels developed in the previous section to a class of BMMCs slightly larger than BPCs, namely tiled BMMCs. A tiled BMMC (A, c) is a BMMC corresponding to a permutation that can be implemented using the tiled kernel outlined above. The minimal requirements on the matrix A are that we can find a set of columns i_1, …, i_n_tile such that :
* The sub-matrix formed by the first n_tile rows and the columns i_1, …, i_n_tile is invertible.
* The sub-matrix formed by the last n - n_tile rows and the columns i_1, …, i_n_tile is equal to 0.
See Figure <ref> for an illustration. Note that a BPC is always a tiled BMMC; in this case the columns i_1, …, i_n_tile are exactly the indices of the tile row bits.
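As a quick illustration of this definition, the following host-side check (a sketch of ours, with the rows of A packed as n-bit masks) tests whether a given choice of columns witnesses the tiled property.
#include <utility>
#include <vector>

bool is_tiled(const std::vector<unsigned int> &A_rows, const std::vector<int> &cols,
              int n, int n_tile)
{
    // the last n - n_tile rows must be zero on the selected columns
    for (int r = n_tile; r < n; ++r)
        for (int c : cols)
            if ((A_rows[r] >> c) & 1u) return false;
    // pack the top n_tile x n_tile sub-matrix, one n_tile-bit mask per row
    std::vector<unsigned int> sub(n_tile, 0);
    for (int r = 0; r < n_tile; ++r)
        for (int j = 0; j < n_tile; ++j)
            sub[r] |= ((A_rows[r] >> cols[j]) & 1u) << j;
    // Gaussian elimination over F_2 to test invertibility
    for (int j = 0; j < n_tile; ++j) {
        int pivot = -1;
        for (int r = j; r < n_tile; ++r)
            if ((sub[r] >> j) & 1u) { pivot = r; break; }
        if (pivot < 0) return false;          // singular sub-matrix
        std::swap(sub[j], sub[pivot]);
        for (int r = 0; r < n_tile; ++r)
            if (r != j && ((sub[r] >> j) & 1u)) sub[r] ^= sub[j];
    }
    return true;
}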
When implementing the tiled kernel, the bits of each input index are now partitioned in the following way :
* The tile column bits are the n_tile lower bits.
* The tile row bits are the n_tile bits i_1, …, i_n_tile.
The tile overlap bits and thread block bits are defined as previously. The only modification we have to make to the kernel is to change the calculation of the output address
to use a matrix multiplication instead :
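One way to realize this matrix-vector product on the device is sketched below; in the generated kernels the matrix is known at code-generation time, so placing it in constant memory and looping over its rows is our simplifying assumption.
__constant__ unsigned int A_rows[32];   // rows of the BMMC matrix, packed as n-bit masks
__constant__ unsigned int comp_vec;     // complement vector c

// y = A*x + c over F_2; (__popc(row & x) & 1) is the parity of the dot product row . x
__device__ unsigned int bmmc_apply(unsigned int x, int n)
{
    unsigned int y = 0;
    for (int i = 0; i < n; ++i)
        y |= (unsigned int)(__popc(A_rows[i] & x) & 1) << i;
    return y ^ comp_vec;
}

// inside the tiled kernel the write address then becomes bmmc_apply(in_index, n)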
Bank conflicts can now be eliminated in the same way as for BPC permutations. However, the next optimization (amortizing the cost of index computations) cannot be applied to tiled BMMC permutations, as it relies on the sparseness of BPC matrices. We show the exact performance impact in section <ref>.
§.§ Factorizing BMMCs into tiled BMMCs
The main use case for tiled BMMCs is to provide an implementation for arbitrary BMMC permutations. Using the Lower-Upper (LU) decomposition we show that any BMMC can be factorized into a product of at most two tiled BMMCs.
Let (A, c) be a BMMC. There exist matrices U, L and P such that :
A = U L P
where U is an upper triangular matrix, L is a lower triangular matrix and P is a permutation matrix.
Observe that U is the matrix of a tiled BMMC (using the first n_tile columns) and P is the matrix of a BPC, but L has no such property. We can factorize A in a slightly different way, using the matrix R corresponding to bit-reverse (see section <ref>) such that R_ij = 1 exactly when i + j = n-1 (thus R^2 is the identity matrix) :
A = (U R) (R L P)
Both factors in this new decomposition are matrices of tiled BMMCs (see Figure <ref>) :
* U R using the columns n-n_tile, …, n-2, n-1.
* R L P using the columns p(n-n_tile), …, p(n-2), p(n-1).
The permutation defined by (A, c) can thus be realized by first permuting using (RLP, 0) and then using (UR, c).
§ RESULTS
We implemented the kernels outlined above in CUDA : we use Haskell to generate a CUDA kernel for each permutation. We refer the reader to the appendix for an example of the naive and various optimized BPC permutation kernels.
We used CUDA events to measure the running time of each kernel on a NVIDIA RTX4090 GPU and averaged each measurement across 1000 runs. Unless otherwise noted, all arrays contain 32-bit elements. We report the impact of different optimizations in Figure <ref> :
* The tile optimization refers to the tiling optimization described in section <ref>.
* The banks optimization refers to the shared memory bank conflict optimization described in section <ref>.
* The iters optimization refers to the iteration optimization described in section <ref>. As explained at the end of section <ref> this is only applicable to BPC permutations, not to tiled or arbitrary BMMC permutations.
The tile optimization yields the largest speedup. For the other two optimizations, we report only the additional speedup when they are added to tile.
Our optimized BPC permutation (tile + banks + iters) is about as fast as a simple copy, whereas our optimized BMMC permutation (tile + banks) is about half as fast as a simple copy. This is because a BMMC permutation is implemented as two tiled kernels and thus does twice the work of a BPC permutation which is implemented as a single tiled kernel. The cost of the binary matrix-vector product performed by each thread in the tiled BMMC kernel accounts for only a few percent of the total running time.
The first column (corresponding to the naive kernels) deserves some explanation. On average, a BPC permutation is much faster than the worst case (corresponding to bit-reversal). This is because a random BPC permutation is likely to have n_over > 0, which means that with the naive kernel each warp writes to only 16 (when n_over = 1) or even 8 (when n_over = 2) global memory segments instead of 32 in the worst case : the naive kernel is already somewhat coalesced. On the contrary, when choosing a random BMMC permutation and factorizing it as in section <ref>, the resulting tiled BMMC permutations almost always have n_over equal to 0, meaning that with the naive kernel each warp writes to 32 global memory segments.
Figure <ref> shows that our kernels are close to optimal in terms of memory bandwidth : the optimized BPC and BMMC permutations reach respectively 92% and 86% of the maximum effective bandwidth. Note that memory bandwidth is a measure of how well a memory-bound GPU program uses the memory system and does not directly reflect the program's running time, as the latter also depends on how much data is transferred to and from memory. Recall that the BMMC implementation does twice as much memory transfers as the BPC implementation, which explains why the last two columns are similar although BMMC permutations are twice as slow.
Figure <ref> shows the speedup we obtain using all optimizations compared to the naive version for different array sizes. Compared to Figure <ref>, for arrays of size smaller than 2^24 we get a lower speedup in the random BMMC and bit-reverse case but a higher speedup in the random BPC case (in all cases the speedup is greater than 1). We do not report data for arrays of size smaller than 2^20 :
* For arrays of size smaller than 2^20, the running time of permutation kernels - both naive and optimized - is only a couple microseconds, which is very close to the GPU clock precision (half a microsecond according to the CUDA Runtime API <cit.>, section 6.5 "Event Management").
* GPUs need a very large amount of threads to be saturated, i.e. to be able to hide global memory latency by switching threads. This is not anymore the case when permuting a single small array : for instance with the optimized BPC permutation kernel and an array of size 2^18 we would launch 2^15 = 32768 threads, which is not enough to saturate the RTX4090 GPU used for benchmarking.
Our current approach for implementing BMMC permutations does have several limitations. We elaborate on the main ones here. Array sizes are restricted to powers of 2 : we have not yet found a satisfactory way to extend our results to arrays of arbitrary size. We also work in an offline setting, i.e. we assume that the BMMC matrix and complement vector are known at compile time. Extending our approach to work in an online setting would raise some difficulties :
* The decomposition of a BMMC matrix into a product of tiled BMMCs can be a costly operation for large arrays, and is poorly suited to GPUs.
* Implementing the bit-stitching functions used in section <ref> in an online setting could lead to slowdown due to the additional scalar instructions we would have to generate. While this might not be an issue for BPC permutations since we can use the optimization outlined in section <ref> to alleviate the cost of scalar instructions, this would certainly result in at least a minor slowdown for arbitrary tiled BMMC permutations.
All the measurements in this article were performed on a NVIDIA RTX4090 GPU. We could not reproduce them on an AMD GPU : we ran into some unexpected slowdowns related to global memory. Despite being fully coalesced, the running time of our tiled permutation kernels depended heavily on the given BPC or BMMC matrix. This can be reproduced even with a kernel as simple as a tiled transpose : see Figure <ref> for an example using the Futhark transpose kernel. This phenomenon only occurs when array sizes are powers of two, and as such is not an issue for most Futhark programs, but is an issue for the algorithms in this paper.
There seem to be differences in the memory architecture between AMD and NVIDIA. Our guess is that they have a different address mapping scheme and that our kernels trigger global memory bank conflicts on AMD cards, however we have not been able to prove or disprove this intuition and are open to suggestions. We refer the reader to <cit.> for a discussion on GPU address mapping schemes that coincidentally makes use of BMMCs.
§ APPLICATION TO THE PARM COMBINATOR
§.§ Using the parm combinator
As a use case of BMMC permutations we describe how they can be used to implement a high-level combinator called parm. This is not the only useful combinator that is related to BMMCs : other examples are outside the scope of this paper, but we do plan on studying these combinators further in future work. We refer the reader to <cit.> for another paper using similar combinators.
Let us recall how the parm combinator works : it takes as input an array xs of size 2^n, an n-bit binary mask and a function f that maps arrays of size 2^n-1 to arrays of size 2^n-1. The input array xs is partitioned into two sub-arrays xs0 and xs1 depending on the mask as follows (see Figures <ref> and <ref>) : the element at index i goes to xs0 when i * mask = 0 and to xs1 when i * mask = 1, where i is the index of the given element in xs and * denotes the dot product in F_2. We then apply f to each sub-array and stitch them back together in exactly the same way.
We now show how to use parm to implement a simple sorting network, inspired by Batcher's bitonic sorting network <cit.> and the balanced periodic merger <cit.>. There have been previous efforts to generate efficient GPU code for such networks : see <cit.> for an approach that focuses on small networks operating on arrays that fit in shared memory.
The network we study in this example is a variant of merge sort: the elements at even and odd indices are sorted separately before being merged. The following function sorts its input of size 2^n :
The merge function takes as input an array in which the two sub-arrays formed by the elements at even and odd indices are sorted and produces a sorted output. We choose to use a balanced periodic merger : Figure <ref> illustrates the merging network. Data flows from left to right along the 16 horizontal lines. The vertical lines operate on two inputs and place the minimum on the top and the maximum on the bottom. Here is the corresponding pseudocode :
The function in turn builds a single V-shaped column with 2^n inputs in the merging network. This can be accomplished by simply interleaving two half-size V-columns using a mask equal to 3 = 011 (see also Figure <ref>).
The parm combinator shines here because it allows the programmer to specify the sorting network in a declarative style, leaving many opportunities for the compiler to optimize the program (in this case using BMMCs to permute arrays and obtain coalesced memory accesses).
§.§ Compiling parm using BMMC permutations
While the above example shows the expressiveness of parm, a straightforward implementation - in which the function f we apply to each sub-array reads its inputs from and writes its outputs directly to the scattered positions of its sub-array in the enclosing array - is not suited to GPUs. To gain some intuition on why, consider the case where f makes only fully coalesced reads and writes. For most masks (think for instance of the mask 1, which separates even and odd indices) the resulting function will not make fully coalesced accesses, and in fact will require twice as many memory transactions as a coalesced version would. Now take into account that parm is often nested many times (as in the sorting network example) and we lose all coalescing.
Our solution for compiling parm while retaining coalescing is to first permute the array such that the two subarrays xs0 and xs1 form the first and second half of the resulting array, apply f to each half and then permute the array back. When applying f, the two sub-arrays are contiguous in memory : any coalescing behaviour of f will therefore be retained. Permuting the array twice (before and after applying f) of course adds some overhead : however these permutations are in fact BMMC permutations, allowing for an efficient implementation.
We now explain how to construct a matrix A such that :
parm m f = bmmc (A^-1, 0) ∘ two f ∘ bmmc (A, 0)
Permuting using the BMMC (A, 0) should put xs0 into the first half and xs1 into the second half, while preserving the order of elements within each sub-array. More formally, an element at index x in xs should have the index y in the result, where the lower bits y_0..n-2 give the new position of the element within its sub-array and the top bit y_n-1 is equal to 0 if x is in the first sub-array and 1 otherwise (see Figure <ref> for an example).
Notice that y_n-1 is simply equal to x * m. Finding an expression for the lower bits is slightly harder. It turns out that it is sufficient to remove the bit at index ℓ from x, where ℓ is the index of the least significant set bit of the mask. The reader is invited to check this fact in Figure <ref>. This yields the following relation between x and y from which it is straightforward to construct the matrix A (a similar formula can be derived for A^-1) :
y_i = x_i          if i < ℓ
y_i = x_(i+1)      if ℓ ≤ i < n-1
y_(n-1) = x * m
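A host-side sketch of this construction (the function name and the packed-row representation are ours):
#include <vector>

// Build the n x n BMMC matrix realizing the parm permutation for a given mask.
// Bit j of row i is set when y_i depends on x_j.
std::vector<unsigned int> parm_matrix(unsigned int mask, int n)
{
    int lsb = 0;
    while (((mask >> lsb) & 1u) == 0) ++lsb;        // index of the least significant set bit
    std::vector<unsigned int> rows(n);
    for (int i = 0; i < n - 1; ++i)
        rows[i] = 1u << (i < lsb ? i : i + 1);      // y_i = x_i below the lsb, x_(i+1) above it
    rows[n - 1] = mask;                             // y_(n-1) = x * mask (dot product in F_2)
    return rows;
}

// For mask = 3 and n = 4 this reproduces the example matrix given earlier for the mask 3.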
It should be noted that parm and bmmc give rise to a rich set of rewrite rules that allow us to reduce the number of BMMC permutations performed in most cases, especially when nesting applications of parm.
§ RELATED WORK
BMMCs were first studied by Cormen in the setting of the parallel disk I/O model introduced by Vitter and Schriver <cit.>. This model consists in a processor (or multiprocessor) connected to several storage devices which can be accessed in parallel, and places an emphasis on the memory system rather than on the processor. Performance in this model is measured in terms of I/O accesses. Cormen showed how to perform BMMC and BPC permutations for large on-disk arrays and proved optimality results for his implementations in terms of number of memory accesses <cit.>. This inspired our current work, which tackles the same problem but in the context of GPUs, for which memory access performance is just as important as in the context of parallel disk I/O.
There have been previous attempts at performing permutations efficiently on GPUs : Kasagi et al. <cit.> show how to implement arbitrary permutations (also in an offline setting) in a fully coalesced and bank-conflict free manner, and additionally provide specialized kernels for specific permutations such as transpose or bit-reverse. Their method has similar theoretical guarantees in terms of bandwidth as ours, but they use 5 kernels per permutation whereas we use only one and two kernels for BPC and BMMC permutations respectively. The result is that while our permutations reach roughly 50% (for BMMCs) and 100% (for BPCs) of the speed of a copy, their fastest algorithm is 5 times slower than a copy. Kasagi's method is additionally limited by shared memory size : for an input array of N elements, it requires that √(N) elements can fit in shared memory, which is typically only a few kilobytes on modern GPUs. Their method can thus only handle input arrays of up to roughly 2^24 32-bit elements.
More recently, BMMCs have been used to design GPU address mapping schemes <cit.>. To the programmer, GPU global memory is presented as a single contiguous block of memory. The translation between a memory address and actual hardware parameters (involving a bank index, channel index and so on) is handled by a so-called address mapping scheme. Liu et al. represent this mapping using a BMMC mapping : in essence, they implement a fixed BMMC permutation directly in GPU hardware.
§ CONCLUSIONS
We have shown an efficient CUDA implementation of BMMC permutations, a class that includes many interesting permutations. The benchmark results are promising, especially for BPC permutations which are basically as fast as they can get, reaching upwards of 90% of the maximum effective bandwidth.
We also explained how inserting BMMC permutations in GPU code at the right places can allow for fully coalesced memory accesses. In some sense, this generalizes an optimization present in the Futhark compiler in which multidimensional arrays are automatically transposed in memory to create opportunities for coalescing when possible (<cit.> section 5.2 "Optimizing Locality of Reference"). In both cases this does create a tradeoff between the speedup from coalescing and the slowdown from executing additional permutations. Our aim moving forward is to implement parm and several related combinators in the Futhark compiler to measure the net gains of this tradeoff. These combinators come with a rich fusion algebra which should permit further optimizations.
§ ACKNOWLEDGEMENT
This research was funded by a Swedish Research Council grant "An algebra of array combinators and its applications", proj. no. 2021-05491. We would also like to thank Troels Henriksen for providing the data for Figure <ref>.
§ APPENDIX : GENERATED CUDA KERNELS
This appendix shows the complete CUDA kernels generated for the bit-reverse permutation. For all kernels in this section, the parameters are as follows :
n = 15 n_tile = 5 n_over = 0
The number of scalar instructions in these kernels might be higher than expected : we deliberately do not use CUDA intrinsic functions such as __brev to speed up index computations as this approach would not work for arbitrary bit permutations. We do however perform a simple optimization to reduce the instruction count. When setting bits i_0 < … < i_k in a destination variable using bits j_0 < … < j_k respectively in an input variable, if the offsets i_1 - i_0, …, i_k - i_k-1 are equal to the offsets j_1 - j_0, …, j_k - j_k-1, we set all the bits in a single operation (corresponding to a single line in the kernels below). We measured the impact of this optimization and found that on average it reduced by 50% the number of scalar instructions that were generated.
Here is the naive kernel with no tiling :
Here is the tiled kernel :
Here is the tiled kernel, bank-conflict free :
Here is the tiled kernel, using iterations (but susceptible to bank conflicts). We choose n_iter = 3 for this example :
|
http://arxiv.org/abs/2306.10934v2
|
20230619134737
|
Is the diagonal case a general picture for Loop Quantum Cosmology?
|
[
"Matteo Bruno",
"Giovanni Montani"
] |
gr-qc
|
[
"gr-qc",
"quant-ph"
] |
[email protected]
Physics Department, Sapienza University of Rome, P.le A. Moro 5, 00185 Roma, Italy
[email protected]
ENEA, C.R. Frascati (Rome), Italy Via E. Fermi 45, 00044 Frascati (Roma), Italy
Physics Department, Sapienza University of Rome, P.le A. Moro 5, 00185 Roma, Italy
The correct implementation of the Loop Quantum Gravity to the early homogeneous Universe has been the subject of a long debate in the literature because the SU(2) symmetry cannot be properly retained. The role of this symmetry is expressed by the Gauss constraint. Here, a non-vanishing Gauss constraint is found. However, we show that using suitable variables, it can be recast into three Abelian constraints, justifying the absence of such a symmetry in Loop Quantum Cosmology.
Is the diagonal case a general picture for Loop Quantum Cosmology?
Giovanni Montani
July 31, 2023
==================================================================
§ INTRODUCTION
The most promising proposal to quantize the gravitational field is, till now, the so-called Loop Quantum Gravity <cit.>. This claim is based on the idea that such a proposal, starting from a classical formulation of General Relativity, which is (on shell) equivalent to the Einstein-Hilbert formulation <cit.>, arrives, via the introduction of the SU(2) symmetry, to describe geometrical operators, like areas and volumes of space, as associated to discrete spectrum <cit.>. As a consequence, the implementation of Loop Quantum Gravity to the cosmological setting led to a Big-Bounce for the primordial Universe <cit.>, due to an anomaly of the classical limit.
The reliability of the so-called Loop Quantum Cosmology has been debated over the years <cit.>, because the symmetry restriction induced by the homogeneity constraint prevents the preservation of the SU(2) symmetry in the classical and quantum formulation. Actually, the implementation itself of the dynamics for homogeneous models is a step forward with respect to the general formulation, for which a reliable implementation of the regularized scalar constraint <cit.> is not viable <cit.>.
An interesting attempt to restore also in cosmology a gauge SU(2) symmetry, together with the associated Gauss constraint, has been formulated in Refs. <cit.>. There, a kinematical Hilbert space has been constructed by emulating the basic formulation in Loop Quantum Gravity. The idea is that the homogeneity of the space still allows for a local time-dependent Lorentz rotation of the triad vectors, so restoring a non-identically vanishing Gauss constraint as for the original formulation of the Ashtekar School <cit.>.
The present analysis starts from the same theoretical set-up of a local time-dependent gauge transformation of the triad, but, investigating in detail the relation of the Ashtekar-Barbero-Immirzi connection and conjugate momentum to the standard ADM-Hamiltonian variables, it arrives at a rather different conclusion: the resulting picture is closer to the formulation of the Ashtekar School than to a real "spin-network" construction <cit.>. When we express the SU(2) gauge connection in terms of the metric variables (three scale factors, three Euler angles and, eventually, three gauge angles), a local expansion of the involved functions outlines a linear dependence of the Gauss constraint on the three momenta variables associated to the gauge angles (i.e. those responsible for the local Lorentz rotation). This result suggests pursuing, ab initio, a Holst formulation <cit.>, by expressing the SU(2) connection in terms of the
metric variables. This calculus strategy provides the net and relevant issue of a linear relationship between the three Gauss constraint components and the three null momenta of the gauge angles: the Gauss constraint validity is ensured by the simultaneous vanishing behaviour of the three momenta and vice versa.
Particularly, we demonstrate that the Gauss constraint can be suitably restated into three Abelian constraints, simply expressing the gauge nature of the three angles which rotate the dreibein. The explicit expression of the matrix linking the two sets of constraints is provided here.
Finally, the most important consequence of the present study is that the physical kinematical states of the theory cannot depend on the three gauge angles (simply because in a canonical formulation they are annihilated by the three null momenta), so that the quantization of the model reduces to the analysis provided in Ref. <cit.> on the non-diagonal Bianchi models. In other words, the present study allows validation of the original idea that the space of the almost-periodic functions is the suitable approach to implement a canonical Loop Quantum Gravity in cosmology. Even if we start with all the nine non-zero triad components, three of them are actually gauge angles, leading to a Gauss constraint that is reducible to three Abelian vanishing momenta. The quantization coincides with that of a non-diagonal Bianchi Universe, which in Ref. <cit.> was associated with a diagonal representation of the fluxes, in agreement with the analysis in Refs. <cit.>.
§ ROTATIONS AS GAUGE TRANSFORMATIONS
We recall the classical description of Ashtekar variables in a homogeneous Universe. In a homogeneous model, the space-time is a manifold ℳ≅ℝ×Σ, where Σ is a three-dimensional Riemannian homogeneous space. We require that the isometry group S of Σ acts transitively and freely <cit.>.
On Σ exists a basis of left-invariant one-forms ω^I (i.e. F^*ω^I=ω^I, ∀ F∈ S) such that
dω^I+1/2f^I_JKω^J∧ω^K=0.
The dual vector fields ξ_I (defined by ω^I(ξ_J)=δ^I_J) are the generators of the Lie algebra 𝔰 of S
[ξ_I,ξ_J]=f^K_IJξ_K,
thus, f^K_IJ are the structure constants.
The induced Riemannian metric h on Σ is left-invariant due to the homogeneous hypothesis, hence, it can be written in terms of ω^I
h=η_IJω^I⊗ω^J,
where η_IJ is a symmetric matrix constant on Σ.
A homogeneous connection A on Σ is determined by a linear map ϕ:𝔰→𝔰𝔲(2) and it is written as A=ϕ∘θ_MC, where θ_MC=ξ_I⊗ω^I is the Maurer-Cartan form <cit.>.
Using a coordinate system (t,x^i) adapted to the space-time decomposition, the components of the left-invariant one-forms ω^I_i and the dual vector fields ξ_I^i depend only on x^i, while the other quantities that are constant on Σ are functions on t. Thus, the Ashtekar variables read <cit.>:
A^a_i(t,x)=ϕ^a_I(t)ω^I_i(x), E^i_a(t,x)=|det(ω^J_j(x))|p^I_a(t)ξ^i_I(x).
We can also characterize the space-time metric g via its component: g_00=-N^2+N^iN^jh_ij, g_0i=N^jh_ij, g_ij=h_ij, where N and N^i are the lapse function and the shift vector, respectively, and h is the induced Riemannian metric h_ij=η_IJ(t)ω^I_i(x)ω^J_j(x). In a homogeneous model, the lapse function is a function of time only N=N(t), while the shift vector can be factorized as N^i=N^I(t)ξ^i_I(x).
Now, we are interested in the gauge freedom of the Ashtekar variables. The gauge transformation for the densitized triads is known p^I_a↦ p^I_b O^b_a, with O∈ SO(3) <cit.>.
Due to the homogeneity hypothesis, p^I_a only depends on time and this property must hold also after the gauge transformation. Hence, although O can be arbitrary and does not contribute in any physical sense, it must depend on time only too.
Moreover, the gauge transformation can be seen as a rotation of the dreibein e^i_a↦ O^b_ae^i_b. This interpretation allows us to find the associated gauge transformation of the connection variables ϕ^a_I.
Consider the usual expression of the Ashtekar connection A^a_i=Γ^a_i+γ K^a_i where γ is the Barbero-Immirzi parameter. We can treat the two terms separately. The second term K^a_i=K_ije^aj contains the external curvature K_ij which is a geometrical quantity and is not affected by gauge transformations, while e^ja is a dreibein vector, so it rotates under a gauge transformation. It is easy to check that the rotation matrix is the inverse of the transformation matrix that acts on e^i_a because δ^i_j=e^i_ae^a_j must be invariant. Then, K_ije^ja ↦ K_ij(O^-1)^a_b e^jb.
Moreover, also the spin part transforms as Γ^a_i↦(O^-1)^a_bΓ^b_i. Thus, under a gauge transformation, a matrix rotation appears:
A^a_i=ϕ^a_Iω^I_i↦ (O^-1)^a_bA^b_i=(O^-1)^a_bϕ^b_Iω^I_i.
Therefore, on the phase space (ϕ^a_I,p^J_b) the gauge transformation acts as
p^I_a↦ p^I_b O^b_a , ϕ^a_I↦ (O^t)^a_bϕ^b_I.
We can check that such a transformation leaves the Gauss constraint weakly vanishing. In fact, the transformation of the Gauss constraint G_a=ϵ_ab^ cϕ^b_I p^I_c reads
G_a↦ϵ_ab^ c(O^t)^b_d O^e_cϕ^d_I p^I_e=ϵ_bd^ eO^b_aϕ^d_I p^I_e=O^b_a G_b≈ 0
Now, we look for a description in metric variables like the ones in Ref. <cit.>. The new phase space of the metric variables, composed of the three scale factors a,b,c and the three Euler angles of the physics rotation θ,ψ,φ, needs to include variables of the gauge freedom.
Since O∈ SO(3), it can be written in terms of Euler angles
O=exp(α j_3)exp(β j_2)exp(γ j_3),
where j_i are the real matrix generators of SO(3). Then, the three gauge variables are these three Euler angles (α,β,γ), they are seen as a chart on SO(3), α,γ∈(0,2π), β∈(0,π). Hence, the new configuration coordinates are {a,b,c,θ,ψ,φ,α,β,γ}.
In order to construct a theory in which the Gauss constraint does not vanish identically and in which the role of the cosmological quantities is made explicit, the assumption of phase space with configuration variables {a,b,c,θ,ψ,φ,α,β,γ} seems to be a reasonable solution. The idea is to impose a canonical transformation between the two phase spaces such that the conjugate momenta to the gauge variables are included in the expression of the Ashtekar variables. These momenta will play a role in the Gauss constraint and they can be removed from the theory to recover the expressions in Ref. <cit.>.
§ EXAMINATION OF THE BIANCHI I MODEL
For the non-diagonal Bianchi I model, we have a simple expression of Ashtekar variables in terms of metric variables which allows us to do some computations. We want to use the connection and fluxes expression in Eqs. (39) and (42) from Ref. <cit.> properly gauge rotated
ϕ^a_I=γ/2Na_bΛ^J_b η̇_JI(O^t)^a_b ,
p^I_a=a_ba_cΛ^I_dO^d_a with ϵ_abc=1 ,
where Λ is the physical rotation, and a_1,a_2,a_3 are the scale factors.
A direct computation shows that, despite the gauge freedom, the Gauss constraint identically vanishes (as well as in Ref. <cit.>). Thus, the gauge momenta play a fundamental role in a non-vanishing Gauss constraint description. Now, we want to analyze this aspect.
§.§ A Lagrangian approach
We want to investigate what happens to the Hamiltonian formulation starting from the Holst action <cit.>
𝒮_H=c^3/8π G γ∫ dt d^3x(E^i_aȦ^a_i+λ^aG_a-N^i𝒱_i-N/2γ𝒮),
and considering the connection and the dreibein with respect to the metric variables (<ref>,<ref>). Here, λ^a are Lagrange multipliers and
G_a=ϵ_ab^ cϕ^b_I p^I_c , 𝒟_I=G_bϕ^b_I,
𝒮=-1/γ^2|det(p^K_c)|(p^I_aϕ^a_I p^J_bϕ^b_J-p^I_aϕ^a_J p^J_bϕ^b_I),
are the Gauss, Diffeomorphism and scalar constraints, respectively.
We recall that ϕ^a_I is computed from the usual expression of the connection A^a_i=Γ^a_i+γ K^a_i, while p^I_a has the geometrical meaning as the homogeneous part of the dreibein vectors. We want the Holst action to be explicit in terms of metric variables, the calculation of the single terms provides that the Gauss constraint vanishes G_a=0, and so the Diffeomorphism constraint 𝒱_i=0, while the scalar constraint has the same expression as in Eq. (46) from Ref <cit.>.
As we expect, the gauge freedom does not appear in the Lagrangian that is invariant under gauge transformation. Hence, the momenta can be computed and the gauge momenta are null (i.e. p_α=0, p_β=0, p_γ=0), while the others are the same presented in Ref. <cit.>. We can now perform the Legendre transformation with Lagrangian multipliers λ_i to find the Hamiltonian
H=c^3/8π G(λ_1p_α+λ_2 p_β+λ_3 p_γ-N/2𝒮).
Thus, the theory of the non-diagonal Bianchi I model in metric variables is a constrained Hamiltonian theory with phase space
(a,b,c,θ,ψ,φ,α,β,γ,p_a,p_b,p_c,p_θ,p_ψ,p_φ,p_α,p_β,p_γ)
with four constraints
p_α≈0 , p_β≈0 , p_γ≈0 , 𝒮≈0
and with a Hamiltonian which is a linear combination of such constraints.
Notice that the same theory written in terms of connection and dreibein (ϕ^a_I,p^J_b) has four constraints given by G_a≈0 and 𝒮≈0.
The scalar constraint is the same in both formulations in the sense that it is possible to switch from one to the other using transformation (<ref>). This property does not hold for the Gauss constraint, which also in gauge variables vanishes. However, it is replaced by the three constraints on the pure gauge momenta.
We interpret this result as follows: the Gauss constraint after the canonical transformation becomes the gauge momenta constraint. In such a way, the dependence on the gauge momenta we introduce in the Ashtekar variables vanishes on the constraints' hypersurface, recovering the usual description.
§ EQUIVALENCE BETWEEN GAUSS CONSTRAINT AND PURE GAUGE MOMENTA
In this Section, we want to find an explicit expression for the Gauss constraint. Previously, we showed that there exists a relation between the Gauss constraint and the momenta constraint, which we interpreted as
G_a = 0 ⟺ p_g = 0
where g∈{α,β,γ}.
Such a relation is satisfied if the Gauss constraint is linearly dependent on pure gauge momenta p_g only. For simplicity, it will be our ansatz. Thus, we enunciate the following conjecture
The Gauss constraint depends on the gauge momenta via a 3×3 matrix L_ag:
G_a=L_agp_g,
where a is an SU(2) internal index and g∈{α,β,γ}.
Using this ansatz, we can explicitly compute the coefficients of the linear combination without using M, nor an explicit expression of ϕ^a_I. Let p^I_a be as in Eq. (<ref>) and the gauge momenta p_g be given by the transformation that satisfies the Lie condition (i.e. p_gdq_g=ϕ^a_I d p^I_a).
With this assumption, the Gauss constraint reads
G_a =ϵ_ab^ cϕ^b_I p^I_c=ϕ^b_I (p^ph)^I_dϵ_ab^ cO^d_c
=L_agp_g=L_agϕ^b_I∂ p^I_b/∂ q_g=ϕ^b_I (p^ph)^I_dL_ag∂ O^d_b/∂ q_g.
Where (p^ph)^I_d is the physical part of the dreibein (i.e. the one not gauge rotated).
From this, we can derive the following equation
ϵ_ab^ c=L_ag(O^t)^c_d∂ O^d_b/∂ q_g.
This equation is enough to fully characterize the matrix L_ag; in fact, the RHS has the same skew-symmetric property as the Levi-Civita symbol. Therefore, we obtain nine linearly independent equations. The system has nine equations in nine variables and the associated determinant is sin^3β, so it is non-degenerate. Hence, there exists one and only one solution. The solution L_ag can be found easily and it reads
L_ag=[ -βcosγ sinγ βcosγ; βsinγ cosγ -βsinγ; 0 0 1; ]
Finally, the Gauss constraint can be written explicitly in terms of gauge momenta
G_a=([ -βcosγ p_α+ sinγ p_β+ βcosγ p_γ; βsinγ p_α+ cosγ p_β -βsinγ p_γ; p_γ ])
The conjecture enables us to find the explicit dependence of the Gauss constraint on the gauge momenta. The matrix of coefficients is invertible since its determinant is det(L_ag)=-β, then the equivalence condition (<ref>) holds.
This explains the vanishing Gauss constraint in our initial approach, and in general, in the similar approaches of Loop Quantum Cosmology. In fact, the connection A^a_i=Γ^a_i+γ K^a_i is reduced and, when it is described in terms of metric variables, it results in a function defined on the constraints' hypersurface, as well as the dreibein, and so, the linear dependence (<ref>) implies that the Gauss constraint computed from such a connection must vanish.
§ GAUSS CONSTRAINT AS THE GENERATOR OF GAUGE TRANSFORMATIONS
It is well known that the Gauss constraint G_a is the generator of the gauge transformations on the phase space (ϕ^a_I,p^J_b) <cit.>. This feature should hold in the new variables. Therefore, we want to compute the canonical Poisson brackets with respect to {α,β,γ,p_α,p_β,p_γ} of the Gauss constraint in (<ref>). We obtain
{G_a,G_b}=-ϵ_abcG_c.
The sign is not relevant. We expect that this formulation comes out from a canonical transformation in which the connection and dreibein are switched in the phase space, and then a sign in the Poisson bracket appears.
Hence, the Gauss constraint respects the 𝔰𝔲(2)-Lie algebra and generates the SU(2) gauge transformations. Furthermore, it is linear in the gauge momenta, so the hypersurface defined by G_a=0 is also described by p_α=p_β=p_γ=0. Thus, the Gauss constraint is equivalent to three constraints on the momenta. Consequently, the generators of the gauge transformation can be decomposed into three generators which commute with each other
{p_α,p_β}={p_α,p_γ}={p_β,p_γ}=0.
This decomposition is particularly useful in the simplification of the implementation of the Gauss constraint in a quantum theory.
§.§ Quantum Gauss constraint
In Ref. <cit.> it is shown that the quantization of the non-diagonal Bianchi I model can be done in diagonal fluxes and angles variables. It is reasonable that a similar quantization can be provided for the other non-diagonal models, given a loop quantization of homogeneous Universes. However, to complete the description in the loop framework, we need to include the gauge transformations and a non-vanishing Gauss constraint.
Supposing that we have a quantization like in the non-diagonal Bianchi I case, it is enough to add the gauge variables to the phase space of the diagonal fluxes and angles. These gauge variables will be the Euler angles of the gauge rotation and they will be quantized independently (as the physical angles <cit.>) via the Schrödinger picture. Thus, the wave functions are Ψ(p_1,p_2,p_3,θ_1,θ_2,θ_3,α,β,γ), where p_1,p_2,p_3 are the diagonal fluxes and θ_1,θ_2,θ_3 are the physical angles.
Moreover, the Hamiltonian (such as the Lagrangian) is independent of the gauge variables, hence the wave function factorizes Ψ=ϕ(α,β,γ)Φ(p_1,p_2,p_3,θ_1,θ_2,θ_3).
On these functions, the action of the Gauss constraint is essentially a first-order derivative, so the imposition of the weak constraint Ĝ_aΨ=0 is equivalent to
-iħ∂Ψ/∂α=0, -iħ∂Ψ/∂β=0, -iħ∂Ψ/∂γ=0.
The solution of this set of equations is trivial: ϕ(α,β,γ)=const. Thus, the Gauss constraint in this Hilbert space imposes the independence of the wave function on the gauge angles. Therefore, the kinematical Hilbert space for the non-diagonal Bianchi I model presented in Ref. <cit.> remains the same also including the gauge transformations.
§ CONCLUDING REMARK
The analysis above deepens the idea proposed in Ref. <cit.>, that a non-vanishing Gauss constraint can be restored also in the minisuperspace of a Bianchi model, as soon as the most general form of the Ashtekar-Barbero-Immirzi connection is considered.
Actually, we interpret this general formulation in terms of the ADM metric variables. The introduction of gauge variables is responsible for restoring the SU(2) symmetry and ensuring that the corresponding connection has to verify a Gauss constraint. However, the main result we obtained is that the components of such a Gauss constraint are linearly dependent on the three momenta corresponding to the gauge angles. Thus, in terms of metric variables, the SU(2) symmetry reduces to the vanishing behaviour of these three momenta, i.e. it is, de facto reduced to a set of Abelian constraints.
We also clarified how the non-commutative character of the Gauss constraint components is restored via the transformation linking the two representations, associated with the SU(2) generators.
This issue has a deep impact on the Dirac quantization of the model, since the three momentum operators associated with the gauge angles must annihilate the state function, which is therefore independent of such angles. Hence, our quantization of the model is equivalent to a non-diagonal quantum Bianchi cosmology, as discussed in Ref. <cit.>, especially concerning the kinematical Hilbert space structure. Since in Ref. <cit.> the quantum picture is associated with a diagonal set of flux variables, plus the three Euler angles expected to be canonically quantized, the present analysis allows us to claim that the quantization of the Bianchi I model discussed in Ref. <cit.> (see also Refs. <cit.> for a critical revision) is actually a rather general formulation, the only one available in a minisuperspace dynamics. In other words, the scale factors associated in a Bianchi cosmology to independent space directions are the most relevant subjects of a Loop Quantum Cosmology quantization procedure and are characterized by an almost-periodic function representation.
The reason why the minisuperspace SU(2) symmetry can be reduced to an Abelian symmetry of the phase-space kinematics is most likely due to the fact that, for the spatial Ashtekar-Barbero-Immirzi connection, a local Lorentz rotation depending only on time retains a global character. Thus, a genuine SU(2)-formulation in the sense of Loop Quantum Gravity is still forbidden.
|
http://arxiv.org/abs/2306.04560v1
|
20230607160721
|
The lifted functional approach to mean field games with common noise
|
[
"Mark Cerenzia",
"Aaron Palmer"
] |
math.OC
|
[
"math.OC",
"math.AP",
"math.PR"
] |
The lifted functional approach to mean field games with common noise
Mark Cerenzia, Aaron Palmer
=====================================================================
We introduce a new path-by-path approach to mean field games with common noise that recovers duality at the pathwise level. We verify this perspective by explicitly solving some difficult examples with linear-quadratic data, including control in the volatility coefficient of the common noise as well as the constraint of partial information. As an application, we establish the celebrated separation principle in the latter context. In pursuing this program, we believe we have made a crucial contribution to clarifying the notion of regular solution in the path dependent PDE literature.
§ INTRODUCTION
This paper offers a new perspective on certain
classes of forward-backward systems
of stochastic partial differential equations
that arise naturally in mean field game theory and the theory of optimal control with partial information.
The systems arising from either of these fields share the following major difficulty:
although
the noise is exogenously given in the forward equation describing the state dynamics,
the noise is endogenously determined in the backward HJB equation characterizing optimality.
We propose a novel path-by-path interpretation that exhibits duality
between the equations of such systems at the pathwise level.
This paper
introduces and verifies this approach through significant examples, some of which
we have not yet found solved explicitly elsewhere in the literature.
Mean field games with common noise have attracted much attention
due to their practical and theoretical interest.
Indeed, it is a natural modeling assumption that all agents in a game are subject to common random shocks in addition to possible individual shocks.
On the other hand, the problem is notoriously
difficult because the corresponding mean field game consistency condition
now features a stochastic equilibrium measure flow that must coincide with the flow of conditional laws of an optimally controlled process given the common noise.
For the PDE approach, the breakthrough work <cit.> of Cardaliaguet-Delarue-Lasry-Lions
interprets the mean field game system with common noise (see the system (<ref>) below)
as the characteristics
for the so-called master equation, a certain PDE on Wasserstein space.
For the probabilistic approach,
Carmona-Delarue <cit.> interpret a similar class of such PDEs on Wasserstein space
as determining decoupling fields for forward-backward systems of stochastic differential equations that characterize mean field equilibria, whether for a probabilistic representation of the value function or of its gradient (the latter being the content of the Pontryagin maximum principle).
Either of these perspectives offers ways of achieving
wellposedness for the mean field game problem in the presence of common noise,
and further can yield explicit solutions for certain data; see
Sections 3.5 and 4.5 of Carmona-Delarue <cit.> for some linear-quadratic examples featuring a common noise.
By contrast,
the topic of control in the volatility coefficient of the
common noise has not been explored much in the mean field game theory literature.
The only paper we have found on the topic is
the recent work of Barasso-Touzi <cit.>;
otherwise, some general expressions and equations in Carmona-Delarue <cit.> account for the possibility
of controlled volatility coefficients, so
the abstract theory still applies insofar as one
can characterize equilibria based on dynamic programming (leading to a system of stochastic PDEs) or based on the Pontrygin maximum principle (leading to an FBSDE).
However, wellposedness results and explicit solutions do not seem to be available yet in the literature.
The topic of optimal control with partial information has a long history and an accordingly large literature.
We refer the reader to the book <cit.> of Bensoussan
and references therein.
Mean field games with common noise and with partial information seem to be largely unexplored,
even though the recent paper of Bensoussan-Yam <cit.> that motivated our calculations clearly takes
inspiration from these authors' own work on mean field games.
See also the earlier paper of Bandini-Cosso-Fuhrman-Pham <cit.> that approaches the partial information problem (without mean field interactions) using viscosity solutions on Wasserstein space.
The unpublished work of Huang-Wang <cit.> attempts to pursue this problem
via the Pontryagin maximum principle, and although we believe this probabilistic approach can work,
the authors' calculations here do not appear to satisfy the separation principle, a standard litmus test for such a solution. Roughly speaking, this principle says that
to go from the optimal feedback control in the case of full information to the case of partial information,
one just needs to replace the state with the best guess of the state given
the common noise and the partial observation.
A main result of this paper is that the lifted functional approach
can be used to establish this principle for mean field games with common noise and partial information; see
the end of the
final Section <ref>
for the theorem statement and discussion.
One apparent difficulty with the dynamic programming approach to mean field games with partial information is that one must account for both the common and observational noises, so each of these must be endogenously determined
in the stochastic backward HJB equation to ensure non-anticipativity of the value function and optimal feedback control.
Another, more subtle, difficulty that arises here is that the probability measure
with respect to which one formulates a typical player's control problem with a partial information constraint differs
from the probability measure with respect to which one derives and articulates the forward-backward
system of stochastic PDEs; see the system (<ref>) below for how one may handle this issue.
Finally, if one drops the mean field coupling and partial information constraint,
the resulting
backward stochastic HJB equations of the various systems (<ref>), (<ref>), and (<ref>) that we consider
are well-known to be related to so-called path dependent PDEs (see Section 11.3.5 of Zhang <cit.>).
We refer the reader to the early work
of Ekren-Keller-Touzi-Zhang <cit.> and Ekren-Touzi-Zhang <cit.>
for the first accepted notion of viscosity solution for path dependent
PDEs, but otherwise point to
the bibliographical notes of Chapter 11 of Zhang <cit.>.
On the one hand, the main concepts
in this paper were inspired by careful manipulations involving the functional Itô formula
for path dependent functionals (see Dupire
<cit.> and Cont-Fournié
<cit.>).
On the other hand, we do not know of references from the path dependent PDE literature that systematically
explore explicit solutions.
We believe this gap speaks to one of the main benefits of the lifted functional approach
as a complementary perspective on path dependent PDEs, namely,
that it more concretely and quickly emphasizes the connection to classical PDE theory.
To our best knowledge,
such a connection in the same spirit
was only otherwise attempted by Bion-Nadal <cit.>
(see the definition of “regular solution” in Section 2.2 therein), but this work omits the
crucial compensator term, defined in (<ref>) below.
This omission is unfortunately
a significant error;
indeed, consider a simple example, e.g., the path dependent heat equation
with terminal condition G(ω) = ∫_0^T ω_s ds at time T ≥ 0 (see (<ref>) of the appendix).
The correct lifted functional solution here
is well-known to be given by û(t,ω, y) = ∫_0^t ω_s ds + (T - t) y (see Example 11.1.2 of Zhang <cit.>), which is consistent with our compensated heat equation (<ref>) but does not satisfy equation (5) in <cit.>.
However,
our main desire is for the lifted functional perspective to help bring important insights from the well-developed
deterministic mean field game theory
to bear on strong solutions for
mean field games with common noise of various types.
§.§ Reader's Guide
To review our program in a nutshell,
we first aim to show how the lifted functional approach can recover known
results in mean field games with common noise (Section <ref>).
Emboldened by this consistency,
we next pursue more substantial and uncharted examples
of mean field games with controlled common noise and partial information
(Sections <ref> and <ref>, respectively).
As a sanity check after some admittedly grueling calculations
in Section <ref>,
we are rewarded by confirmation of the separation principle, extending its reach into new territory.
A more detailed outline of the paper is as follows.
Before we can articulate the lifted functional approach, we briefly review some notations in Section <ref> that are commonly used throughout the paper.
In Section <ref>,
after recalling the fundamental forward-backward system of stochastic PDEs (<ref>)
that characterizes a mean field game equilibrium in the presence of common noise,
we state the lifted functional approach for this prototype problem.
In Section <ref>,
we present in straightforward settings the problem formulations associated with the various stochastic PDE systems (<ref>), (<ref>), and (<ref>) studied in the paper; a reader experienced in the interpretations of such systems may wish to skip this section.
Section <ref> can be considered a warm-up in a simpler setting for the more involved calculations of later sections;
nevertheless, this example also confirms the consistency of the lifted functional approach with more classical approaches of the optimal control theory literature.
At last, Section <ref> employs the lifted functional
approach
to explicitly solve a linear quadratic mean field game with common noise;
a reader that is pressed for time may wish to focus on Sections <ref> and <ref>, once acquainted with the notation of the
compensator (<ref>) and compensated time derivative (<ref>) below.
Finally, turning to applications that constitute new results,
Sections <ref> and <ref> adapt the lifted functional approach
to solve mean field games with common noise featuring, respectively, control in the volatility coefficient and the constraint of a partially observed state.
§ NOTATION
Throughout the paper, we work on a filtered probability space
(Ω', ℱ, 𝔽 = (ℱ_t)_0 ≤ t ≤ T, ℙ) supporting independent standard
d-dimensional Brownian motions
𝐖 = (W_t)_0 ≤ t ≤ T and 𝐖^0 = (W_t^0)_0 ≤ t ≤ T.
We write 𝔽^𝐘 := (ℱ^𝐘_t)_0 ≤ t ≤ T
with ℱ^𝐘_t := σ(∪_0 ≤ s ≤ t σ(Y_s) )
for the filtration generated by a given stochastic process 𝐘 = (Y_t)_0 ≤ t ≤ T.
Finally, we write
Ω := C_0([0,T]; ^d) = {ω∈ C([0,T]; ^d) : ω_0 = 0 }
for the path space,
whose elements serve as fixed realizations of the common noise 𝐖^0 = (W^0_t)_0 ≤ t ≤ T.
For the linear-quadratic data, we deliberately adopt similar notation to Section 3.5 of Carmona-Delarue <cit.> for the sake of ease of comparison later.
More specifically, we introduce constant
d × d volatility matrix coefficients
σ, σ^0,
deterministic continuous ^d × d-valued functions
(b_t,b̅_t,s_t)_,
deterministic symmetric nonnegative semi-definite d× d
matrix valued continuous functions (q_t, q̅_t)_, and
deterministic symmetric nonnegative semi-definite d× d
parameters q, q̅, s.
In the case of controlling the volatility coefficient of
the common noise,
we will also need a deterministic continuous ^d-valued function (a̅_t)_.
We say
that a functional ψ̂(t,ω)
on [0,T] ×Ω is strictly non-anticipative
if for all t ∈ [0,T] and for all paths ω, η∈Ω,
ψ̂(t,ω) = ψ̂(t,η)
whenever ω_s = η_s for all 0 ≤ s < t.
With a slight abuse of notation,
we sometimes indicate this by writing
ψ̂(t,ω) = ψ̂(t,(ω_s)_0 ≤ s < t).
The functional ψ̂(t,ω) is merely non-anticipative
if ψ̂(t,ω) = ψ̂(t,η)
whenever ω_s = η_s for all 0 ≤ s ≤ t.
Suppose (u_t(x))_ is an ^^0-adapted random field on [0,T] ×^d,
and suppose further that it can be written
as a functional of the form
u_t(x) = û(t,x,^0,W^0_t) := û(t,x,(W^0_s)_0 ≤ s < t,W^0_t ),
where as indicated û(t,x,ω,y) is a strictly non-anticipative function on ^d ×Ω×^d.
This way of writing such functionals
goes back to works of Dupire <cit.> and Peng <cit.> on the functional Itô formula and path dependent PDE theory, respectively,
though we follow the more recent work <cit.> of Cosso-Russo in referring to û(t,x,ω,y) as a lifted functional.
Note also how we indicate the dependence on the path variable ω∈Ω to be strictly
non-anticipative
by adorning the functional with a “hat” or “tilde”, such as “û(t,x,ω,y)” or “r(t,x,ω)” appearing in (<ref>) below.
Note then that the variable y will typically represent the present value of the common noise.
With this discussion, we can now introduce
the compensator and compensated time derivative that play a fundamental role in this paper.
For a given strictly non-anticipative functional ψ̂(t,ω) = ψ̂(t,(ω_s)_0 ≤ s < t) on [0,T] ×Ω,
the compensator of ψ̂(t,ω) is defined by
_ω^y ψ̂(t,ω) := lim_ϵ↓ 0ϵ^-1 [ψ̂(t+ϵ, ω_· + [y-ω_·] 1_[t,t+ϵ)(·)) - ψ̂(t+ϵ,ω) ], y ∈^d.
As we will see below, the name derives from the interpretation that it is exactly the term to “compensate” the naive classical backward HJB equation to enforce strict non-anticipativity.
We remark that although ψ̂(t,ω) is strictly non-anticipative,
the compensated derivative _ω^y ψ̂(t,ω) will in general extract
the present value ω_t; indeed, one expects _ω^ω_tψ̂(t,ω) = 0, i.e.,
_ω^y ψ̂(t,ω)=0
when y = ω_t.
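For a concrete illustration, take ψ̂(t,ω) := ∫_0^t ω_s ds. Patching the path with the value y on [t,t+ϵ) gives
ψ̂(t+ϵ, ω_· + [y-ω_·] 1_[t,t+ϵ)(·)) = ∫_0^t ω_s ds + ϵ y,
while ψ̂(t+ϵ,ω) = ∫_0^t ω_s ds + ϵ ω_t + o(ϵ), so that
_ω^y ψ̂(t,ω) = y - ω_t,
which indeed vanishes at y = ω_t; the same toy functional reappears in the path-dependent cost problem below.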
The astute reader will notice that the functional
ψ̂(t,ω) is defined
on the space of continuous paths, yet the key definition
(<ref>) requires evaluating on a path
with a jump.
This common occurrence in the path dependent PDE
literature can be handled in a few different ways.
For example, earlier literature here suggests showing the limit
(<ref>) is independent of
the chosen extension of ψ̂(t,ω)
to Skorokhod space.
We instead refer the reader
to the appendix, which adapts and extends the more recent seminorm topology approach of Section 2.2 from Cosso-Russo <cit.>,
which constitutes a convenient way to restrict
to a unique extension of the functional
when evaluated at a path with a single jump.
This latter perspective is also
convenient because some natural
expressions for the limit (<ref>)
involve evaluating the functional
at a path with a “double jump” at a point (see the Fréchet derivative expression (<ref>) in the appendix),
which would even be outside the scope of Skorokhod space.
However, given the concrete
spirit of this paper, we do not pursue
this technical point further here.
For the sake of simplifying calculations,
we will often find it convenient to combine the normal time derivative
and the new compensator into a single operator ∂_t^y := ∂_t + _ω^y, which we refer to as the compensated time derivative:
∂_t^y ψ̂(t,ω) := lim_ϵ↓ 0ϵ^-1 [ψ̂(t+ϵ, ω_· + [y-ω_·] 1_[t,t+ϵ)(·)) - ψ̂(t,ω) ], y ∈^d.
In particular, we will consider integral representations of solutions to stochastic differential equations, which are straightforward to differentiate using (<ref>).
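As a purely numerical companion to the illustration above (our own sketch, in d = 1), one can also evaluate the definition of ∂_t^y directly on a discretized path: for ψ̂(t,ω) = ∫_0^t ω_s ds the limit returns y, and the plain time derivative ω_t once the compensator is switched off at y = ω_t.

```python
import numpy as np

# Minimal numerical sketch (our own illustration): evaluate the compensated time
# derivative straight from its definition on a discretized path, for the toy
# functional psi_hat(t, omega) = int_0^t omega_s ds.

def psi_hat(t, omega, grid):
    """Left Riemann sum approximation of int_0^t omega_s ds on a uniform grid."""
    dt = grid[1] - grid[0]
    return float(np.sum(omega[grid < t]) * dt)

def compensated_dt(t, y, omega, grid, eps=1e-3):
    """Patch the path with the constant value y on [t, t+eps), then difference in time."""
    patched = omega.copy()
    patched[(grid >= t) & (grid < t + eps)] = y
    return (psi_hat(t + eps, patched, grid) - psi_hat(t, omega, grid)) / eps

rng = np.random.default_rng(0)
T, n = 1.0, 100_000
grid = np.linspace(0.0, T, n, endpoint=False)
omega = np.concatenate(([0.0], np.cumsum(rng.normal(scale=np.sqrt(T / n), size=n - 1))))
t, y = 0.5, 2.0
omega_t = omega[grid < t][-1]
print(compensated_dt(t, y, omega, grid), y)              # both roughly 2.0 = y
print(compensated_dt(t, omega_t, omega, grid), omega_t)  # both roughly omega_t
```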
§ THE LIFTED FUNCTIONAL APPROACH
A typical mean field game system with common noise
can be stated as follows: given any probability measure λ∈_2(^d)
with density ℓ(x), find an ^^0-adapted triple
(u_t(x),v_t(x),m_t(x))_ of random fields on [0,T] ×^d satisfying the forward backward system of
stochastic PDEs
d_t u_t(x) = ( - 1/2 [ a ∂_xx^2 u_t(x) ] -[ σ^0 ∂_x v_t(x) ]
+ H(t,x,∂_x u_t(x)) - f(t,x,μ_t) ) dt + v_t(x) · dW^0_t,
d_t m_t(x) = ( 1/2 [ a ∂_xx^2 m_t(x) ] + div_x [ m_t(x) ∂_p H(t,x,∂_x u_t(x) ) ] )dt - div_x [ m_t(x) σ^0 dW^0_t ],
u_T(x) = g(x,μ_T), m_0(x) = ℓ(x), μ_t(dx) := m_t(x)dx,
where we write a := σσ^⊤ +σ^0 (σ^0)^⊤
and “d_t" emphasizes that the total Itô differential is taken in time.
Intuitively, the forward conservation law in (<ref>)
describes how the mass density m_t(x) of some agents, such as a flock of birds,
evolves in time when subject to a random environment W^0_t,
while the backward HJB equation
determines the value function “u_t(x)” of a typical agent responding optimally
to the random evolution of the mass.
The somewhat mysterious random field v_t(x)
is part of the unknowns and
plays the role of ensuring that u_t(x) is ^^0-adapted; e.g., for a flock of birds buffeted by wind,
a typical bird at time t has only observed
the behavior of the wind (W_s^0)_0 ≤ s ≤ t,
but is not allowed to anticipate the future
behavior of the wind (W_s^0)_t < s ≤ T when optimizing.
A solution to the system (<ref>) can naturally be cast as a fixed
point and admits
the interpretation as characterizing a continuum version of Nash optimality.
Motivated by the literature on the functional Itô formula (see Dupire <cit.> and Cont-Fournié <cit.>) and
path dependent PDE theory (see Chapter 11 of Zhang <cit.> and references therein), we have discovered that
if û(t,x,ω,y), m̂(t,x,ω,y) are lifted functionals
of the ^^0-adapted random fields (u_t(x), m_t(x))_0≤ t ≤ T
from (<ref>), then
for each fixed path ω,
the functions (t,x,y) ↦û(t,x,ω,y), m̂(t,x,ω,y)
are determined by a rough forward conservation law
coupled with a classical HJB equation that is “compensated”
by the operator (<ref>) applied to û(t,x,ω,y).
More precisely,
the lifted functional approach
to mean field games with common noise asserts that
the solution
of (<ref>) can be reduced
to a pair of (strictly non-anticipative) lifted functionals
(û(t,x,ω,y), m̂(t,x,ω,y))
satisfying the system
-∂_t û(t,x,ω,y) - 1/2[ A D^2_(x,y)û(t,x,ω,y)] + H(t,x, ∂_x û(t,x,ω,y)) - f(t,x,m̂(t,·, ω,y)) = _ω^y û(t,x,ω,y)
∂_t m̂(t,x,ω, y) - 1/2 [ A D^2_(x,y)m̂(t,x,ω,y) ] - div_x [ m̂(t,x,ω,y) ∂_p H(t,x,∂_x û(t,x,ω, y)) ] = - _ω^y m̂(t,x,ω,y)
û(T,x,ω,y) = g(x,m̂(T,·,ω,y)), m̂(0,x,ω,y) = ℓ(x-σ^0 y),
where we write D^2_(x,y) for the Hessian matrix in both variables (x,y)
and where
A :=
[ σσ^⊤ +σ^0 (σ^0)^⊤ σ^0; σ^0 I_d ].
In the backward equation of (<ref>), the compensator term “_ω^y û(t,x,ω,y)” serves to enforce the strict non-anticipativity condition in
the path variable.
However, the rather unexpected appearance of “_ω^y” in the forward equation exactly serves to exhibit
duality
between the equations.
To our knowledge,
this duality at the pathwise level
appears to be new and is nonobvious to illustrate otherwise.
To be more precise,
exploiting the duality of the
original system (<ref>)
requires taking an expectation, i.e.,
averaging over the path.
More classically, as indicated above, for each fixed ω∈Ω, the functional m^ω(t,x):= m̂(t,x,ω, ω_t)
is known to satisfy, in a path-by-path sense, the rough conservation law
d_t m^ω(t,x)
= ( 1/2[σσ^⊤∂_xx^2 m^ω(t,x)] + div_x [ m^ω(t,x) ∂_p H(t,x,∂_x û(t,x,ω, ω_t)) ] ) dt
- div_x [m^ω(t,x) σ^0 d ω_t ]
m^ω(0,x) = ℓ(x),
which in turn can be solved by the flow transformation method of
Lions-Souganidis <cit.>. More precisely,
one looks for a solution of the form
m^ω(t,x)=m̂(t,x,ω, ω_t):= r(t,x-σ^0 ω_t,ω),
where r(t,x,ω) solves a classical (though ω-dependent) PDE without a “dω_t” term:
∂_t r(t,x,ω)
= 1/2 [ σσ^⊤∂_xx^2 r(t,x,ω) ] + div_x [r(t,x,ω) ∂_p H(t,x+σ^0ω_t,∂_x û(t,x+σ^0ω_t,ω, ω_t)) ]
r(0,x,ω) = ℓ(x).
As indicated by the notation, r(t,x,ω)
is readily seen to depend only on the strict prior history (ω_s)_0 ≤ s < t
of the fixed path ω, allowing
us to identify m̂(t,x,ω, y) = r(t,x-σ^0y,ω).
However, this classical
perspective does not showcase
the duality with
the backward equation, as the new system (<ref>) exhibits.
We next claim that once a fixed point solution pair (û^*(t,x,ω,y), m̂^*(t,x,ω,y))
is found for the solution loop of
(<ref>),
the triple of random fields is defined as
(u^*_t(x),v^*_t(x),m^*_t(x)):=
(û^*(t,x,^0,W^0_t), ∂_y û^*(t,x,^0,W^0_t), m̂^*(t,x,^0,W^0_t)),
and is easily seen to be a strong solution of the original mean field game system with common noise (<ref>).
Indeed, as long as the lifted functional û(t,x,ω,y) is “nice enough,” the principles behind the so-called functional Itô formula (see Dupire <cit.> and Cont-Fournié <cit.>) suggest we can compute the total differential in time as[Roughly speaking,
the functional Itô formula is just the ordinary Itô formula in the variables (t,y) of a lifted functional û(t,x,^0, W^0_t), i.e., the dependence
in the strict history variable ω can be held infinitesimally fixed in time.]
d_t u^*_t(x) Functional Itô= ( ∂_t û^* + 1/2Δ_y û^* )(t,x,^0, W^0_t) dt + ∂_y û^*(t,x,^0, W^0_t) · dW^0_t
= ( - 1/2[ a ∂_xx^2 u^*_t(x)] - [σ^0 ∂_x v^*_t(x) ] + H(t,x,∂_x u^*_t(x)) - f(t,x,m^*_t) ) dt + v^*_t(x) · dW_t^0,
where we used in the second equality the fact that _ω^ω_tû(t,x,ω,ω_t) = 0;
similarly, since m^*_t(x) has the form m^*_t(x) = r(t,x-σ^0 W^0_t,^0)
where r(t,x,ω) solves
(<ref>), we have
d_t m^*_t(x) = ( ∂_t r + 1/2 [ σ^0 (σ^0)^⊺∂_xx^2 r ] )(t,x-σ^0W_t^0,^0) dt - ∂_x r(t,x-σ^0W_t^0,^0) ·σ^0 dW^0_t
= ( 1/2[a ∂_xx^2 m^*_t(x)] + div_x [ m^*_t(x) ∂_p H(t,x,∂_x u^*_t(x)) ] ) dt - div_x [m^*_t(x) σ^0 dW^0_t ],
where in the last equality we implicitly performed an Itô-Stratonovich conversion.
Thus the triple of random fields
(<ref>) can serve
as a strong solution of (<ref>).
For readers familiar with the notion of the master equation from Cardaliaguet-Delarue-Lasry-Lions <cit.>,
the main compensator term
in the backward equation of (<ref>)
takes on a particularly nice form.
To see this, suppose u_t(x) = (t,x,m_t) for some nice : [0,T] ×^d ×_2(^d) → and that we know
m_t(x) = m̂(t,·,^0, W^0_t), where we recall the form m̂(t,x,ω,y) = r(t,x-σ^0 y,ω).
Recall
the basic relationship
∂_μ(t,x,μ)(v) =∂_v (δ_μ)(t,x,μ)(v),
where we write “δ_μ” for the linear functional derivative
and “∂_μ” for the Wasserstein gradient.
Then, recalling v_t(x) = ∂_yû(t,x,^0,W^0_t) in our setting,
we can compute
v_t(x) = ∂_y û(t,x,^0, W^0_t) = ∫_^d (δ_μ)(t,x,m_t)(v)
· (∂_y m̂)(t,v,^0, W^0_t) dv
= -∫_^d (δ_μ)(t,x,m_t)(v)
· (σ^0)^⊤(∂_xr)(t,v-σ^0 W^0_t,^0) dv
= ∫_^dσ^0 (∂_μ)(t,x,m_t)(v) m_t(v) dv,
where the last equality is integration by parts.
Observe this is exactly the formula from Corollary 2.12 of Cardaliaguet-Delarue-Lasry-Lions <cit.> for the process v_t(x).[The factor σ^0 is due to scaling differently than the corresponding system (31) of Cardaliaguet-Delarue-Lasry-Lions <cit.>.]
The punchline of the above is we have the formula
∂_yû(t,x,ω,y) = ∫_^dσ^0 (∂_μ)(t,x,m̂(t,·,ω,y))(v) m̂(t,v,ω,y) dv.
Next recall we can identify the compensator of m̂(t,x,ω,y) as
_ω^y m̂(t,x,ω,y) = -div_x [ m̂(t,x,ω,y) ( F(t,x,ω,y) - F(t,x,ω,ω_t) ) ]
where
F(t,x,ω,y) := ∂_x û(t,x+σ^0(ω_t - y),ω,y).
In turn, these items imply the compensator in the backward equation of (<ref>) can
be expressed as
_ω^y û(t,x,ω,y) = ∫_^d (∂_μ)(t,x,m̂(t,·,ω,y))(v) m̂(t,v,ω,y) ( F(t,v,ω,y) - F(t,v,ω,ω_t) ) dv.
The main issue with this formula is that one in general may not have access to ∂_μ(t,x,μ)(v).
Adopting a combination of the perspectives of rough path theory and path dependent PDEs, one could introduce an alternative
notion of “pathwise solution” that consists of a pair of merely non-anticipative
functionals (u(t,x,ω), m(t,x,ω)) on [0,T] ×^d ×Ω
such that, for almost every (with respect to Wiener measure) α-Hölder geometric rough path
= (ω, ) (i.e., t ↦ω_t is a fixed realization of ^0 and (s,t) ↦_s,t
is a fixed realization of the iterated Stratonovich integral ∫_s^t W^0_s,r⊗∘ d W^0_r),
the pair of functions (t,x) ↦ u(t,x,ω), m(t,x,ω) satisfies
the rough MFG system[See, e.g., Cosso-Russo <cit.> for the definition of the vertical ∂_ω = ∂_ω^V, which is simply the spatial path dependent derivative found in most any reference from the path dependent PDE literature.]
{ d_t u(t,x,ω) = ( - 1/2[ a ∂_xx^2 u(t,x,ω)] - [σ^0 ∂_x ∂_ω u(t,x,ω) ] + H(t,x , ∂_x u(t,x,ω) ) - f(t, x,m(t,·,ω)) ) dt
- 1/2[∂_ωω^2 u(t,x,ω)] dt + ∂_ω u(t,x,ω) · d_t,
d_t m(t,x,ω) = ( 1/2[σσ^⊺∂_xx^2 m(t,x,ω)] + div_x [ m(t,x,ω) ∂_p H(t,x , ∂_x u(t,x,ω) ) ] ) dt - div_x [ m(t,x,ω) σ^0 d_t ],
u(T,x,ω) = g(x, m(T,·,ω)), m(0,x,ω) = ℓ(x),
.
where, as indicated the bold differential, “d_t” can be understood in the rough path theory sense
(see, e.g., Friz-Victoir <cit.>).
In particular,
the stochastic term “v_t(x) · dW^0_t”
from the backward equation in (<ref>)
corresponds to the two terms “- 1/2[∂_ωω^2 u(t,x,ω)] dt + ∂_ω u(t,x,ω) · d_t”
in (<ref>).
Fortunately, our compensated solutions
of (<ref>) will furnish such an intermediate notion of pathwise solution to
(<ref>)
by calculations parallel to
(<ref>), (<ref>) above, but
based instead on a pathwise (lifted) functional Itô formula of the form
d_t φ(t,ω) = (∂_t φ̂)(t,ω,ω_t) dt + (∂_y φ̂)(t,ω,ω_t) · d_t,
given a suitable lifted functional φ̂(t,ω,y) of φ(t,ω).
This formula follows as a consequence of Keller-Zhang <cit.>, recited as (2.5)
and (2.11)
of Buckdahn-Keller-Ma-Zhang <cit.>.
However, getting to the point of this remark, we otherwise omit this intermediate path-by-path notion since (besides being less straightforward for calculations in our opinion)
it does not exhibit that there is an underlying duality between the two equations at a pathwise level,
as our compensated system (<ref>) does.
Indeed,
eliminating the “d_t”
term would seem to require averaging the paths over Wiener measure, thus leaving the pathwise formulation.
§ PROBLEM FORMULATIONS
Now that we have reviewed the lifted functional approach in the setting of a typical mean field game with common noise,
we step back to review the various settings where we will apply the lifted functional method.
For the sake of clarity, we state these formulations somewhat informally and with straightforward data (in particular, these problems will be solved with more general data below).
More precisely, we illustrate the lifted functional approach for four problems, each of which admits an exact solution when the data fits into the framework of linear-quadratic-Gaussian control theory:
* a stochastic control problem with a path-dependent terminal cost
* a mean field game with common noise
* a mean field game with controlled common noise
* a mean field game with common noise and partial information
Problem 1:
As a warm-up, we start by considering a
stochastic control problem with a path-dependent terminal cost as follows:
given an initial condition x_0 ∈^d,
minimize 𝔼[∫_0^T 1/2|α_t|^2 dt + X_T·∫_0^T W_s^0 ds]
over ^,^0-adapted processes (α_t)_, subject to the dynamical constraint
dX_t = α_t dt + dW_t, X_0 = x_0.
Intuitively, the controller will drive the process away from the anticipated random cost ∫_0^t W_s^0 ds. Indeed, we find the optimal control to be given as a linear feedback of W_t^0 and ∫_0^t W_s^0 ds.
The explicit solution to this problem is covered in Section <ref>.
Problem 2:
We consider a linear-quadratic mean field game in the spirit of Section 3.5 of Carmona-Delarue <cit.>:
given an initial law λ∈_2(^d) and a ^^0-adapted flow of probability measures = (μ_t)_,
we first solve, writing μ̅_t = ∫_ x μ_t(dx) for the mean position of players,
minimize 𝔼[∫_0^T 1/2|α_t|^2 dt + 1/2(X_T- s μ̅_T)^2 ]
over ^,^0-adapted processes (α_t)_, subject to the dynamical constraint
dX_t = (b_t X_t+ b̅_t μ̅_t + α_t) dt + σ dW_t + σ_0 dW_t^0, X_0 ∼λ.
We denote by (X_t^α^*)_ the solution of the dynamical constraint with optimal control (α^*_t)_ and second solve the fixed point problem μ_t = (X_t^α^*|^^0_t), t ∈ [0,T], i.e., μ_t will be the conditional law of an optimally controlled
process X_t^α^* given the common noise ^0 = (W^0_t)_.
In this problem, the mean position of players μ̅_t is translated by a Brownian common noise. The solution we find is a linear function of the player's position and the mean position of players.
The explicit solution of this problem for a class of linear-quadratic data is covered in Section <ref>.
Note this problem implicitly involves a term of the form “X_T μ̅_T,”
and in turn we will see μ̅_T will involve “∫_0^T W^0_s ds”. Thus,
this problem features the basic structure of the path dependent cost problem
of Section <ref>, which motivated its inclusion in this paper.
Problem 3:
We consider a similar setting as the previous problem but with a controlled volatility coefficient of the common noise:
first, given an initial law λ∈_2(^d), a parameter (a̅_t)_, and a flow of probability measures = (μ_t)_,
we first solve
minimize 𝔼[∫_0^T 1/2|α_t - a̅_t|^2 dt + 1/2(X_T- s μ̅_T)^2 ]
over ^,^0-adapted processes (α_t)_, subject to the dynamical constraint
dX_t = (b_t X_t + b̅_t μ̅_t) dt + σ dW_t + α_t dW_t^0, X_0 ∼λ.
Second, we solve the fixed point problem μ_t = (X_t^α^*|^^0_t), t ∈ [0,T].
The solution we find is a deterministic time dependent multiple of the parameter a̅_t, similar to examples in the literature (see, e.g., Proposition 5.1 of Ankirchner-Fromm <cit.>).
However, the factor we get reflects parameters not only from the diffusion coefficient,
but also from
the so-called Itô-Wentzell correction term, which involves the control against the unknown process “v_t(x)” that enforces the ^^0-adaptivity constraint in the stochastic backward HJB in (<ref>).
The explicit solution of this problem for a class of linear-quadratic data is covered in Section <ref>.
Problem 4:
Our final problem considers a mean field game with common noise and partial
information: first, given an initial law λ∈_2(^d) and a ^^0-adapted flow of probability measures = (μ_t)_,
we solve
minimize 𝔼[∫_0^T f(t,X_t,μ_t, α_t) dt + g(X_T, μ_T)],
subject to a dynamical constraint
dX_t = b(t,X_t,μ_t, α_t) dt + σ dW_t + σ^0 dW^0_t, X_0 = x_0;
however, there is an additional constraint that one must optimize over controls = (α_t)_ that are progressively measurable with respect to ^^0,, where = (Z_t)_ is the so-called observation process
dZ_t = h(t,X_t, μ_t) dt + dθ̃_t
with = (θ̃_t)_ a Brownian motion with positive definite covariance Θ̃ and independent of = (W_t)_.
Second, one solves the fixed point problem μ_t = (X_t^α^*|^^0_t), t ∈ [0,T].
Finally, we recall the mean field problem with common noise and partial information above can be interpreted as the limit of an N-player dynamical game: given a strategy profile ^N = (^N,i)_i=1^N,
the ith player, 1 ≤ i ≤ N, in the search for Nash optimality, solves the optimal control problem
minimize 𝔼[∫_0^T f(t,X_t^N,i,μ_^N_t,β_t) dt + g(X_T^N,i,μ_^N_T)].
over ^^0,-adapted controls = (β_t)_,
subject to the dynamical constraint
dX_t^N,k =
b(t,X_t^N,i,μ_^N_t ,β_t) dt + σ dW^i_t + σ^0 dW_t^0, k=i,
b(t,X_t^N,k,μ_^N_t,α^N,k_t) dt + σ dW^k_t + σ^0 dW_t^0, k≠ i,
and subject to the observation process
dZ_t^i = h(t,X_t^i, μ_^N_t) dt + dθ̃_t^i,
where μ_^N_t := 1/N∑_j=1^N δ_X_t^N,j is the empirical measure of players.
We emphasize that players have knowledge of the common noise and their individual observation process.
Also, one can reason from this N-player setting that we
expect the N →∞ limit of
the empirical measures μ_^N_t := 1/N∑_j=1^N δ_X_t^N,j should
converge to the conditional law of the state given the common noise ^0 = (W^0_t)_ with respect to ,
thus justifying the formulation made above.
As just reviewed, the partially observed control problem is made difficult by the necessity to consider non-Markovian controls that incorporate the entirety of the history of the observation process.
As such, the problem does not satisfy an ordinary dynamic programming principle. With the compensated HJB equation,
a dynamic programming principle is recovered in some sense.
Despite the mean field coupling,
we illustrate how the solution for a linear-quadratic-Gaussian problem is still solved by the Kalman filter and the separation principle, as classically expected.
See Section <ref>, especially equation (<ref>) and nearby discussion, for more on these concepts and the explicit solution of this problem for a class of linear-quadratic data.
§ A PATH-DEPENDENT COST PROBLEM
As a warm-up, we first consider a simple scenario
where there is no coupling between the
forward and backward equations of (<ref>),
which thus reduces to a classical optimal control problem.
The interest in this example
is that we can observe, in a simple setting, how our method
is consistent with the classical optimal control theory literature.
Accordingly, we first consider the solution to the path dependent cost Problem 1
reviewed in the previous Section <ref>.
In the compensated HJB approach, we will solve for the lifted functional determining
the random value function.
The lifted value function is expected to satisfy a dynamic programming principle, i.e.,
û(t,x,ω,y) = inf_(α_s)_t≤ s≤ T𝔼[∫_t^T 1/2|α_s|^2ds + X_T^α·(∫_0^t ω_s ds + ∫_t^T[y+W^0_s - W_t^0] ds)],
where
dX_s^α = α_s ds+dW_s, X^α_t = x, t ≤ s ≤ T.
Recalling the compensated time derivative ∂_t^y
of (<ref>),
the compensated HJB equation will have the form
- ∂_t^y û(t,x,ω,y) -1/2Δ_x û(t,x,ω,y) -1/2Δ_y û(t,x,ω,y) + 1/2|∇_x û(t,x,ω,y)|^2= 0
û(T,x,ω, y)= x·∫_0^T ω_s ds.
Now, we make the ansatz
û(t,x,ω,y) = a_t x^2 + b_t y^2 + 2 c_t x y + d_t + e_t (∫_0^t ω_s ds)^2 + 2 f_t x ∫_0^tω_s ds + 2 g_t y ∫_0^tω_s ds.
Note the terminal condition û(T,x,ω, y)= x·∫_0^T ω_s ds is satisfied with the parameter terminal conditions
a_T = b_T = c_T = d_T = e_T = g_T = 0, f_T = 1/2.
We then compute
∂_t^y ( ∫_0^t ω_s ds ) = y,
so that, plugging the ansatz into the compensated HJB equation, we get
0= -a_t' x^2 - b_t' y^2 - 2 c_t' x y - d_t' - e_t' (∫_0^t ω_s ds)^2 - 2 f_t' x ∫_0^tω_s ds - 2 g_t' y ∫_0^tω_s ds
- 2 e_t y ∫_0^tω_s ds -2 f_t x y - 2 g_t y^2 - a_t - b_t
+2 a_t^2 x^2 + 2 c_t^2 y^2 + 2 f_t^2(∫_0^tω_s ds)^2 +4 a_t c_t x y + 4 a_t f_t x ∫_0^tω_s ds + 4 c_t f_t y ∫_0^tω_s ds.
By collecting terms corresponding to x^2, y^2, xy,(∫_0^tω_s ds)^2, x ∫_0^tω_s ds, y ∫_0^tω_s ds, we arrive at the following system of ordinary differential equations:
* |x|^2 : a'_t = 2 a_t^2,
* |y|^2 : b'_t = -2 g_t + 2 c_t^2,
* |x y| : c'_t = -f_t + 2 a_t c_t,
* 1 : d_t' = -a_t - b_t,
* (∫_0^tω_s ds)^2 : e_t' = 2 f_t^2,
* x∫_0^tω_s ds : f_t' = 2 a_t f_t,
* y∫_0^tω_s ds : g_t' = -e_t + 2 c_t f_t.
We first solve a_t=0, thus f_t=1/2 is constant. Now we can see that c_t' = -1/2 so c_t=1/2(T-t), and e_t' = 1/2 so e_t = -1/2(T-t).
We can solve for g_t'=(T-t) as g_t = -1/2(T-t)^2.
Now b_t' = 3/2(T-t)^2 and b_t = -1/2(T-t)^3. We finally have that
d_t' = 1/2(T-t)^3, so d_t=-1/8(T-t)^4. Altogether, the lifted value function reads
û(t,x,ω,y) = -1/2 (T-t)^3 y^2 + (T-t) x y - 1/8(T-t)^4
-1/2 (T-t)(∫_0^tω_s ds)^2 + x ∫_0^tω_s ds - (T-t)^2 y ∫_0^tω_s ds,
so the optimal ^^0-adapted feedback control = (α^*_t)_ is given by
α^*_t = -∇_x û(t,x,𝐖^0, W_t^0) =-(T-t)W_t^0 - ∫_0^tW_s^0 ds.
Then the optimal expected value at time zero is u(0,x_0,ω,0) = d_0 =-1/8T^4,
which is notably independent of the initial position X_0=x_0.
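As a quick symbolic sanity check (our own addition, written in dimension d = 1), one can verify with sympy that the lifted functional above solves the compensated HJB equation: writing ξ for ∫_0^t ω_s ds, the compensated time derivative acts on the ansatz as ∂_t + y ∂_ξ, which is precisely the computation ∂_t^y(∫_0^t ω_s ds) = y used above and the correspondence made precise in the next subsection.

```python
import sympy as sp

# Symbolic check (d = 1) that the explicit lifted functional solves the compensated
# HJB equation, with xi standing in for int_0^t omega_s ds.
t, x, y, xi, T = sp.symbols('t x y xi T')

u_hat = (-sp.Rational(1, 2)*(T - t)**3*y**2 + (T - t)*x*y
         - sp.Rational(1, 8)*(T - t)**4 - sp.Rational(1, 2)*(T - t)*xi**2
         + x*xi - (T - t)**2*y*xi)

lhs = (-(sp.diff(u_hat, t) + y*sp.diff(u_hat, xi))   # compensated time derivative
       - sp.Rational(1, 2)*sp.diff(u_hat, x, 2)      # (1/2) Laplacian in x
       - sp.Rational(1, 2)*sp.diff(u_hat, y, 2)      # (1/2) Laplacian in y
       + sp.Rational(1, 2)*sp.diff(u_hat, x)**2)     # Hamiltonian term

print(sp.simplify(lhs))                        # 0: the compensated HJB holds
print(sp.simplify(u_hat.subs(t, T) - x*xi))    # 0: terminal condition x * int_0^T omega_s ds
print(u_hat.subs({t: 0, xi: 0, y: 0}))         # -T**4/8: optimal value at time zero
```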
§.§ Comparison with the literature
A more classical approach to the path-dependent cost problem might be to make the problem Markovian by introducing the new state
variables Y_t:=W_t^0 and Ξ_t := ∫_0^t Y_s ds.
In these variables, the problem turns into a stochastic control problem with value function v(t,x,y,ξ) solving the degenerate HJB equation
-∂_t v(t,x,y,ξ) -1/2Δ_x v(t,x,y,ξ) -1/2Δ_y v(t,x,y,ξ) - y·∇_ξ v(t,x,y,ξ) + 1/2|∇_x v(t,x,y,ξ)|^2= 0
v(T,x,y,ξ)= x·ξ.
Observe the correspondence between this approach
with the lifted functional approach is
û(t,x,ω,y) = v(t,x,y, ∫_0^tω_s ds).
Then we can note that the compensated time derivative satisfies
∂_t^y û(t,x,ω,y) = ∂_t v(t,x,y, ∫_0^tω_s ds)+ y·∂_ξ v(t,x,y, ∫_0^tω_s ds),
establishing consistency between the two approaches.
We remark, however, that this more classical reasoning does not seem to work in general for
the other more complicated problems we study. Indeed, the desired structure to make the problem Markovian as above cannot be easily determined in advance.
Finally, given the lifted functional approach was motivated
by concepts from the literatures on the functional Itô formula
and path-dependent PDE theory,
we mention that there is a path-dependent PDE that
the functional u(t,x,ω) := û(t,x,ω, ω_t)
will satisfy that one may work with instead to arrive at the same solution.
Again, we refer the reader to Chapter 11 of Zhang <cit.>.
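Before turning to the mean field game setting, we also record a brief simulation check (a sketch of our own, with T = 1): simulating the state under the feedback α^*_t = -(T-t)W^0_t - ∫_0^t W^0_s ds derived above, the empirical cost should land near the optimal value -T^4/8 = -0.125, independently of the initial position, up to time-discretization and Monte Carlo error.

```python
import numpy as np

# Monte Carlo sketch (our own check, T = 1, arbitrary x_0) of the path-dependent
# cost problem under the optimal feedback alpha*_t = -(T-t) W^0_t - int_0^t W^0_s ds.
rng = np.random.default_rng(1)
T, n_steps, n_paths, x0 = 1.0, 200, 50_000, 3.0
dt = T / n_steps
t_grid = np.linspace(0.0, T, n_steps, endpoint=False)            # left endpoints

dW  = rng.normal(scale=np.sqrt(dt), size=(n_paths, n_steps))     # idiosyncratic noise
dW0 = rng.normal(scale=np.sqrt(dt), size=(n_paths, n_steps))     # common noise
W0  = np.cumsum(dW0, axis=1) - dW0                               # W^0 at left endpoints
I   = (np.cumsum(W0, axis=1) - W0) * dt                          # int_0^t W^0_s ds

alpha = -(T - t_grid) * W0 - I                                   # optimal feedback control
X_T = x0 + np.sum(alpha * dt + dW, axis=1)                       # Euler scheme for dX = alpha dt + dW
I_T = np.sum(W0, axis=1) * dt                                    # int_0^T W^0_s ds
cost = np.sum(0.5 * alpha**2 * dt, axis=1) + X_T * I_T
print(cost.mean(), -T**4 / 8)                                    # roughly -0.125 vs -0.125
```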
§ MEAN FIELD GAME WITH COMMON NOISE
§.§ The linear-quadratic data for the MFG problem
Let us recall the linear-quadratic data
from Problem 2 in Section <ref>:
writing μ̅:= ∫_^dξ μ(dξ), we set[We write b(t,x,μ,α) for the drift coefficient of the state process, as in
(<ref>).]
b(t,x,μ,α) : = b_t x+b̅_t μ̅ + α,
f(t,x,μ,α) := 1/2 ( |α|^2 + x^⊤ q_t x + (x - s_t μ̅)·q̅_t (x - s_t μ̅) ),
g(x,μ) := 1/2 ( x^⊤ q x + (x - s μ̅)·q̅ (x - s μ̅) ),
where we refer to Section <ref> for the description of these given parameters.
Now we make the ansatz that the solution of (<ref>) has the form
u_t(x) =
1/2 ( x ·Γ_t x + μ̅_t ·Γ^0_t μ̅_t +
2x ·Λ^0_t μ̅_t )
+ Δ_t
so that the optimal feedback function is given by
- ∂_x u_t(x)
= - Γ_t x - Λ^0_t μ̅_t.
Hence, we have
dX_t = ( (b_t - Γ_t) X_t +(b̅_t - Λ^0_t ) μ̅_t ) dt + σ dW_t + σ^0 dW^0_t,
and taking expectations of this equation conditional on ^^0_t
yields
dμ̅_t = ( b_t + b̅_t - Γ_t - Λ^0_t ) μ̅_t dt + σ^0 dW^0_t,
which has an explicit solution
of the form μ̅_t = μ̅(t,^0, W^0_t)
where
μ̅(t,ω, y)
= Φ_t (μ̅_0
+ ∫_0^t Φ_s^-1 (b_s + b̅_s - Γ_s - Λ^0_s) σ^0 ω_s ds )
+ σ^0 y,
where (Φ_t)_ is the solution of the matrix-valued ODE
Φ̇_t = (b_t + b̅_t -Γ_t - Λ^0_t) Φ_t, Φ_0 = 1.
Thus, the ansatz for the lifted value function becomes
û(t,x,ω,y) =
1/2 ( x ·Γ_t x + μ̅(t,ω, y) ·Γ^0_t μ̅(t,ω, y) +
2x ·Λ^0_t μ̅(t,ω, y) )
+ Δ_t.
Now we may begin computing
the terms appearing in the lifted functional
backward equation (<ref>).
As mentioned there, we will find it convenient for explicit calculations
to combine the time derivative and compensator
into the compensated time derivative ∂_t^y := ∂_t + _ω^y
defined in (<ref>).
We first compute
∂_t^y μ̅(t,ω, y)
=
(b_t + b̅_t - Γ_t - Λ^0_t) μ̅(t,ω, y).
∂_y μ̅(t,ω, y)
= σ^0
Then we have
∂_t^y û(t,x,ω,y)
= 1/2 ( x ·Γ̇_t x + 2x ·Λ̇^0_t μ̅(t,ω, y) + μ̅(t,ω, y) ·Γ̇^0_t μ̅(t,ω, y) )
+ Δ̇_t
+ (b_t + b̅_t - Γ_t - Λ^0_t) μ̅(t,ω, y) · ( Γ^0_t μ̅(t,ω, y)
+ Λ^0_t x )
and can further compute
∂_x û(t,x,ω,y) =
Γ_t x
+ Λ^0_t μ̅(t,ω,y), ∂_xx^2 û(t,x,ω,y) =
Γ_t
∂_y û(t,x,ω,y)
= σ^0 ( Γ^0_t μ̅(t,ω,y)
+ Λ^0_t x ), ∂_yy^2 û(t,x,ω,y)
= (σ^0)^⊤Γ^0_t σ^0,
∂_x ∂_y û(t,x,ω,y)
=
(σ^0)^⊤Λ^0_t
Now, the compensated HJB equation will take the form
- ∂_t^y û(t,x,ω,y) -1/2 ( [a ∂_xx^2 û(t,x,ω,y) ] + Δ_y û(t,x,ω,y)
+ 2 [σ^0 ∂_x ∂_y û(t,x,ω,y) ]
)
+ 1/2 | ∂_x û(t,x,ω,y) |^2
- ∂_x û(t,x,ω,y) · ( b_t x + b̅_t μ̅(t,ω, y) )
= 1/2 ( x^⊤ q_t x + (x - s_t μ̅(t,ω, y))^⊤q̅_̅t̅ (x - s_t μ̅(t,ω, y)) ),
with terminal condition
û(T,x,ω,y) = 1/2 ( x^⊤ q x + (x - s μ̅(T,ω, y))·q̅ (x - s μ̅(T,ω, y)) ).
Inputting the above calculations in the compensated equation gives
- 1/2 ( x ·Γ̇_t x + μ̅(t,ω, y) ·Γ̇^0_t μ̅(t,ω, y) +
2x ·Λ̇^0_t μ̅(t,ω, y) )
- Δ̇_t
-(b_t + b̅_t - Γ_t - Λ^0_t) μ̅(t,ω, y) · ( Γ^0_t μ̅(t,ω, y)
+ Λ^0_t x ) - 1/2 ( [a Γ_t ] + [ σ^0 (σ^0)^⊤Γ^0_t]
+ 2 [σ^0 (σ^0)^⊤Λ^0_t]
)
+ 1/2 | Γ_t x + Λ^0_t μ̅(t,ω,y) |^2 - ( Γ_t x + Λ^0_t μ̅(t,ω,y) )· ( b_t x + b̅_t μ̅(t,ω,y) )
= 1/2 ( x^⊤ q_t x + (x - s_t μ̅(t,ω,y))·q̅_t (x - s_t μ̅(t,ω,y)) ).
We now collect terms (symmetrizing for the squared terms) to arrive at the following closed system of Riccati equations:
* |x|^2 : Γ̇_t = Γ_t^⊤Γ_t - Γ_t^⊤ b_t - b_t^⊤Γ_t - ( q_t + q̅_t ) , Γ_T = q+q̅ ,
* x μ̅_t : Λ̇_t^0 = (Λ^0_t)^⊤Λ^0_t -(Λ^0_t)^⊤ (b_t + b̅_t - Γ_t) + Γ_t^⊤Λ^0_t - Γ_t^⊤ b̅_t - b_t^⊤ Λ_t^0 + q̅_t s_t, Λ^0_T = - q̅ s,
* μ̅_t^2 : Γ̇^0_t = -(b_t + b̅_t - Γ_t - Λ^0_t)^⊤Γ^0_t -(Γ^0_t)^⊤(b_t + b̅_t - Γ_t - Λ^0_t)
+ (Λ^0_t)^⊤Λ^0_t - (Λ^0_t)^⊤b̅_t - b̅_t^⊤Λ^0_t - s_t^⊤q̅_t s_t , Γ^0_T = s^⊤q̅ s,
* 1 : Δ̇_t = - 1/2[(σσ^⊤ + σ^0 (σ^0)^⊤) Γ_t ] - 1/2[ σ^0 (σ^0)^⊤Γ^0_t]
- [σ^0 (σ^0)^⊤Λ^0_t], Δ_T = 0.
Notice that the equations for Γ_t, Λ^0_t are quadratic
Riccati equations, while the equation for Γ^0_t
is linear.
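For a sense of what this system produces, the following sketch (our own, with d = 1 and arbitrarily chosen constant coefficients) integrates the four equations backward from their terminal conditions; the resulting Γ_0 and Λ^0_0 determine the time-zero optimal feedback -Γ_0 x - Λ^0_0 μ̅_0.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Backward integration of the Riccati system above (d = 1, constant coefficients of
# our own choosing): state vector v = (Gamma_t, Lambda^0_t, Gamma^0_t, Delta_t).
b, b_bar, s_, q_, q_bar = 0.3, 0.1, 0.8, 1.0, 0.5
sigma, sigma0, T = 0.4, 0.2, 1.0

def rhs(t, v):
    G, L0, G0, D = v
    dG  = G**2 - 2*b*G - (q_ + q_bar)
    dL0 = L0**2 - L0*(b + b_bar - G) + G*L0 - G*b_bar - b*L0 + q_bar*s_
    dG0 = -2*(b + b_bar - G - L0)*G0 + L0**2 - 2*L0*b_bar - s_*q_bar*s_
    dD  = -0.5*(sigma**2 + sigma0**2)*G - 0.5*sigma0**2*G0 - sigma0**2*L0
    return [dG, dL0, dG0, dD]

terminal = [q_ + q_bar, -q_bar*s_, s_*q_bar*s_, 0.0]             # values at t = T
sol = solve_ivp(rhs, (T, 0.0), terminal, dense_output=True, rtol=1e-8, atol=1e-10)
Gam, Lam0, Gam0, Delta = sol.sol(0.0)
print(f"Gamma_0 = {Gam:.4f}, Lambda^0_0 = {Lam0:.4f}, Gamma^0_0 = {Gam0:.4f}, Delta_0 = {Delta:.4f}")
```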
§.§ Discussion of the Solvability of the Riccati Equations
Standard ODE theory applies to guarantee there exists a unique solution to the system of equations for at least a short time. The only barrier to global existence is if the matrices Γ_t or Λ^0_t diverge (since the μ̅^2 equation is linear in Γ_t^0, it does not pose a barrier to global existence). An upper bound, in the sense of positive semidefinite matrices, for Γ_t always holds by a Gronwall argument: Γ_t≤ M_t, where M_t solves the linear ODE
Ṁ_t = -M_t b_t - b_t^⊤ M_t -(q_t+q̅_t), M_T=q+q̅.
A lower bound of Γ_t≥ 0 holds so long as q+q̅ and q_t+q̅_t remain positive semidefinite.
For Λ_t^0, we consider Λ̃_t= Γ_t+Λ_t^0, which solves:
d/dt Λ̃_t = Λ̃_t^⊤ Λ̃_t - b_t^⊤ Λ̃_t - Λ̃_t^⊤ (b_t+b̅_t) - (q_t+q̅_t- q̅_t s_t),
with Λ̃_T = q+q̅- q̅ s. We assume that q_t+q̅_t- q̅_t s_t is symmetric, and b̅_t is a scalar times the identity matrix, so that Λ̃_t remains symmetric.
Similar to the argument for Γ_t, there is a global solution so long as q_t+q̅_t- q̅_t s_t and q+q̅- q̅ s are positive semidefinite. This same result appears in <cit.>, where an example is also given that shows how solutions exist only for a finite time period if the positive semidefinite condition fails for the problem data (that is, q_t+q̅_t-q̅_t s_t≱0).
§.§ Comparison with the literature
The mean field game system with common noise
can be interpreted as the system of characteristics
for the master equation set on the Wasserstein space _2(^d)
of probability measures with finite second moment.
For the linear-quadratic data of (<ref>),
the master equation has the form
(see display (4.41) of Carmona-Delarue <cit.>):
- ∂_t U(t,x,μ) - 1/2 [(σσ^⊤ + σ^0 (σ^0)^⊤) ∂_xx^2 U(t,x,μ) ] - (b_t x + b̅_t μ̅_t) ·∂_x U(t,x,μ) + 1/2| ∂_x U(t,x,μ) |^2
- ∫_^d [ σ^0 (σ^0)^⊤∂_x∂_μ U(t,x,μ)(v) ] μ(dv) - 1/2∫_^d [(σσ^⊤ + σ^0 (σ^0)^⊤) ∂_v∂_μ U(t,x,μ)(v) ] μ(dv)
- 1/2∫_^d∫_^d [σ^0 (σ^0)^⊤∂_μμ^2 U(t,x,μ)(v,v') ] μ(dv) μ(dv')
- ∫_^d∂_μ U(t,x,μ)(v) · ( b_t v + b̅_t μ̅ -(∂_x U)(t,v,μ) ) μ(dv)= 1/2 x· q_t x + 1/2(x - s_t μ̅)·q̅_t (x - s_t μ̅) ,
(t,x,μ) ∈ [0,T) ×^d ×_2(^d)
U(T,x,μ) = 1/2 x· q x + 1/2 (x - sμ̅)·q̅ (x - sμ̅), (x,μ) ∈^d ×_2(^d)
Here, “∂_μ” is the gradient on the Wasserstein space P_2(),
which can formally be interpreted as “∂_v δ/δμ U(t,x,μ)(v),” with δ/δμ denoting the linear functional (i.e., Fréchet) derivative
in the vector space of all finite signed measures.
As mentioned above and as in display (22) of Cardaliaguet-Delarue-Lasry-Lions <cit.>,
the relationship between a solution (u_t(x), v_t(x), m_t(x))_ of the characteristic equations (<ref>)
and a solution U(t,x,μ) of
(<ref>) should be given by u_t(x) = U(t,x,m_t).
Hence, we expect to have the same ansatz
U(t,x,μ)=1/2 ( x ·Γ_t x + μ̅·Γ^0_t μ̅ +
2x ·Λ^0_t μ̅ )
+ Δ_t
We then compute
∂_t U(t,x,μ)
=
1/2 ( x ·Γ̇_t x + μ̅·Γ̇^0_t μ̅ +
2x ·Λ̇^0_t μ̅ )
+ Δ̇_t
∂_x U(t,x,μ)
=
Γ_t x + Λ^0_t μ̅, ∂_xx^2 U(t,x,μ)
=
Γ_t,
∂_μ U(t,x,μ)(v) =
Γ_t^0 μ̅ + x ·Λ^0_t, ∂_μμ^2 U(t,x,μ)(v) =
Γ_t^0,
∂_x ∂_μ U(t,x,μ)(v) =
Λ^0_t, ∂_v ∂_μ U(t,x,μ)(v) = 0.
Plugging these calculations
in the equation gives
- 1/2 ( x ·Γ̇_t x + μ̅·Γ̇^0_t μ̅ +
2x ·Λ̇^0_t μ̅ )
- Δ̇_t
- 1/2 [(σσ^⊤ + σ^0 (σ^0)^⊤) Γ_t ] - (b_t x + b̅_t μ̅_t) · ( Γ_t x + Λ^0_t μ̅ ) + 1/2| Γ_t x + Λ^0_t μ̅ |^2
- [ σ^0 (σ^0)^⊤Λ^0_t ]
- 1/2 [σ^0 (σ^0)^⊤Γ_t^0 ] - ( Γ_t^0 μ̅ + x ·Λ^0_t ) · ( b_t + b̅_t - Γ_t - Λ^0_t )μ̅
= 1/2 x· q_t x + 1/2 (x - s_t μ̅)·q̅_t (x - s_t μ̅).
We then arrive at the same set of equations as in Section <ref>.
§ MEAN FIELD GAME WITH CONTROLLED COMMON NOISE
Suppose we have a more general state process (X^_t)_ with dynamics of the form
dX_t = b(t,x,μ_t, α_t) dt
+ σ(t,x,μ_t,α_t) dW_t
+ σ^0(t,x,μ_t,α_t) dW^0_t.
Write a(t,x,μ,α):= ( σσ^⊤ +σ^0(σ^0)^⊤ ) (t,x,μ,α)
and define
α^*(t,x,μ, p,X,Q):=
_α{1/2 [a(t,x,μ,α) X ] + [σ^0(t,x,μ,α) Q ] + p · b(t,x,μ,α) + f(t,x,μ,α) }.
Then, given an ^^0-adapted measure flow = (μ_t)_, the stochastic HJB
will have the form: find a pair (u_t(x),v_t(x))_ of ^^0-adapted random fields such that
d_t u_t(x) = - 1/2 [a(t,x,μ_t,α^*(t,x,μ_t, ∂_x u_t(x), ∂_xx^2 u_t(x) ,∂_x v_t(x)) ) ∂_xx^2 u_t(x) ] dt
- [σ^0(t,x,μ_t,α^*(t,x,μ_t, ∂_x u_t(x), ∂_xx^2 u_t(x) ,∂_x v_t(x)) ) ∂_x v_t(x) ] dt
- ∂_x u_t(x) · b( t,x,μ_t,α^*(t,x,μ_t, ∂_x u_t(x), ∂_xx^2 u_t(x) ,∂_x v_t(x)) ) dt
- f( t,x,μ_t,α^*(t,x,μ_t, ∂_x u_t(x), ∂_xx^2 u_t(x) ,∂_x v_t(x)) ) dt + v_t(x) · dW^0_t
Besides being fully nonlinear, this stochastic HJB poses a new difficulty
of the optimizer α^* potentially introducing additional nonlinearities based
on the unknown random field v_t(x).
Fortunately, the lifted functional approach shows how to reduce consideration to a more
classical-looking scenario.
Indeed, the (fully nonlinear) compensated HJB equation involves finding a lifted functional û(t,x,ω,y) satisfying
- ∂_t û(t,x,ω,y) - 1/2 [A(t,x,m̂,α^*(t,x,m̂, ∂_x û, ∂_xx^2 û ,∂_xyû) ) D^2_(x,y)û ]
- ∂_x û· b( t,x,m̂,α^*(t,x,m̂ , ∂_x û, ∂_xx^2 û ,∂_xyû) ) = f( t,x,m̂,α^*(t,x,m̂, ∂_x û, ∂_xx^2 û ,∂_xyû) ) + _ω^y û ,
where D^2_(x,y) is the Hessian in (x,y) and where
A(t,x,μ,α) :=
[ ( σσ^⊤ +σ^0 (σ^0)^⊤)(t,x,μ,α) σ^0(t,x,μ,α); σ^0(t,x,μ,α) I_d ].
§.§ Linear Quadratic data for controlled volatility
For simplicity, we work in dimension d=1, though the manipulations below may be generalized to higher dimensions.
Set the linear-quadratic cost data similarly as before to
f(t,x,μ,α) := 1/2 ( |α -a̅_t|^2 + x^⊤ q_t x + (x - s_t μ̅)^⊤q̅_t(x - s_t μ̅) ),
g(x,μ) := 1/2 ( x^⊤ q x + (x - sμ̅)^⊤q̅ (x - sμ̅) ),
(so the only difference is that we add the given parameter a̅_t).
For the dynamics, we take
b(t,x,μ,α) : = b_t x+b̅_t μ̅, σ(t,x,μ,α):= σ, σ^0(t,x,μ,α):= α.
The optimality condition then becomes
α^*(t,x,μ, p,X,Q):=
_α{1/2α^2 X + α Q + 1/2 |α-a̅_t|^2 }
= (a̅_t-Q)/(1+X),
in the case that X>-1 and a minimizer exists. The compensated HJB becomes
- ∂_t^y û -
1/2 (σ^2 + ( (a̅_t-∂_xyû)/(1+∂_xx^2 û) )^2 ) ∂_xx^2 û
- ( (a̅_t-∂_xyû)/(1+∂_xx^2 û) ) ∂_xyû - 1/2Δ_y û
- ∂_x û ( b_t x+b̅_t μ̅ ) = 1/2 ( |(a̅_t-∂_xyû)/(1+∂_xx^2 û) -a̅_t |^2 + x^⊤ q_t x + (x - s_t μ̅)^⊤q̅_t(x - s_t μ̅) ).
Now let us suppose we adopt a similar ansatz as before, namely,
û(t,x,ω,y) =
1/2 ( x Γ_t x + μ̅(t,ω, y) Γ^0_t μ̅(t,ω, y) +
2x Λ^0_t μ̅(t,ω, y) )
+ Δ_t,
so that the optimal feedback function has the lifted form
α̂^*(t,ω,y)=
(a̅_t-Λ^0 ∂_y μ̅(t,ω,y))/(1+Γ_t)
But this expression is a bit problematic because the term “∂_y μ̅(t,ω,y)”
will likely involve the control itself.
To resolve this issue, let us search for the optimal control among
deterministic C^1 functions of time = (β_t)_.
Indeed, given such a function, the associated state dynamics will have the form
dX^β_t = ( b_t X^β_t +b̅_t μ̅^β_t) dt
+ σ dW_t
+ β_t dW^0_t.
As before, we can take expectations of
this equation conditional on
^^0_t to get
dμ̅^β_t = (b_t + b̅_t) μ̅^β_t dt + β_t dW^0_t.
And again, as before, the lifted functional of μ̅^β_t = μ̅^β(t,^0,W^0_t) can be solved explicitly as
μ̅^β(t,ω, y)
= Φ_t (μ̅_0
+ ∫_0^t Φ^-1_s [ (b_s + b̅_s) β_s - β̇_s ] ω_s ds )
+ β_t y,
where (Φ_t)_ is the solution of
Φ̇_t = (b_t + b̅_t)Φ_t, Φ_0 = 1.
From this last expression, we can then compute directly
∂_t^y μ̅^β(t,ω,y)= (b_t + b̅_t) μ̅^β(t,ω,y), ∂_y μ̅^β(t,ω,y) = β_t.
Hence, given a flow of measures = (μ_t)_ determined by a deterministic C^1 control = (β_t)_,
the optimal control will satisfy (now removing the dependence on ω,y)
α̂^*_t =
(a̅_t-Λ^0 β_t)/(1+Γ_t)
But the mean field game consistency condition suggests we will have β_t = α̂^*_t, resulting in a readily solved equation for α̂^*_t, namely,
α̂^*_t =
(a̅_t-Λ^0 α̂^*_t)/(1+Γ_t), so that α̂^*_t = (1+Γ_t + Λ^0_t)^-1a̅_t.
In particular, the optimal control α̂^*_t is a deterministic function of time
and the lifted function “μ̅(t,ω,y)” appearing in the ansatz for û(t,x,ω,y) may be taken to satisfy:
∂_t^y μ̅(t,ω,y)= (b_t + b̅_t) μ̅(t,ω,y), ∂_y μ̅(t,ω,y) = α̂^*_t = (1+Γ_t + Λ^0_t)^-1a̅_t.
At last, we can plug all these considerations into the compensated HJB to get
- 1/2 ( x Γ̇_t x + μ̅(t,ω, y) Γ̇^0_t μ̅(t,ω, y) +
2x Λ̇^0_t μ̅(t,ω, y) )
- Δ̇_t
- Γ^0_t (b_t + b̅_t) μ̅^2(t,ω, y) -
Λ^0_t (b_t + b̅_t) xμ̅(t,ω, y)
- 1/2 (σ^2 + ( a̅_t/(1+Γ_t + Λ^0_t) )^2 ) Γ_t
- ( a̅_t/(1+Γ_t + Λ^0_t) )^2 Λ^0 - 1/2Γ^0_t ( a̅_t/(1+Γ_t + Λ^0_t) )^2
- ( Γ_t x + Λ^0_t μ̅(t,ω, y) ) ( b_t x+b̅_t μ̅ ) = 1/2 ( |a̅_t/(1+Γ_t + Λ^0_t) -a̅_t |^2 + x q_t x + (x - s_t μ̅) q̅_t(x - s_t μ̅) ) ,
with terminal condition
û(T,x,ω,y) = 1/2 ( x^⊤ q x + (x - sμ̅(t,ω,y))^⊤ q̅ (x - sμ̅(t,ω,y)) ).
This leads to the following system of ODEs (that can be solved in the order presented):
* |x|^2 : Γ̇_t = -2 b_t Γ_t - ( q_t + q̅_t ) , Γ_T = q+q̅,
* x μ̅_t : Λ̇_t^0 = -Λ^0_t b_t - Γ_t b̅_t - Λ^0_t(b_t + b̅_t) + s_t q̅_t, Λ^0_T = - s q̅,
* μ̅_t^2 : Γ̇^0_t = -2 Γ_t^0(b_t + b̅_t) - 2 Λ^0_t b̅_t - s_t q̅_t s_t , Γ^0_T = s q̅ s,
* 1 : Δ̇_t = - 1/2 ( a̅_t/(1+Γ_t + Λ^0_t) )^2 ( Γ_t + 2Λ^0_t + Γ_t^0 ) - 1/2 ( σ^2 + a̅_t^2) , Δ_T = 0.
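Since the system is linear, it is also easy to integrate numerically; the short sketch below (our own toy constants, d = 1) recovers the deterministic optimal volatility control α̂^*_t = a̅_t/(1+Γ_t+Λ^0_t).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch with arbitrary scalar data (our own choice): integrate the linear ODEs for
# Gamma, Lambda^0, Gamma^0 backward from their terminal values, then evaluate the
# deterministic optimal control alpha*_t = a_bar_t / (1 + Gamma_t + Lambda^0_t).
b, b_bar, s_, q_, q_bar, T = 0.3, 0.1, 0.8, 1.0, 0.5, 1.0
a_bar = lambda t: 1.0 + 0.5 * t                      # hypothetical target parameter a_bar_t

def rhs(t, v):
    G, L0, G0 = v
    dG  = -2*b*G - (q_ + q_bar)
    dL0 = -L0*b - G*b_bar - L0*(b + b_bar) + s_*q_bar
    dG0 = -2*G0*(b + b_bar) - 2*L0*b_bar - s_*q_bar*s_
    return [dG, dL0, dG0]

terminal = [q_ + q_bar, -s_*q_bar, s_*q_bar*s_]      # values at t = T
sol = solve_ivp(rhs, (T, 0.0), terminal, dense_output=True, rtol=1e-8)

for t in (0.0, 0.5, 1.0):
    G, L0, _ = sol.sol(t)
    print(t, a_bar(t) / (1.0 + G + L0))              # optimal volatility loading at time t
```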
§.§ Discussion of Solvability of Riccati Equations
As the system of ODEs for Γ, Λ^0, and Γ^0 is linear, there always exists a unique solution. We require Γ_t>-1 in order for α^* to correspond to the minimum in the Hamiltonian. We then require Γ_t+Λ_t^0≠-1 so that there exists a fixed point. Both of these conditions hold in the case considered in Section <ref>, where we assume that q_t+q̅_t≥0, q+q̅≥ 0 and q_t+(1-s_t)q̅_t≥0, q+(1-s)q̅≥ 0, which implies that Γ≥ 0 and Λ^0≥ 0.
§.§ Comparison with the literature
As mentioned in the introduction, we do not know many references on mean field games
with control in the volatility
coefficient of the common noise except for the recent theoretical paper of Barasso-Touzi
<cit.>
and sporadic statements throughout Carmona-Delarue <cit.>.
However, we can still compare with an existing
explicitly solvable
model of controlled
volatility in a more
classical stochastic control setting.
For example,
Proposition 5.1 of Ankirchner-Fromm <cit.>
arrives at an optimal control that in our notation
would correspond to “a̅_t/(1+Γ_t)”.
It is interesting
that we instead arrive at a slightly
modified form “a̅_t/(1+Γ_t + Λ^0_t),”
since the control is entangled with the additional unknown process “v_t(x)”, as is clear from the stochastic HJB equation (<ref>).
§ MEAN FIELD GAME WITH COMMON NOISE AND PARTIAL INFORMATION
Recall we formulated the mean field game problem with common noise and partial information
as Problem 4 of Section <ref> with as an independent Brownian motion with covariance Θ̃. We will refer to the probability measure where is an independent Brownian as .
We will now work with a probability measure , where ^0 is still a standard Brownian motion,
but now is an independent Brownian motion with covariance Θ̃.
For given measures μ, η∈(^d) and a function p(x) on ^d,
we define the optimal feedback function under partial information as
α(t,μ,η,p) := _α∈^d{∫_^nη(dξ) [f(t,ξ,μ,α)+ p(ξ) · b(t,ξ,μ,α) ] }.
Next, for given flows = (μ_t)_, η = (η_t)_ of probability measures in (^d)
and of functions = (p_t(x))_ on ^d,
let ^, η, = (X^, η,_t)_ denote the solution to
dX_t = b(t,x,μ_t,α(t,μ_t, η_t,p_t)) dt + σ dW_t + σ^0 dW^0_t.
Lastly, define
M_t^, η, := exp{Θ̃^-1∫_0^t h(s,X_s^, η,) dZ_s -1/2Θ̃^-1∫_0^t |h(s,X_s^, η,)|^2 ds },
which is a martingale under .
Then define an equivalent probability measure ^, η, by
d ^, η, = M_t^, η, d, = (μ_t)_, η = (η_t)_, = (p_t(x))_.
We may now articulate the mean field game system with common noise and partial information:
Given any probability measure λ∈_2(^d),
find an ^^0,-adapted quintuple
(u_t(x),v_t(x),k_t(x), μ_t(x), η_t(x))_
of random fields on [0,T] ×^d satisfying the following system, consisting of a stochastic HJB equation coupled with a forward Kushner equation:
d_t u_t(x) = ( - 1/2 [ a ∂_xx^2 u_t(x) ] - ∂_x u_t(x) · b(t,x, α̂(t,μ_t,η_t,∂_x u_t),μ_t) - f(t,x,α̂(t,μ_t, η_t,∂_x u_t),μ_t) ) dt
+ ( v_t(x) ·σ^0 dW^0_t - [ σ^0 ∂_x v_t(x) ]dt ) + k_t(x) ·Θ̃^-1 ( dZ_t - h(t,x,μ_t)dt )
d_t η_t(dx) = ( 1/2 [∂_xx^2 ( a η_t(dx) ) ] - div_x [η_t(dx) b(t,x, α̂(t,η_t,u_t),μ_t) ] )dt
- div_x [ η_t(dx) σ^0 dW^0_t ] + η_t(dx) ( h(t,x,μ_t) - h̅(t,μ_t) )^⊤Θ̃^-1 ( dZ_t - h̅(t,μ_t) dt )
u_T(x) = g(x,μ_T), η_0 = λ, μ_t(dx) = ^, η,∂_x u [ η_t(dx)| _t^^0], h̅(t,μ_t):= ∫_^d h(t,ξ,μ_t) η_t(dξ).
The relatively explicit fixed point condition “μ_t(dx) = ^, η,∂_x u [ η_t(dx)| _t^^0]” can be seen as a consequence of the so-called Kallianpur-Striebel formula, which realizes η_t as the conditional law of the state given _t^^0,.
It is considered part of the implicit consistency condition
required of the system (<ref>).
Indeed, compared with the concrete control formulation of Problem 4 in Section <ref>,
we are trading the implicit condition required of the partial information constraint on the controls for the fixed point condition required of the solution loop of the system (<ref>), just as Bensoussan-Yam <cit.>.
§.§ Zakai-Stratonovich equation
We now assume that h(t,x,μ_t) = h(x), i.e., independent of t and μ_t.
To begin, we first trade the
nonlinear forward Kushner equation of (<ref>)
for its unnormalized counterpart, the so-called Zakai-Stratonovich equation:
d_t q_t(x) = ( 1/2 [ ∂_xx^2 ( σσ^⊤ q_t(x) ) ] - div_x [q_t(x) b(t,x, α(t,μ_t,η_t,u_t),μ_t) ] - q_t(x) 1/2 h(x)^⊤Θ̃^-1 h(x) )dt
- div_x [ q_t(x) σ^0 ∘ dW^0_t ] + q_t(x)h(x)Θ̃^-1∘ dZ_t
q_0(x) ∈ L^1_+(^d), η_t(dx) := q_t(x)/∫_^d q_t(ξ) dξ dx,
where ∘ denotes Stratonovich integration.
The point of the flow transformation method is to remove the noises in the above equation
via a suitable change of variables, thus reducing its solution
to a more classical, albeit random, PDE.
Following Section 3.4.2 of Souganidis <cit.>,
we will look for a solution of the form q_t(x) = S(t)w_t(x),
where S(t) is the solution map of the linear equation
d_t 𝔭_t(x) = - div_x [ 𝔭_t(x) σ^0 ∘ dW^0_t ] + 𝔭_t(x) h(x)Θ̃^-1∘ dZ_t.
More explicitly, this solution map is given by
S(t)f(x) = f(x-σ^0W^0_t) exp ( ∫_0^t h(x+σ^0(W^0_s - W^0_t))Θ̃^-1∘ dZ_s )
and thus
S^-1(t)f(x) = f(x+σ^0W^0_t) exp ( - ∫_0^t h(x+σ^0W^0_s)Θ̃^-1∘ dZ_s )
Now define
F(X,p,u,x,t) := 1/2 [σσ^⊤ X ] - p · K_t(x) + u · ( div_x K_t(x) - 1/2 h(x)^⊤Θ̃^-1 h(x) )
where here we employ the generic notation K_t(x) := b(t,x,μ_t, α(t,μ_t,η_t,∂_x u_t)).
Then Section 3.4.2 of Souganidis <cit.> shows that w_t(x) is a solution of the random PDE
∂_t w_t(x) = S^-1(t) F(∂_xx^2 [S(t)w_t(x)] , ∂_x [S(t)w_t(x)] , S(t)w_t(x), x,t ).
We thus have a functional dependence of the form
w_t(x) = w(t,x,^0,) := w(t,x,(W^0_s)_0≤ s < t,(Z_s)_0≤ s < t);
in particular, this line of reasoning shows how w_t(x) can be expressed as a functional of the paths of the strict prior history of the noises.
We will write out this dependence more explicitly in the system (<ref>) below,
where we will expand out the equation (<ref>).
Now further assume h(x) = Hx. Then the normalized measure takes the form
n_t(x) =
q_t(x)/∫_^d q_t(ξ) dξ
=
w_t(x-σ^0W^0_t) exp (Hx · Z_t ) /∫_^d w_t(ξ-σ^0W^0_t)
exp ( Hξ· Z_t ) dξ,
where it is significant that the stochastic integrals
arising from the solution map S(t) of (<ref>)
have canceled out.
Indeed, now that these stochastic integrals are gone,
we can conclude that n_t(x) admits the lifted functional
representation n̂(t,x,^0,, W^0_t,Z_t),
where
n̂(t,x,ω,γ,y,z) =
w(t,x-σ^0y,ω,γ) exp (Hx · z ) /∫_^dw(t,ξ-σ^0y,ω,γ)
exp ( Hξ· z ) dξ
Altogether, we expect the lifted functional form of the solution quintuple of the system (<ref>) to be
u_t(x) = û(t,x,^0,,W^0_t,Z_t), v_t(x) = (∂_y û)(t,x,^0,,W^0_t,Z_t), k_t(x) = (∂_z û)(t,x,^0,,W^0_t,Z_t),
m_t(x) = m̂(t,x,^0,W^0_t) with m̂(t,x,ω,y) = r(t,x-σ^0 y, ω),
n_t(x) = n̂(t,x,^0,,W^0_t,Z_t) with n̂(t,x,ω,γ,y,z) = w(t,x-σ^0y,ω,γ) exp (Hx · z ) /∫_^dw(t,ξ-σ^0y,ω,γ)
exp ( Hξ· z ) dξ,
where the triple (û(t,x,ω,γ,y,z), r(t,x,ω), w(t,x,ω,γ)) solves the following system of equations:
-∂_tû(t,x,ω,γ,y,z) -1/2 ( [a ∂_xx^2 û(t,x,ω,γ,y,z) ] + Δ_y û(t,x,ω,γ,y,z) )
- [σ^0 ∂_x ∂_y û(t,x,ω,γ,y,z) ] - 1/2[ Θ̃ ∂_zz^2 û(t,x,ω,γ,y,z)] - (∂_zû)(t,x,ω,γ,y,z) · Hx
= f (t,x, m̂(t,·,ω,y), α̂(t,m̂(t,·,ω,y),n̂(t,·,ω,γ,y,z), ∂_x û(t,·,ω,γ,y,z)) ) + _ω^y û(t,x,ω,γ,y,z),
∂_t w(t,x,ω,γ) = 1/2[σσ^⊤∂_xx^2 w(t,x,ω,γ)] + [σσ^⊤γ_t^⊤ H^⊤Θ̃^-1∂_x w(t,x,ω,γ)]
- div_x [ w(t,x,ω,γ) · b(t,x+σ^0 ω_t,m̂,α̂)] - w(t,x,ω,γ)
b(t,x+σ^0 ω_t,m̂,α̂) · H^⊤Θ̃^-1γ_t
+ w(t,x,ω,γ) ( 1/2[σσ^⊤γ_t^⊤ H^⊤Θ̃^-1 H γ_t] - 1/2 (x+σ^0ω_t)^⊤ H^⊤Θ̃^-1 H (x+σ^0ω_t) ),
∂_t r(t,x,ω)
= 1/2 [ σσ^⊤∂_xx^2 r(t,x,ω) ] + div_x [r(t,x,ω) b̅(t,x+σ^0ω_t,ω,ω_t) ],
û(T,x,ω,γ,y,z) = g(x, m̂(T,·,ω,y)), w(0,x,ω,γ)=r(0,x,ω) = ℓ(x).
Here, the “α̂” is given by
α̂(t,m̂(t,·,ω,y),n̂(t,·,ω,γ,y,z), ∂_x û(t,·,ω,γ,y,z))
:= argmin_α∈ℝ^d{∫_ℝ^d n̂(t,ξ,ω,γ,y,z) [f(t,ξ,α,m̂(t,·,ω,y))+ ∂_x û(t,ξ,ω,γ,y,z) · b(t,ξ,α,m̂(t,·,ω,y)) ] dξ}.
and
b(t,x,m̂,α̂) :=b(t,x,m̂(t,·,ω,y),α̂(t,x,m̂(t,·,ω,y),n̂(t,·,ω,γ,ω_t,γ_t), ∂_x û(t,·,ω,γ,ω_t,γ_t))).
b̅(t,x,ω,y):= 𝔼^m̂,n̂,∂_x û [ b(t,x,m̂(t,·,ω,y),α̂(t,x,m̂(t,·,ω,y),n̂(t,·,ω,𝐙,ω_t,Z_t), ∂_x û(t,·,ω,𝐙,ω_t,Z_t))) | ℱ_t^𝐖^0 ].
This last definition
indicates that, in contrast to (<ref>),
the system (<ref>) is not quite path-by-path
in the sense that it ostensibly
requires an average to determine m̂(t,x,ω,y).
Also, despite how involved this last expression might seem,
it is straightforward to compute the conditional drift
b̅(t,x,ω,y) in our linear-quadratic setting.
§.§ Linear-quadratic MFG with common noise and partial information
In the case of partial information, we can proceed
very similarly as in the case of full information in Section <ref>,
but with a few important modifications.
We now make the ansatz
u_t(x) =
1/2 ( x ·Σ_t x + μ̅_t ·Σ^0_t μ̅_t + η̅_t ·Σ^1_t η̅_t +
2x ·Λ^0_t μ̅_t + 2 x ·Λ^1_t η̅_t )
+ Δ_t
so that
∂_x u_t(x)
= Σ_t x + Λ^0_t μ̅_t + Λ^1_t η̅_t
and thus the control feedback of (<ref>) is given by
α̂(t,η_t,∂_x u_t) := - ∫_^d∂_x u_t(ξ) η_t(d ξ)
= - (Σ_t + Λ^1_t) η̅_t - Λ^0_t μ̅_t
Thus, we have
dX_t = (b_tX_t - (Σ_t+Λ^1_t)η̅_t + (b̅_t - Λ^0_t) μ̅_t ) dt + σ dW_t + σ^0 dW^0_t.
Recalling the definition (<ref>) of 𝔼^𝐦,η,∂_x u,
we then take the 𝔼^𝐦,η,∂_x u-conditional expectation given
ℱ^𝐖^0_t to get
dμ̅_t = (b_t + b̅_t -Σ_t - Λ^0_t -Λ^1_t) μ̅_t dt + σ^0 dW^0_t.
Letting L̂_t := (b_t + b̅_t -Σ_t - Λ^0_t -Λ^1_t),
the lifted functional of μ̅_t has the form
μ̅(t,ω, y)
= Φ_t (μ̅_0
+ ∫_0^t Φ_s^-1 L̂_s σ^0 ω_s ds )
+ σ^0 y,
where (Φ_t)_t∈[0,T] is the solution of
Φ̇_t = L̂_t Φ_t, Φ_0 = 1.
The equation for η̅_t needs to be
derived by computing the first moment
directly from the forward Kushner equation in (<ref>), which gives
d_t η̅_t
= ( (b_t - (Σ_t + Λ^1_t) - Π_t H^⊤Θ̃^-1H ) η̅_t + (b̅_t - Λ^0_t) μ̅_t ) dt
+ σ^0 dW^0_t
+Π_t H^⊤Θ̃^-1 dZ_t
where we define the variance Π_t := ∫_^dξ^2 η_t(dξ) - η̅_t^2.
If the initial condition is Gaussian, then this quantity is deterministic and classically satisfies
Π̇_t = σσ^⊤ + σ^0 (σ^0)^⊤
+ b_t^⊤Π_t + Π_t b_t - Π_t H^⊤ Θ̃_t^-1 H Π_t.
This procedure of estimating the state with η̅_t = 𝔼[X_t|ℱ_t^𝐖^0,𝐙] (classically, without the presence of the common noise)
is commonly known as the Kalman filter in a discrete-time context and as the Kalman-Bucy filter in a continuous-time context (see the seminal work <cit.> of
Kalman-Bucy).
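For a concrete feel for how the conditional covariance Π_t behaves, the deterministic Riccati equation displayed above is easy to integrate numerically. The following is a minimal sketch (ours, with purely illustrative constant coefficients b, σ, σ^0, H, Θ̃; nothing here is taken from the paper beyond the displayed equation):
import numpy as np
from scipy.integrate import solve_ivp

d = 2
b = np.array([[0.1, 0.0], [0.0, -0.2]])     # drift matrix b_t, taken constant for illustration
sigma = 0.3 * np.eye(d)                     # idiosyncratic noise coefficient
sigma0 = 0.2 * np.eye(d)                    # common noise coefficient
H = np.eye(d)                               # observation matrix, h(x) = Hx
Theta_inv = np.linalg.inv(0.5 * np.eye(d))  # inverse of the observation covariance Theta tilde

def riccati_rhs(t, Pi_flat):
    # right-hand side of the displayed covariance equation for Pi_t
    Pi = Pi_flat.reshape(d, d)
    dPi = (sigma @ sigma.T + sigma0 @ sigma0.T
           + b.T @ Pi + Pi @ b
           - Pi @ H.T @ Theta_inv @ H @ Pi)
    return dPi.ravel()

Pi0 = np.eye(d)                             # covariance of the Gaussian initial condition
sol = solve_ivp(riccati_rhs, (0.0, 1.0), Pi0.ravel(), rtol=1e-8)
print(sol.y[:, -1].reshape(d, d))           # Pi at the final time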
So relying on this strong consequence of the Gaussian assumption,
we can solve the resulting linear equation for η̅_t explicitly.
More precisely, let
L_t:= b_t - (Σ_t + Λ^1_t) -Π_t H^⊤ Θ̃^-1 H
and consider the solution (Ψ_t)_ of
Ψ̇_t = L_t Ψ_t, Ψ_0 = 1.
Then
η̅(t,x,ω, γ, y,z)
= Ψ_t ( μ̅_0 + ∫_0^t Ψ_s^-1 (b̅_s - Λ^0_s) μ̅(s,ω, ω_s) ds - ∫_0^t L_s Ψ_s^-1 σ^0 ω_s ds )
+ Ψ_t ( ∫_0^t ( L_s Ψ_s^-1Π_s + Ψ_s^-1Π̇_s ) H^⊤Θ̃^-1γ_s ds ) + σ^0 y + Π_t H^⊤Θ̃^-1 z
where Π̇_t is given by (<ref>).
The lifted value function is given by
û(t,x,ω,γ,y,z) =
1/2 x·Σ_t x + x·Λ^0_t μ̅(t,x,ω,y)+1/2 μ̅(t,x,ω,y) ·Σ^0_t μ̅(t,x,ω,y)
+ 1/2 η̅(t,x,ω,γ,y,z) ·Σ^1_t η̅(t,x,ω,γ,y,z) + x·Λ^1_t η̅(t,x,ω,γ,y,z) + Δ_t.
Note the compensated time derivative ∂_t^y,z := ∂_t + ^y,z_ω,γ of (<ref>) will now involve both path variables ω and γ.
Then the compensated HJB equation will take the form
- ∂_t^y,zû(t,x,ω,γ,y,z) -1/2 ( [a ∂_xx^2 û(t,x,ω,γ,y,z) ] + Δ_y û(t,x,ω,γ,y,z) )
- [σ^0 ∂_x ∂_y û(t,x,ω,γ,y,z) ] - 1/2[ Θ̃ ∂_zz^2 û(t,x,ω,γ,y,z)] - ∂_zû(t,x,ω,γ,y,z) · Hx
- ∂_x û(t,x,ω,γ,y,z) · ( b_t x + b̅_t μ̅(t,ω, y) + K̂(t,ω,γ,y,z) )
= 1/2 ( x· q_t x + (x - s_t μ̅(t,ω, y))·q̅_t (x - s_t μ̅(t,ω, y)) ) + 1/2 | K̂(t,ω,γ,y,z) |^2,
where
K̂(t,ω,γ,y,z) := -(Σ_t + Λ^1_t) η̅(t,ω,γ,y,z) - Λ^0_t μ̅(t,ω,y)
with terminal condition
û(T,x,ω,γ,y,z) = 1/2 ( x· q x + (x - sμ̅(T,ω, y))·q̅ (x - sμ̅(T,ω, y)) ).
Now we may begin computing
the terms appearing in the lifted functional
backward equation (<ref>).
We first compute
∂_t^y,z μ̅(t,ω, y) = (b_t + b̅_t - (Σ_t + Λ^1_t) - Λ^0_t) μ̅(t,ω, y),
∂_t^y,z η̅(t,ω,γ,y,z) = L_t η̅(t,ω,γ,y,z) + (b̅_t - Λ^0_t) μ̅(t,ω, y),
∂_y μ̅(t,ω, y) = ∂_y η̅(t,ω,γ,y,z) = σ^0, ∂_z η̅(t,ω,γ,y,z) = Π_t H^⊤Θ̃^-1.
We then compute
∂_t^y,zû(t,x,ω,γ,y,z)
= 1/2 ( x ·Σ̇_t x + 2x ·Λ̇^0_t μ̅(t,ω, y) + μ̅(t,ω, y) ·Σ̇^0_t μ̅(t,ω, y) )
+ 1/2η̅(t,ω,γ,y,z)^⊤Σ̇^1_t η̅(t,ω,γ,y,z) + x·Λ̇^1_t η̅(t,ω,γ,y,z) + Δ̇_t
+ (b_t + b̅_t -(Σ_t + Λ^1_t) - Λ^0_t)μ̅(t,ω, y) · (
Λ^0_t x + Σ^0_t μ̅(t,ω, y) )
+ (Σ^1_t^⊤ η̅(t,ω,γ,y,z) + Λ^1_t^⊤ x)· ( L_t η̅(t,ω,γ,y,z) + (b̅_t - Λ^0_t) μ̅(t,ω, y) )
and can further compute
∂_x û(t,x,ω,γ,y,z) =
Σ_t x
+ Λ^0_t μ̅(t,ω,y)+ Λ^1_t η̅(t,ω,γ,y,z), ∂_xx^2 û(t,x,ω,y) =
Σ_t
∂_y û(t,x,ω,γ,y,z)
= (σ^0)^⊤ ( Σ^0_t μ̅(t,ω,y) + Σ^1_t η̅(t,ω,γ,y,z)
+ (Λ^0_t + Λ^1_t)x ),
∂_yy^2 û(t,x,ω,γ,y,z)
= (σ^0)^⊤ (Σ^0_t+ Σ^1_t) σ^0,
∂_z û(t,x,ω,γ,y,z) = (Π_t H^⊤Θ̃^-1)^⊤ (Σ^1_t^⊤ η̅(t,ω,γ,y,z) + Λ^1_t ^⊤ x) ,
∂_zz^2 û(t,x,ω,γ,y,z) = (Π_t H^⊤Θ̃^-1)^⊤Σ^1_t Π_t H^⊤Θ̃^-1
∂_x ∂_y û(t,x,ω,γ,y,z)
=
(σ^0)^⊤ (Λ^0_t+Λ^1_t).
Inputting these calculations in the compensated equation gives
1/2 ( x ·Σ̇_t x + 2x ·Λ̇^0_t μ̅(t,ω, y) + μ̅(t,ω, y) ·Σ̇^0_t μ̅(t,ω, y) )
+ 1/2η̅(t,ω,γ,y,z)·Σ̇^1_t η̅(t,ω,γ,y,z) + x·Λ̇^1_t η̅(t,ω,γ,y,z) + Δ̇_t
+ (b_t + b̅_t -Σ_t -Λ_t^1- Λ^0_t)μ̅(t,ω, y) · (
Λ^0_t x + Σ^0_t μ̅(t,ω, y) )
+ (Σ^1_t^⊤ η̅(t,ω,γ,y,z) + Λ^1_t^⊤ x) · ( L_t η̅(t,ω,γ,y,z) + (b̅_t - Λ^0_t) μ̅(t,ω, y) )
+1/2[(σσ^⊤ + σ^0 (σ^0)^⊤) Σ_t ] + 1/2[σ^0 (σ^0)^⊤ (Σ^0_t+ Σ^1_t)] + [σ^0 (σ^0)^⊤ (Λ^0_t+Λ^1_t) ] + 1/2[ Π_t H^⊤ H Π_t^⊤Θ̃^-1Σ^1_t ]
+ (Σ^1_t^⊤ η̅(t,ω,γ,y,z) + Λ^1_t^⊤ x )·Π_t H^⊤Θ̃^-1 H x
+ ( Σ_t x
+ Λ^0_t μ̅(t,ω,y)+ Λ^1_t η̅(t,ω,γ,y,z) ) · ( b_t x + (b̅_t- Λ^0_t) μ̅(t,ω, y) - (Σ_t + Λ^1_t) η̅(t,ω,γ,y,z) )
+ 1/2 ( x· q_t x + (x - s_t μ̅(t,ω, y))·q̅_t (x - s_t μ̅(t,ω, y)) ) + 1/2 | (Σ_t + Λ^1_t)η̅(t,ω,γ,y,z) + Λ^0_t μ̅(t,ω,y) |^2 = 0.
We now collect terms (symmetrizing for the squared terms) to arrive at the following closed system of Riccati equations (note we anticipate the coefficient of “μ̅η̅” is 0):
* |x|^2 : Σ̇_t = - Σ_t^⊤ b_t - b_t^⊤Σ_t - ( q_t + q̅_t ) + Λ^1_t Π_t H^⊤ Θ̃^-1 H+ H^⊤ Θ̃^-1 H Π_t Λ_t^1^⊤ , Σ_T = q+q̅
* x μ̅ : Λ̇_t^0 = -Λ^0_t^⊤ (b_t + b̅_t -Σ_t-Λ_t^1-Λ^0_t ) -Λ^1_t (b̅_t - Λ^0_t)-Σ_t^⊤ (b̅_t - Λ^0_t)- b_t^⊤Λ^0_t + q̅_t s_t ,
Λ^0_T = - q̅ s
* μ̅^2 : Σ̇^0_t = -( b_t + b̅_t-Σ_t -Λ_t^1-Λ^0_t )^⊤Σ^0_t-Σ_t^0^⊤ ( b_t + b̅_t-Σ_t-Λ_t^1 -Λ^0_t ) - Λ^0_t^⊤ ( b̅_t-Λ^0_t)
-( b̅_t-Λ^0_t)^⊤Λ^0_t-Λ_t^0^⊤ Λ_t^0- s_t^⊤q̅_t s_t, Σ^0_t = s^⊤q̅ s
* 1 : Δ̇_t = - 1/2[(σσ^⊤ + σ^0 (σ^0)^⊤) Σ_t ] - 1/2[ σ^0 (σ^0)^⊤ (Σ^0_t + Σ^1_t) ]
- [σ^0 (σ^0)^⊤ (Λ^0_t + Λ^1_t)]
- 1/2[ Π_t H^⊤ H Π_t^⊤Θ̃^-1Σ^1_t ] , Δ_T = 0
* η̅^2 : Σ̇^1_t = - ( L_t^⊤Σ^1_t + (Σ^1_t)^⊤ L_t ) + ( (Σ_t + Λ^1_t)^⊤Λ^1_t + ( Λ^1_t)^⊤ (Σ_t + Λ^1_t) ) - (Σ_t + Λ^1_t)^⊤ (Σ_t + Λ^1_t),
Σ^1_T = 0
* x η̅ : Λ̇^1_t = - Λ^1_t L_t - H^⊤ Θ̃^-1 H Π_t^⊤ Σ^1_t^⊤ + Σ_t^⊤ (Σ_t + Λ^1_t) - b_t^⊤Λ^1_t, Λ^1_T = 0,
* μ̅η̅ : 0=-Σ^1_t (b̅_t - Λ^0_t) - Λ^1_t (b̅_t - Λ^0_t) + Σ_t Λ^0_t - Σ_t Λ^0_t + (Σ^1_t + Λ^1_t) (b̅_t - Λ^0_t).
§.§ Discussion of Solvability of the Riccati Equations
Notice that the equations for Σ_t and Λ^0_t are quadratic Riccati equations, while the equation for Σ^0_t is linear.
Now observe that
L_t+ Π_t H^⊤Θ̃^-1 H = b_t - (Σ_t + Λ^1_t).
Hence, if we add the Σ_t^1 and Λ^1_t^⊤
equations together, we see that Υ_t := Σ_t^1 + Λ^1_t^⊤ solves
Υ̇_t = - L_t^⊤Υ_t - Σ^1_t^⊤(b_t - (Σ_t + Λ^1_t)) + (Σ_t + Λ^1_t)^⊤ Σ_t - Λ^1_t^⊤ b_t
+ ( (Σ_t + Λ^1_t)^⊤Λ^1_t + ( Λ^1_t)^⊤ (Σ_t + Λ^1_t) ) - (Σ_t + Λ^1_t)^⊤ (Σ_t + Λ^1_t)
= - L_t^⊤Υ_t + Υ_t (Σ_t + Λ^1_t) -Υ_t b_t.
Since the terminal condition is Υ_T = Σ^1_T + Λ^1_T^⊤ = 0, we have Υ_t ≡ 0
and thus Σ^1_t = - (Λ^1_t)^⊤ = -Λ_t^1.
Note this implies that the coefficient
for the μ̅η̅ term disappears,
thus reducing the problem to the case of the first six
equations.
To solve these equations, we can look at Γ_t := Σ_t - Σ^1_t
and sum together the first and sixth equation
along with Λ^1_t = -Σ^1_t and the definition of L_t to get
Γ̇_t = Γ_t^⊤Γ_t - b_t^⊤Γ_t - Γ_t^⊤ b_t - (q_t+q̅_t), Γ_T = q+q̅,
which is the same quadratic Riccati equation as
for “Γ_t” in the case of full information
in Section <ref>.
Similarly, the equation for Λ_t^0 can be rewritten as:
Λ̇_t^0 = -Λ^0_t^⊤ (b_t + b̅_t -Σ_t-Λ_t^1-Λ^0_t ) -Λ^1_t (b̅_t - Λ^0_t)-Σ_t^⊤ (b̅_t - Λ^0_t)- b_t^⊤Λ^0_t + q̅_t s_t
= (Λ^0_t)^⊤Λ^0_t -(Λ^0_t)^⊤ (b_t + b̅_t - Γ_t) + Γ_t^⊤Λ^0_t - Γ_t^⊤ b̅_t - b_t^⊤ Λ_t^0 + q̅_t s_t,
which is the same quadratic Riccati equation as
for “Λ^0_t” in the case of full information
in Section <ref>.
Now, following the approach in Section <ref>, we can consider Λ̃_t = Γ_t + Λ_t^0, which satisfies:
d/dt Λ̃_t = Λ̃_t^⊤Λ̃_t - b_t^⊤ Λ̃_t - Λ̃_t^⊤ (b_t+b̅_t) - (q_t+q̅_t-q̅_t s_t).
Once again, under the conditions that q_t+q̅_t-q̅_t s_t, q+q̅-q̅ s are symmetric and positive semidefinite and that b̅_t is a scalar times the identity, we see that Λ̃_t is also symmetric and positive semidefinite and thus a unique global solution exists.
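Since the solvability question above reduces to the quadratic Riccati equations for Γ_t and Λ^0_t, their behaviour can also be checked numerically. The following is a minimal sketch (ours; all coefficient matrices are illustrative constants, with b̅_t a scalar multiple of the identity as assumed above) that integrates the two equations backward from their terminal conditions:
import numpy as np
from scipy.integrate import solve_ivp

d, T = 2, 1.0
b = 0.1 * np.eye(d)
b_bar = 0.05 * np.eye(d)                         # scalar times identity, as in the assumption above
q, q_bar, s = np.eye(d), 0.5 * np.eye(d), 0.3 * np.eye(d)

def rhs(t, y):
    Gamma = y[:d * d].reshape(d, d)
    Lam0 = y[d * d:].reshape(d, d)
    dGamma = Gamma.T @ Gamma - b.T @ Gamma - Gamma.T @ b - (q + q_bar)
    dLam0 = (Lam0.T @ Lam0 - Lam0.T @ (b + b_bar - Gamma)
             + Gamma.T @ Lam0 - Gamma.T @ b_bar - b.T @ Lam0 + q_bar @ s)
    return np.concatenate([dGamma.ravel(), dLam0.ravel()])

yT = np.concatenate([(q + q_bar).ravel(), (-q_bar @ s).ravel()])   # terminal data Gamma_T, Lambda^0_T
sol = solve_ivp(rhs, (T, 0.0), yT, rtol=1e-8)                      # integrate backward in time
Gamma_0 = sol.y[:d * d, -1].reshape(d, d)
Lambda0_0 = sol.y[d * d:, -1].reshape(d, d)
print(Gamma_0, Lambda0_0, sep="\n")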
Finally and most importantly,
the above manipulations embody the separation principle:
to go from the optimal feedback function (<ref>) in the case of full information
in Section <ref> to the case of partial information here,
one just needs to replace the state with the best guess of the state given
the common noise 𝐖^0 and the partial observation 𝐙.
This is exactly what we have just established.
[Separation Principle]
The optimal feedback control for a mean field game with common noise and a partial information constraint in the linear-quadratic framework with Gaussian initial condition has the linear feedback form
α̂(t,η_t,∂_x u_t) = - ∫_ℝ^d∂_x u_t(ξ) η_t(d ξ) = -Γ_t η̅_t - Λ^0_t μ̅_t,
where the coefficients Γ_t and Λ^0_t
satisfy the same equations as for the optimal feedback function
α^*(t,x) := -Γ_t x - Λ^0_t μ̅_t
in the case of full information.[We remind the reader that we also checked the consistency of these equations with the literature at the end of Section <ref>.]
Thus, the optimal control is determined “separately” from the partial observation in that
the latter only enters in the former through the conditional expectation η̅_t = 𝔼[X_t|ℱ_t^𝐖^0,𝐙], which solves the so-called Kalman filtering problem (again, see the seminal work <cit.> of Kalman-Bucy).
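To illustrate how the separation principle is used, the following scalar (d = 1) Euler-Maruyama sketch (ours) first solves the backward Riccati equations for Γ_t and Λ^0_t and the forward covariance equation for Π_t, and then simulates the closed loop with the feedback α_t = -Γ_t η̅_t - Λ^0_t μ̅_t. The coefficient values are illustrative, and the observation model dZ_t = H X_t dt + Θ̃^{1/2} dB_t is our reading of the filter equations above rather than a formula quoted from the text:
import numpy as np

T, N = 1.0, 1000
dt = T / N
b, b_bar, sigma, sigma0, H, Theta = 0.1, 0.05, 0.3, 0.2, 1.0, 0.5
q, q_bar, s = 1.0, 0.5, 0.3

# backward Riccati passes for Gamma_t and Lambda^0_t (explicit Euler on a grid)
Gamma = np.zeros(N + 1)
Lam0 = np.zeros(N + 1)
Gamma[N], Lam0[N] = q + q_bar, -q_bar * s
for n in range(N, 0, -1):
    dG = Gamma[n] ** 2 - 2 * b * Gamma[n] - (q + q_bar)
    dL = (Lam0[n] ** 2 - Lam0[n] * (b + b_bar - Gamma[n]) + Gamma[n] * Lam0[n]
          - Gamma[n] * b_bar - b * Lam0[n] + q_bar * s)
    Gamma[n - 1] = Gamma[n] - dt * dG
    Lam0[n - 1] = Lam0[n] - dt * dL

# forward pass: state X, observation Z, conditional means eta_bar (given W^0, Z) and mu_bar (given W^0)
rng = np.random.default_rng(0)
X, eta, mu, Pi = rng.normal(), 0.0, 0.0, 1.0
for n in range(N):
    dW, dW0, dB = rng.normal(0.0, np.sqrt(dt), 3)
    alpha = -Gamma[n] * eta - Lam0[n] * mu            # separation-principle feedback
    dZ = H * X * dt + np.sqrt(Theta) * dB             # observation increment (our modelling choice)
    K = Pi * H / Theta                                # Kalman gain Pi H^T Theta^{-1}
    eta += (b * eta + b_bar * mu + alpha - K * H * eta) * dt + sigma0 * dW0 + K * dZ
    mu += (b + b_bar - Gamma[n] - Lam0[n]) * mu * dt + sigma0 * dW0
    X += (b * X + b_bar * mu + alpha) * dt + sigma * dW + sigma0 * dW0
    Pi += (sigma ** 2 + sigma0 ** 2 + 2 * b * Pi - (Pi * H) ** 2 / Theta) * dt
print(X, eta, mu)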
§.§.§ Comparison with the literature
The recent article <cit.> of Bensoussan-Yam has demonstrated a close connection between the partial observation control problem and mean field theory; more precisely,
they use a master equation approach for the linear quadratic partial information problem without mean field interactions. This approach allows them to prove a separation principle, i.e., that the optimal control is a linear feedback of the expected state given the observation process, but without requiring the standard simplifying assumption that the initial distribution is Gaussian (a significant assumption that our calculations of Section <ref> notably rely on).
It is noted that the complications from non-Gaussian initial conditions only arise in the Kalman filter equations to determine the distribution conditioned on the observations, whereas the fact one arrives at a linear feedback control (<ref>) should not change.
In the context of our paper, the Kalman filter corresponds to the mean flow (η̅_t)_
of the solution η = (η_t)_
to the forward Kushner equation in (<ref>).
In the Gaussian case with linear-quadratic data,
when computing the covariance Π_t from the Kushner equation,
a term involving the third moment naturally arises, but this can
be expressed in terms of the second moment,
thus leading to the deterministic Riccati equation (<ref>) for the covariance Π_t.
It is not yet clear to the authors whether the approach of <cit.> could be adapted
to the mean field game problem with partial information to generalize the solution outside of the case of Gaussian initial conditions.
§ DERIVATION, DIFFICULTIES, AND SOME CALCULATIONS
This optional appendix
first sketches how we derived the lifted functional approach. We then turn to some difficulties the reader might want to keep in mind
when pursuing this perspective.
Finally, we close with some enlightening
calculations involving the compensator based on the Fréchet derivative.
§.§.§ Derivation of the lifted functional approach
For the sake of simplicity, we take σ = σ^0 = I_d
and a quadratic Hamiltonian H(t,x,p) := 1/2|p|^2.
We then perform the change of variables x ↦ x-y
so that we can reduce the form of the lifted functional system (<ref>) to
finding a pair (u(t,x,ω,y), r(t,x,ω))
satisfying
-∂_t u(t,x,ω,y) - 1/2 ( Δ_x + Δ_y ) u(t,x,ω,y) + 1/2 | ∂_x u(t,x,ω,y)|^2 = f(t,x,y,r(t,·, ω)) + _ω^y u(t,x,ω,y),
∂_t r(t,x,ω)
= 1/2Δ_x r(t,x,ω) + div_x [r(t,x,ω)∂_x u(t,x,ω,ω_t) ],
u(T,x,ω,y) = g(x,y,r(T,·,ω)), r(0,x,ω) = ℓ(x),
where
f(t,x,y,ρ):= f(t,x+y,ρ(· - y)), g(x,y,ρ):= g(x+y,ρ(· - y)).
To make explicit the connection, once we find a solution to (<ref>),
we immediately recover a solution to the original system (<ref>)
by setting
û(t,x,ω,y) := u(t,x-y,ω,y), m̂(t,x,ω,y):= r(t,x-y,ω).
Now let = (B_t)_ be a d-dimensional Brownian motion independent
of .
Also, given
a path ω∈Ω,
we write
W^t,ω,y_s := ω_s 1_[0,t)(s) + [y + W_s - W_t ] 1_[t,T](s).
We consider the candidate solution of the compensated backward HJB of (<ref>) given by the BSDE representation
u(t,x,ω,y) := Y^t,x,ω,y_t, (t,x,ω,y) ∈ [0,T] ×^d
×Ω×^d,
where for each (t,x,ω,y) ∈ [0,T) ×^d ×Ω×^d, the
triple (Y^t,x,ω,y_s, (Z^t,x,ω,y_s, Γ_s^t,x,ω,y))_t ≤ s≤ T
satisfies the “lifted” BSDE (see Peng <cit.>)
Y^t,x,ω,y_s = g(B^t,x_T, W^t,y_T, r(T,·, 𝐖^t,ω, y ) )
+ ∫_s^T f(θ,B^t,x_θ, W^t,y_θ, r(θ, ·, 𝐖^t,ω, y ) ) dθ
- 1/2∫_s^T |Z^t,x,ω,y_θ|^2 dθ - ∫_s^T [Z^t,x,ω,y_θ· dB_θ + Γ_θ^t,x,ω,y· dW_θ ].
We now sketch how to go
from the candidate solution u(t,x,ω,y) := Y^t,x,ω,y_t
as in (<ref>) to the form of compensated backward HJB
of the system (<ref>).
First, it is readily seen that the concatenated path W^t,ω,y_· still satisfies the flow property:
W^s,W^t,ω,y_·,W^t,ω,y_s_r = W^t,ω,y_r, for t ≤ s ≤ r ≤ T.
This in turn will ensure we have
a corresponding flow property at the level of the BSDE:
u(t+ϵ,B^t,x_t+ϵ,W^t,ω,y,W^t,y_t+ϵ) = Y^t,x,ω,y_t+ϵ.
Arguing as in Theorem 3.2 of Pardoux-Peng <cit.>, this leads us to consider
the decomposition
u(t+ϵ,x,ω,y) - u(t,x,ω,y) = [u(t+ϵ,B^t,x_t+ϵ,W^t,ω,y,W^t,y_t+ϵ) - u(t,x,ω,y) ] (1st difference)
+ [u(t+ϵ,x,ω,y) - u(t+ϵ,B^t,x_t+ϵ,ω, W^t,y_t+ϵ)) ] (2nd difference)
+ [u(t+ϵ,B^t,x_t+ϵ,ω, W^t,y_t+ϵ)) - u(t+ϵ,B^t,x_t+ϵ,W^t,ω,y, W^t,y_t+ϵ) ] (3rd difference)
The first difference in (<ref>) can be expressed in terms of the BSDE by the flow property (<ref>), and thus upon dividing by ϵ>0, taking expectations, and letting ϵ→ 0,
it will contribute the term “1/2| ∂_x u(t,x,ω)|^2 - f(t,x,y,r(t,·, ω))”.
Next, by an application of the completely classical Itô formula, the second
difference in (<ref>) will contribute “-1/2(Δ_x + Δ_y)u(t,x,ω,y)”.
Finally, for the third difference of (<ref>), write X(ϵ):= B^t,x_t+ϵ, Y(ϵ):= W^t,y_t+ϵ, and
H^t,ω,y_s(ϵ) := [y-ω_s+W_s - W_t]1_[t,t+ϵ)(s).
Then the third difference of (<ref>) can be written as
u(t+ϵ,X(ϵ),ω, Y(ϵ)) - u(t+ϵ,X(ϵ),ω+H^t,ω,y(ϵ), Y(ϵ))
where X(ϵ),Y(ϵ) → x,y as ϵ→ 0. Hence, up to stochastic arguments that
will not contribute given suitable joint regularity, we identify the compensator (<ref>) as the limit of the final difference in (<ref>):
lim_ϵ→ 0ϵ^-1 [ u(t+ϵ,X(ϵ),ω+H^t,ω,y(ϵ), Y(ϵ)) - u(t+ϵ,X(ϵ),ω, Y(ϵ)) ] = _ω^y u(t,x,ω,y).
§.§.§ Some difficulties with the lifted functional approach
There are a few issues to deal with that the reader should keep in mind when adopting this perspective:
* The property of being a lifted functional is not a closed condition; for example, consider
ψ̂_ϵ(t,ω,y):= ϵ^-1∫_t-ϵ^t g(ω_s) ds + h(y)
Then as ϵ↓ 0, we have ψ̂_ϵ(t,ω,y) → g(ω_t) + h(y), which no longer separates the present value from the strict prior history.
* The compensated HJB method is not valid for functional data that is too sensitive to a jump nearby a fixed time. For example, consider the path-dependent heat equation (see Cosso-Russo <cit.> for the definition of the vertical ∂_ω^V and horizontal ∂_t^H path dependent derivatives):
-∂_t^H u(t,ω) - 1/2∂_ωω^V u(t,ω) = 0
u(T,ω) = G(ω).
This equation admits the candidate[See Chapter 11 of Zhang <cit.> or Cosso-Russo <cit.> (and references therein) for details on realizing this expression as a viscosity solution of a path dependent PDE.] solution u(t,ω) = [ G(^t,ω)], where
W^t,ω_s := ω_s 1_[0,t)(s) + [ω_t + W_s - W_t] 1_[t,T](s).
Now suppose the terminal condition G(ω) admits the lifted functional form G(ω) = Ĝ(ω,ω_t).
Then we would like to say that û(t,ω,y) := G(^t,ω,y) is a solution of
the compensated heat equation
-∂_t û(t,ω,y) - 1/2∂_yyû(t,ω,y) = _ω^y û(t,ω,y)
û(T,ω,y) = Ĝ(ω,y).
But this is not always true. Indeed, the choice G(ω) = Ĝ(ω,ω_T) = sup_0≤ s < T |ω_s| ∨ |ω_T| provides a counterexample.
Although one can show û(t,ω,y) := G(^t,ω,y) satisfies a certain classical heat equation (see Section 3.2 of Cosso-Russo <cit.> or Example 11.1.2(iii) of Zhang <cit.>),
the uniform metric is very sensitive to a jump nearby a fixed time, so one cannot
compute the compensator _ω^y û(t,ω,y). Thus, û(t,ω,y) cannot be realized as a solution of a compensated heat equation.
However, Laplace's principle allows us to approximate the uniform metric as
sup_0≤ s ≤ T |ω_s| = lim_N →∞ N^-1log∫_0^T e^N|ω_s|ds,
where each approximating terminal datum G_N(ω):= N^-1log∫_0^T e^N|ω_s|ds is not too sensitive to jumps (a small numerical check of this approximation is sketched just after this list).
One can show that the candidates û_N(t,ω,y) := G_N(𝐖^t,ω,y) are C^2 solutions to compensated heat equations that converge to the viscosity solution û(t,ω,y) := sup_0≤ s ≤ T | W^t,ω,y_s| of the path-dependent heat equation with terminal data G(ω) = sup_0≤ s ≤ T |ω_s|.
* To expand on the previous point, the compensator _ω^y of (<ref>) seems to require leaving the framework of continuous paths.
In fact, equivalent formulas based on the Frechet derivative for the compensator
_ω^y of even basic functionals naturally involve evaluating on paths that are either left or right continuous (or even neither! See the expression (<ref>) below).
Hence, there are at least a few reasons that one
may want to avoid working on the Skorokhod space of right continuous paths with left limits, in contrast to much of the literature on functional Itô formula and path dependent PDEs (though there are notable exceptions, like Cosso-Russo <cit.> and Zhang <cit.>).
Fortunately, one can adapt and extend the seminorm topology of Section 2.2 from Cosso-Russo <cit.> to our setting, which
formalizes the notion of a path dependent functional being “not too sensitive to a possible jump nearby any given fixed time t.”
Fix t ∈ [0,T]. Then for each fixed M>0,
consider the space _t,M([0,T];^d)
of paths bounded by M and continuous on [0,T] except for
possibly a jump at time t.
Endow _t,M([0,T];^d) with the topology
associated to the metric[Note here we use a more standard looking metric since the restriction to bounded paths allows us to avoid the arguably more abstract Frechet-type metric construction “∑_k=1^∞ 2^-k[ω-η]_t,k/1+[ω-η]_t,k,” which does not appear as good for checking estimates. ]
d_t(ω,η):= ∑_k=1^∞ 2^-k [ω-η]_t,k,
induced by an increasing countable family of seminorms of the form
[ω]_t,k := sup_0≤ s ≤ t-2^-k |ω_s| + |ω_t| + sup_t+2^-k≤ s ≤ T |ω_s|.
Then finally,
consider the space _t([0,T];^d):= ∪_M>0_t,M([0,T];^d) endowed
with the smallest topology such that all the inclusions _t,M([0,T];^d) ↪_t([0,T];^d) are continuous.[We remark that this “inductive topology” on _t([0,T];^d) is not metrizable.]
More concretely, η^N converges to η in _t([0,T];^d)
if there is an M>0 such that ‖η^N ‖_∞≤ M for all N, and for all k ≥ 1, [η^N - η]_t,k→ 0 as N →∞; in particular, sequences cannot form arbitrarily large jumps near the given time t, but are allowed to form a double jump at time t in the limit (which occurs naturally in (<ref>) below).
In summary,
to rigorize the definition of compensator (<ref>),
we can restrict to strictly non-anticipative functionals ψ(t, ω) of continuous paths ω∈Ω that are not too sensitive to a possible formation of a jump nearby any given time s ∈ [0,t].
Despite ψ(t, ω) only being defined on continuous paths Ω,
such functionals admit a unique continuous extension
to each _s,M([0,T];^d) for any M>0, and thus to _s([0,T];^d), for any s ∈ [0,t].
This stronger continuity assumption for functionals ϕ(t,ω) of continuous paths can also be shown to be compatible with the general Arzela-Ascoli criterion (Theorem 47.1 of Munkres <cit.>), which should be
convenient for a possible fixed point argument
for the main lifted functional mean field game system (<ref>).
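As promised in the second item above, here is a small numerical sanity check (ours, not from the text) that the Laplace-type smoothing G_N(ω) = N^{-1} log ∫_0^T e^{N|ω_s|} ds approaches the uniform norm sup_{0≤s≤T}|ω_s| on a discretized path; a log-sum-exp rewriting is used to avoid overflow for large N:
import numpy as np

rng = np.random.default_rng(1)
T, M = 1.0, 10_000
t = np.linspace(0.0, T, M + 1)
omega = np.cumsum(rng.normal(0.0, np.sqrt(T / M), M + 1))   # a rough discrete Brownian-like path
omega[0] = 0.0
a = np.abs(omega)

sup_norm = a.max()
for N in (10, 100, 1000):
    m = N * sup_norm
    integral = np.sum(np.exp(N * a - m)) * (T / M)           # Riemann sum of exp(N|omega_s| - m)
    G_N = (m + np.log(integral)) / N
    print(N, G_N, sup_norm)                                  # G_N approaches sup_norm as N grows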
§.§ Some compensator calculations with the Fréchet derivative
Suppose G(t,ω) is an ^d-valued non-anticipative functional
on [0,T] ×Ω, so for each t ∈ [0,T],
ω↦ G(t,ω) can be thought of as a function on C_0([0,t];^d).
Fix t ∈ [0,T].
We denote the Fréchet derivative of ω↦ G(t,ω)
by D_ω G(t,ω), which is an ^d-valued signed Radon
measure on [0,t], so for any η∈ C_0([0,t];^d),
⟨ D_ω G(t,ω), η⟩ = ∫_0^t η_s D_ω G(t,ω)(ds).
We write D_ω^t G(t,ω) := D_ω G(t,ω)({t}) δ_{t }
and D^⊥_ω G(t,ω) := D_ω G(t,ω) - D_ω^t G(t,ω)
to get the Lebesgue decomposition
D_ω G(t,ω) = D^⊥_ω G(t,ω) + D_ω^t G(t,ω)
Now if ω↦ G(t,ω) is continuous with respect
to the seminorm topology determined by d_t of (<ref>)
so it admits a unique extension to _t([0,T];^d),
then we can define its lifting by
Ĝ(t,ω,y):= G(t,ω+[y-ω_t] 1_{t}).
If y ↦Ĝ(t,ω,y) is differentiable,
then D_ω^t G(t,ω) = ∂_y Ĝ(t,ω,ω_t) δ_{t }.
If D^⊥_ω G(t,ω) is absolutely continuous
with respect to Lebesgue measure,
then we write its density as δ_ω G(t,ω)(r), r ∈ [0,t].
Supposing ω↦δ_ω G(t,ω)(r)
is also continuous with respect
to the seminorm topology determined by d_t of (<ref>),
we also write δ_ωĜ(t,ω,y)(r) = δ_ω G(t,ω+[y-ω_t] 1_{t})(r).
Putting everything together, we have
D_ω G(t,ω)(ds) = δ_ωĜ(t,ω,ω_t)(s) ds
+ ∂_y Ĝ(t,ω,ω_t) δ_{t }(ds).
Finally,
suppose G(t,ω) is strictly non-anticipative
and that for any 0 ≤ t ≤ s ≤ T,
both ω↦ G(s,ω) and ω↦δ_ω G(s,ω)(s)
are continuous with respect to d_t.
Then we can formally compute, for every t ∈ [0,T) and s ∈ [t,T],
the compensator (<ref>) of Ĝ(s,ω,y) as
_ω^y G(s,W^t,ω,y) = ∫_0^1 δ_ω G(s,W^t+,ω,y + θ [y-ω_t] 1_{t} )(t) · [y-ω_t] dθ
= ∫_0^1 δ_ω G(s,(1-θ) W^t+,ω,y + θ W^t,ω,y )(t) · [y-ω_t] dθ,
where W^t, ω, y_s was
defined in (<ref>) while its left-continuous version W^t+,ω,y
is defined as
W^t+,ω,y_s := ω_s 1_[0,t](s) + [y + W_s - W_t ] 1_(t,T](s).
As a prototype example, consider G(s,ω) = ∫_0^s F̂(r,ω,ω_r) dr,
where F̂(t,ω,y) is a lifted functional on [0,T] ×Ω×^d.
Then one can compute for 0 ≤ℓ≤ s,
δ_ω G(s,ω)(ℓ) = ∫_ℓ^s δ_ωF̂(r,ω,ω_r)(ℓ) dr
+ (∂_y F̂)(ℓ,ω,ω_ℓ).
For example, if F̂(t,ω,ω_t) := ∫_0^t h(ω_r)dr + g(ω_t),
then by combining the formula (<ref>) with the calculation
(<ref>), the compensator takes on the form
_ω^y G(s,W^t,ω,y) = (s-t) [h(y) - h(ω_t)] + g(y) - g(ω_t).
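As a quick consistency check of this closed form (ours, not in the text): for this prototype example the density from (<ref>) reduces, in our reading, to δ_ω G(s,ω)(ℓ) = (s-ℓ) h'(ω_ℓ) + g'(ω_ℓ), so the θ-integral in the compensator formula becomes ∫_0^1 [ (s-t) h'(ω_t + θ(y-ω_t)) + g'(ω_t + θ(y-ω_t)) ] · (y-ω_t) dθ, which by the fundamental theorem of calculus equals (s-t)[h(y)-h(ω_t)] + g(y)-g(ω_t). A few lines of quadrature confirm this for a concrete choice of h and g:
import numpy as np

h, dh = np.sin, np.cos
g = lambda x: x ** 3
dg = lambda x: 3 * x ** 2
s, t, w_t, y = 2.0, 0.7, 0.4, 1.3            # arbitrary times and present values
delta = y - w_t

theta = np.linspace(0.0, 1.0, 100_001)
x = w_t + theta * delta
f = ((s - t) * dh(x) + dg(x)) * delta
quadrature = np.sum((f[1:] + f[:-1]) / 2 * np.diff(theta))   # trapezoid rule over theta
closed_form = (s - t) * (h(y) - h(w_t)) + g(y) - g(w_t)
print(quadrature, closed_form)               # the two values agree up to quadrature error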
§.§ Acknowledgments
The first author would like to thank many people: Daniel Lacker, for helping identify a critical error in an early reference, which, in order to fix, led to the discovery of the need for the compensator;
Andrea Cosso and Francesco Russo, for many helpful correspondences; Nizar Touzi, for pointing out useful references;
Nikiforos Mimikos-Stamatopoulos, for regular discussions of technical concepts; and finally and most importantly,
Takis Souganidis, who helped guide
the lifted functional perspective from its inception as well as provide many critical suggestions for this paper.
|
http://arxiv.org/abs/2306.11682v1
|
20230620165851
|
SkyGPT: Probabilistic Short-term Solar Forecasting Using Synthetic Sky Videos from Physics-constrained VideoGPT
|
[
"Yuhao Nie",
"Eric Zelikman",
"Andea Scott",
"Quentin Paletta",
"Adam Brandt"
] |
cs.CV
|
[
"cs.CV"
] |
SkyGPT: Probabilistic Short-term Solar Forecasting Using Synthetic Sky Videos from Physics-constrained VideoGPT
Y. Nie, E. Zelikman, A. Scott, Q. Paletta, A. Brandt
July 31, 2023
=================
In recent years, deep learning-based solar forecasting using all-sky images has emerged as a promising approach for alleviating uncertainty in PV power generation. However, the stochastic nature of cloud movement remains a major challenge for accurate and reliable solar forecasting. With the recent advances in generative artificial intelligence, the synthesis of visually plausible yet diversified sky videos has potential for aiding in forecasts. In this study, we introduce SkyGPT, a physics-informed stochastic video prediction model that is able to generate multiple possible future images of the sky with diverse cloud motion patterns, by using past sky image sequences as input. Extensive experiments and comparison with benchmark video prediction models demonstrate the effectiveness of the proposed model in capturing cloud dynamics and generating future sky images with high realism and diversity. Furthermore, we feed the generated future sky images from the video prediction models into 15-minute-ahead probabilistic solar forecasting for a 30-kW roof-top PV system, and compare the results with an end-to-end deep learning baseline model SUNSET and a smart persistence model. Better PV output prediction reliability and sharpness are observed when using the predicted sky images generated with SkyGPT compared with other benchmark models, achieving a continuous ranked probability score (CRPS) of 2.81 (13% better than SUNSET and 23% better than smart persistence) and a Winkler score of 26.70 for the test set. Although an arbitrary number of futures can be generated from a historical sky image sequence, the results suggest that 10 future scenarios is a good choice that balances probabilistic solar forecasting performance and computational cost.
§ INTRODUCTION
Renewable energy sources, such as solar photovoltaics (PV), will be the key component for future power systems <cit.>. One of the challenges for large-scale PV integration is unstable power generation. PV output can greatly fluctuate on short time horizons due to cloud passage events <cit.>. Although this variability can be filled in by dispatchable resources, such as gas turbines, in the current grid, it will become a critical issue as transition to a high-renewable energy system continues. Accurate short-term solar forecasting systems are therefore needed to reduce the uncertainty in PV power generation, help grid operators optimize assets and dispatch priorities, and minimize mitigation costs (e.g., batteries).
Ground-based sky images have emerged as a promising approach to observe the surrounding cloud cover for short-term[Although as yet there is no common agreement on the classification criterion, we use the definition of forecasting horizon less than 30 minutes in this study for short-term solar forecasting <cit.>.] solar forecasting <cit.> due to their high temporal (from seconds to minutes) and spatial resolution (<1x kms) <cit.>. Satellite imagery and numerical weather prediction, on the other hand, with coarse temporal (from minutes to 10x hours for satellite, from minutes up to 1000x hours for NWP) and spatial resolution (1x∼100x kms) <cit.>, fit better for medium- to long-term forecasting at a scale of a few hours to a day ahead.
Traditional image-based forecasting methods focused on handcrafted feature engineering of sky images. Extracted features, such as red-blue ratio, cloud coverage, and cloud motion vectors, are used for building physical deterministic models <cit.> or training machine learning models <cit.>. In the past five years, with the development of computer vision techniques, efforts have shifted to build end-to-end deep learning models that correlate the future PV ouput (or solar irradiance) with the corresponding historical sky image sequence as well as other auxiliary input such as the past PV output (or solar irradiance), sun angles and/or weather data. These deep learning models often rely on convolutional neural networks (CNNs), either using CNNs solely <cit.> or combining CNNs with recurrent neural networks (RNNs), like long short-term memory (LSTM) <cit.>.
Existing deep learning-based solar forecasting models often suffer from temporal lags in prediction <cit.>, especially on cloudy days when the power output of a PV system can drastically change within a short time (so-called ramp events). This indicates that the models tend to rely on past observations to make predictions and that cloud dynamics are not well captured. Predicting cloud motion is very challenging as clouds can continuously deform and condense or evaporate during movement <cit.>. These stochastic behaviors of clouds are the main cause of uncertainty of PV panel output. However, the complex physics involved in cloud propagation is hard to learn via the end-to-end training of solar forecasting models. One approach is to use a dedicated cloud motion prediction model to better capture the cloud dynamics. Good cloud motion predictions should not only capture the general trend of cloud movement, but also account for the stochasticity. Traditional image-based motion prediction methods, such as particle image velocimetry <cit.> and optical flow <cit.>, which conduct patch matching by assuming linear propagation of objects, are challenged by the complexity of clouds.
Another issue with current solar forecasting research is that prediction uncertainty is seldom quantified. Most existing studies focus on deterministic prediction <cit.>, i.e., predicting a single value of either future PV power output or solar irradiance. Limited studies have investigated probabilistic solar power forecasting, i.e., generating a range prediction covering the uncertainty of future power generation, which is more valuable for grid risk management. Existing probabilistic solar forecasting efforts include, for example, using bootstrap sampling for training multiple artificial neural network models <cit.>, training neural networks to predict the lower and upper bounds of PV power to generate prediction intervals <cit.>, natural gradient boosting methods with post-hoc calibration <cit.>, and discretizing the target space into bins (e.g., discretizing irradiance with N equally spaced bins from 0 to 1300 W/m^2) and predicting the probability of future values falling in each bin <cit.>. However, very little work has explored estimating the uncertainty in power generation based on the realizations of different possible future sky conditions, due to the challenges in capturing the stochastic cloud dynamics.
Recent improvements in generative artificial intelligence, specifically the advances in image and video synthesis, has provided opportunities for tackling these challenges. Future sky image frames can be generated given a set of past sky image frames as input, based on the underlying cloud motion patterns learned by the video prediction model during training. By leveraging the recent advances in video prediction, we propose in this study a two-stage deep learning-based probabilistic solar forecasting system. This forecasting system uses a physics-constrained stochastic sky video predictor that is capable of generating a range of possible future sky videos from the same historical sky image sequence, followed by a CNN-based PV output predictor trained to generate a range of predictions of the future PV output based on the predicted future images. Our contributions are summarized as follows:
* We developed a specialized stochastic video prediction framework that can capture the physical dynamics in the context of sky videos and generate visually plausible yet remarkably diversified videos of the future sky.
* We qualitatively and quantitatively examined and compared different benchmark deep video prediction models for generating future sky videos.
* We applied the predicted frames for probabilistic solar forecasting tasks and showed the promise of using these synthesized images for uncertainty estimation in PV output prediction.
The rest of this paper is organized as follows: Section <ref> provides an overview of deep video prediction models.
Section <ref> presents the proposed methodology, including the model architectures, training details, a baseline model for comparison, and evaluation metrics. Section <ref> describes the dataset used for the experiments, and Section <ref> analyzes the results for both video prediction and probabilistic solar forecasting. Section <ref> discusses some limitations of this study and provides directions for future research. Finally, we summarize the findings of this study in Section <ref>.
§ OVERVIEW OF VIDEO PREDICTION MODELS
A video stream is made up of individual frames, each one representing a time slice of the scene. The goal of video prediction is to generate plausible future frames given a set of historical frames. The dynamics from the history are captured and extrapolated into the future based on the underlying patterns learned from the training data. Deep networks such as CNNs, RNNs and generative models commonly serve as the backbone for video prediction models <cit.>. In recent years, there has been a trend of replacing RNNs with transformers <cit.> in model architectures for better handling of long-term dependencies in sequential data.
Although the existing video prediction models vary in architecture, they can be divided into two categories based on the prediction type, i.e., deterministic models and stochastic models. Deterministic video prediction models generate only a single future given one set of historical inputs, while their stochastic counterparts can generate multiple possible futures from the same historical input. Deterministic video prediction models can extrapolate the frames in the immediate future with high precision, but they struggle when making long-term predictions in multi-modal natural videos, e.g., sky videos with stochastic cloud motion. To accommodate uncertainty, they tend to average out plausible future outcomes into a single blurry prediction <cit.>. This behavior is due to the pixel-wise losses used in model training, e.g., cross-entropy and mean-squared error. Nevertheless, the techniques developed in deterministic video prediction for capturing the scene dynamics can be applied in stochastic video prediction methods.
Early video prediction work focused on extending classical RNNs to more sophisticated recurrent models to deal with the long-term spatiotemporal dependencies of video sequences. Most of these methods are deterministic. For example, <cit.> incorporated convolution operation into the original LSTM module (so-called ConvLSTM) and extended the use of LSTM-based models to the image space. ConvLSTM was originally proposed for precipitation nowcasting using radar echo maps, but it has later become a building block for many benchmark video prediction models. For example, <cit.> improved the memory flows in ConvLSTM module to deal with both short and long-term dynamics in the spatiotemporal sequence data. <cit.> further decomposed the stationary and non-stationary properties in spatiotemporal dynamics to handle the higher-order non-stationarity. <cit.> proposed a two-branch deep architecture PhyDNet that disentangles the physics dynamics and the residual information (e.g., appearance, texture, details) of the video in the latent space. The physics dynamics were modeled by a physically-constrained recurrent cell called PhyCell, and the residual information was learned by ConvLSTM.
Recently, deep generative models (DGMs) have seen increasing popularity in video prediction, featured by two most commonly used and efficient approaches, namely, variational auto-encoders (VAEs) <cit.> and generative adversarial networks (GANs) <cit.>. DGMs are statistical models that aim at learning probability distributions approximating distributions of the input data. As DGMs are probabilistic models, new samples can be easily generated from the learned distributions. However, it should be noted that VAEs and GANs have their own strengths and weaknesses. VAEs, which learn probability distributions of the input data explicitly, can be used to generate diverse futures by sampling latent variables, while their predictions can be blurry as they still apply a pixel-wise MSE loss. GANs, on the other hand, can generate very realistic and sharp images via adversarial training, but without an explicit latent variable, they typically work with deterministic models or incorporate stochasticity only through input noise which has limited capability of generating diverse futures <cit.>.
<cit.> used GANs for precipitation nowcasting by simulating many samples from the conditional distribution of future radar given historical radar and generating a collection of forecasts. The stochasticity of their generations comes from injecting Gaussian noise into the input radar data. <cit.> combined the strengths of GANs and VAEs and built a Stochastic Adversarial Video Prediction (SAVP) model that managed to generate future frames with both fidelity and diversity. Similarly, a model named VideoGPT was proposed by <cit.> to address the stochasticity and realistic future image generation issues in video prediction. The model is featured by a Vector Quantized-VAE (VQ-VAE) that learns a discretized latent from the input videos and a transformer that autoregressively model the discrete latents. VideoGPT achieved promising performance on various video prediction benchmark datasets, including Moving MNIST <cit.>, BAIR Robot Pushing <cit.> and UCF-101 <cit.>, and it demonstrated the capability of generating highly diverse futures even just given one historical frame as input.
Limited studies have applied video prediction models to cloud motion prediction. Most of the existing studies are deterministic, without accounting for the stochasticity of cloud motion, and very few of them have examined using the prediction results for solar forecasting. <cit.> developed a deep convolutional GAN (DCGAN) architecture for future sky image forecasting and further used the predicted images for cloud coverage estimation. It showed the effectiveness of using GANs to generate realistic and sharp future sky images. However, minimal gains were observed using the sharp images generated by GANs for cloud coverage estimation compared with the blurry images generated by the same generator architecture without adversarial training. <cit.> used a modified PhyDNet architecture for extracting spatiotemporal features from historical sky image sequences to forecast sky images and solar irradiance at the same time for up to 5 minutes ahead. It was claimed to outperform strong baselines in multiple performance metrics, including mean squared error, mean absolute error, and structural similarity index. Ground-based sky images taken by fish-eye cameras or total sky imagers have distortions, particularly at the horizon, due to their wide field-of-view. <cit.> examined the effectiveness of spatial warping for forecasting sky images and found that the warped sky images can facilitate longer forecasting of cloud evolution.
§ METHODOLOGY
In this study, we focus on short-term probabilistic solar forecasting, with a specific aim of generating range predictions of PV power output 15 minutes ahead into the future. We propose a two-stage deep learning-based probabilistic forecasting framework to address the existing challenges in solar forecasting, i.e., poor modeling of cloud dynamics and the lack of uncertainty quantification in PV output prediction.
The proposed framework is made up of a stochastic sky video predictor, which is capable of generating visually plausible and diversified future sky videos that captures the dynamics from the historical sky image sequence, followed by a PV output predictor, which maps the synthetic future sky images to concurrent PV output for generating range predictions of the future PV output. A schematic of the proposed forecasting system is shown in Figure <ref>. It should be noted that only the last predicted sky image frames are utilized for PV output prediction and the proposed system does not take in the historical PV output as input, preventing the model from overfitting to this signal.
§.§ Stochastic Sky Video Prediction
The sky video prediction problem is formulated as a sequence-to-sequence forecasting problem. A video prediction model is developed to predict future sky images up to 15 minutes ahead, from time t+1 to t+15 with a 2-minute sampling interval (i.e., samples generated for t+1, t+3, t+5,...). These are generated given a historical image sequence from the past 15 minutes, from t-15 to t-1 with the same sampling interval. For stochastic prediction, the model is able to generate multiple possible futures conditioned on the same historical inputs via sampling different latent sequences from the prior. More details can be found in the Model Architecture below.
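As a concrete illustration of this windowing (a hypothetical sketch: the function name, the dictionary-style image archive and the 64×64 frame size are our assumptions rather than the authors' code):
import numpy as np

def build_sample(images_by_minute, t):
    """images_by_minute: dict mapping a minute index to a (64, 64, 3) sky image array."""
    context_idx = range(t - 15, t, 2)      # t-15, t-13, ..., t-1  -> 8 historical frames
    target_idx = range(t + 1, t + 16, 2)   # t+1,  t+3,  ..., t+15 -> 8 future frames
    context = np.stack([images_by_minute[i] for i in context_idx])
    target = np.stack([images_by_minute[i] for i in target_idx])
    return context, target                 # arrays of shape (8, 64, 64, 3) each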
Model Architecture
We name our proposed stochastic sky video prediction model SkyGPT, which is inspired by two emerging video prediction models VideoGPT <cit.> and PhyDNet <cit.>. VideoGPT is a stochastic video prediction model that is capable of generating realistic samples competitive with state-of-the-art GAN models and also shows remarkable performance in generating divergent images from the same inputs. In addition, the use of a transformer architecture enables it to effectively model long-term spatiotemporal dependencies. However, VideoGPT has not been examined for cloud motion prediction, with challenges arising from the volatility and stochasticity of clouds. PhyDNet is a deterministic RNN-based architecture that incorporates physical knowledge, represented by linear partial differential equations (PDEs), for modeling the physics dynamics in video. PhyDNet has been applied for cloud motion prediction by <cit.> and demonstrates effectiveness in capturing the general trend of cloud motion, but the generated images are quite blurry even for 5-minute-ahead forecast and without any stochasticity given the deterministic property of the model. The combination of the above two architectures could be complementary and potentially provide benefits for the 15-minute-ahead cloud motion forecast problem that we are trying to tackle in this study.
The SkyGPT follows the general structure of VideoGPT, which consists of two main parts, a vector quantized variational auto-encoder (VQ-VAE) <cit.> and an image transformer <cit.>. The VQ-VAE encompasses an encoder-decoder architecture similar to classical VAEs, but it learns a discrete latent representation of input data instead of a continuous one. The encoder part (E) consists of a series of 3D convolutions that downsample over space-time followed by attention residual blocks <cit.>, to compress high dimensional input video data (x) into latent vectors (h). The latent vectors are then discretized by performing a nearest neighbors lookup in a codebook of embeddings C = {e_i| 1≤ i≤ K}, where K is an adjustable parameter representing the size of the codebook. These embeddings are initialized randomly and can be learned during the training of the model. The decoder part (D) has a reverse architectural design as the encoder, with attention residual blocks followed by a series of 3D transposed convolutions that upsample over space-time to reconstruct the input videos from the quantized embeddings. The image transformer, as a prior network, is used to model the latent tokens in an auto-regressive fashion, where new predictions are made by feeding back the predictions from previous steps. The auto-regressive modeling is performed in the downsampled latent space rather than the raw image pixel space to avoid spatiotemporal redundancies in high dimensional imagery data <cit.>. The generated latents from the transformer are then decoded to videos of the original resolution using the decoder of the VQ-VAE. In our case, future sky image generation is conditioned on historical sky image frames. Thus, a conditional prior network is trained by first feeding the conditional frames from time t-15 to t-1 into a 3D ResNet and then performing cross-attention on the ResNet output representation during network training as <cit.> did for VideoGPT. With the learned conditional prior, diversified futures can be sampled from the latent conditioned on the historical frames during inference.
To enhance the modeling of cloud motion, we incorporate prior physical knowledge into the transformer by adapting a PDE-constrained module called PhyCell from PhyDNet <cit.> for latent modeling. We call this entire architecture a Phy-transformer (short for physics-informed transformer) to distinguish it from the transformer component within the architecture. Specifically, the latent modeling is disentangled into two branches here, i.e., the modeling of physics dynamics fulfilled by the PhyCell and the modeling of fine-grained details fulfilled by the transformer. The transformer works at the patch level while the PhyCell works at the image embedding level, and the predicted embeddings from the two branches are finally combined and decoded into the predicted frames of the original resolution. PhyCell implements a two-step scheme: a physical prediction with convolutions for approximating the spatial derivatives of a generic class of linear PDEs, which represent a wide range of classical physical models, e.g., the heat equation, the wave equation, and the advection-diffusion equation, followed by an input assimilation as a correction of latent physical dynamics driven by observed data. An illustration of the model can be found in Figure <ref>.
Objective Functions
The VQ-VAE and Phy-transformer are trained separately. First, the VQ-VAE is trained with the following objective function <cit.>:
ℒ_VQ-VAE = ℒ_recon+ ℒ_codebook+ βℒ_commit
with
ℒ_recon = x-D(e)_2^2
ℒ_codebook = sg[E(x)]-e_2^2
ℒ_commit = sg[e]-E(x)_2^2
Where sg refers to a stop-gradient. The objective consists of a reconstruction loss ℒ_recon, a codebook loss ℒ_codebook, and a commitment loss ℒ_commit. The reconstruction loss encourages the VQ-VAE to learn good representations to accurately reconstruct data samples. The codebook loss brings codebook embeddings closer to their corresponding encoder outputs, and the commitment loss is weighted by a hyperparameter β and prevents the encoder outputs from fluctuating between different code vectors.
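The three terms can be written compactly in PyTorch with the usual straight-through trick; the sketch below is ours (encoder, decoder and codebook are placeholder modules, and the commitment weight β = 0.25 is only a common default, not necessarily the value used for SkyGPT):
import torch
import torch.nn.functional as F

def vq_vae_losses(x, encoder, decoder, codebook, beta=0.25):
    z_e = encoder(x)                                         # continuous latents E(x), shape (..., D)
    flat = z_e.reshape(-1, z_e.shape[-1])
    idx = torch.cdist(flat, codebook.weight).argmin(dim=1)   # nearest codebook embedding per latent vector
    z_q = codebook(idx).reshape(z_e.shape)

    recon = decoder(z_e + (z_q - z_e).detach())              # straight-through gradient to the encoder
    loss_recon = F.mse_loss(recon, x)
    loss_codebook = F.mse_loss(z_q, z_e.detach())            # ||sg[E(x)] - e||_2^2
    loss_commit = F.mse_loss(z_e, z_q.detach())              # ||sg[e] - E(x)||_2^2
    return loss_recon + loss_codebook + beta * loss_commit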
The objective function for training the Phy-transformer contains two parts, the moment loss ℒ_moment from the PhyDNet PhyCell, which enforces the convolution operations to approximate the spatial derivatives of linear PDEs <cit.> and a cross-entropy loss, which evaluates the ability of the transformer to predict the next patch encoding.
ℒ_Phy-transformer = ℒ_moment+ ℒ_cross-entropy
with
ℒ_moment = ∑_i ≤ k∑_j ≤ k‖ M(w^k_p,i,j) - Δ^k_i,j‖_F
ℒ_cross-entropy = -∑_i=1^N∑_j=1^C y_i,jlogP_i,j
where k is the size of the convolutional filter, w_p are the parameters of the PhyCell, M(w^k_p,i,j) is the moment matrix, ‖·‖_F stands for the Frobenius norm, and Δ^k_i,j is the target moment matrix. For the cross-entropy loss, N is the number of samples (tokens), C is the number of classes, y_i,j is the one-hot ground-truth class label of sample i, and P_i,j represents the predicted probability of sample i belonging to class j.
Training Details
We used a similar training setup to <cit.> for VideoGPT. Our innovation is incorporating PhyCell into the transformer and disentangling latent tokens modeling into two branches, i.e., the modeling of physics dynamics and the modeling of fine-grain details. To deal with the potential information leakage from the PhyCell to the transformer, the current transformer token encodings only get added to the corresponding PhyCell encoding of the previous time-step. Note that otherwise, the transformer would have access to the PhyCell encoding of the image which it is generating. VQ-VAE is first trained on the entire image sequences from time t-15 to t+15 with a 2-minute interval (i.e., 8 historical frames and 8 future frames). We then use the trained VQ-VAE to encode video data to latent sequences as training data for the Phy-transformer, along with the encodings of the conditional frames obtained by passing through a 3D ResNet for cross-attention during the training. Both components of SkyGPT, i.e., VQ-VAE and Phy-transformer, are implemented using the deep learning framework PyTorch 1.8.1 and trained on a GPU cluster with an Nvidia A100 (40GB memory) card.
§.§ PV Output Prediction
The PV power output prediction task is the second task in our sequential process. It learns a mapping from the sky image to concurrent PV power output. Such a mapping can be trained on historical real-world images and then applied to our generated future sky images. An analogy one can think of is the computer vision task of estimating the age of people based on their facial images <cit.>.
Model Architecture
The PV output predictor is based on U-Net <cit.>, which has an encoder-bottleneck-decoder architecture and is commonly used in various image segmentation tasks. For the PV output prediction task, a few modifications were made to the architecture of U-Net, including (1) changing the output of the original U-Net to generate a regression result instead of a segmentation map, (2) using residual block for the bottleneck part instead of the classical Convolution-BatchNorm-ReLU structure to ease the network training, (3) pruning the architecture by reducing the number of convolution layers. The architecture designs of the modified U-Net are shown in Figure <ref>. The encoder part is composed of a series of 2D convolutions to compress the high-dimension input image data into a low-dimension latent. The latents then pass through the residual blocks and get upsampled and convolved to the same resolution as the input via the decoder, which consists of a series of up-sampling followed by 2D convolutions. However, instead of reconstructing the input, a feature map with the same height and width as the input image but with only one channel is produced and regressed to a single PV output value. This feature map can be viewed as a map of PV output values associated with each pixel and the PV output prediction equals the weighted sum over them. Not only did we find this improved accuracy, but it also makes the model substantially more interpretable. Further, skip connections have been added to pass features from the encoder to the decoder in order to recover spatial information lost during downsampling.
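A hypothetical Keras sketch of the regression head described above (layer sizes are placeholders, and the plain sum over the single-channel map is our simplification of the “weighted sum” wording):
import tensorflow as tf

def regression_head(decoder_features):
    # decoder_features: (batch, 64, 64, C) tensor produced by the U-Net decoder
    pixel_map = tf.keras.layers.Conv2D(1, kernel_size=1)(decoder_features)   # per-pixel PV contribution map
    pv_output = tf.keras.layers.Lambda(lambda m: tf.reduce_sum(m, axis=[1, 2, 3]))(pixel_map)
    return pv_output                                                         # (batch,) predicted PV output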
Objective Function
The modified U-Net is trained deterministically by minimizing mean squared error (MSE) between the ground-truth PV output and the predictions:
MSE = 1/N∑_i=1^N(𝒫̂_i - 𝒫_i)^2
where N is the number of samples, 𝒫̂_i is the prediction generated by the model and 𝒫_i is the ground-truth measurement.
Training and Inference
The data used for training the modified U-Net are pairs of real sky images and concurrent PV output measurement. Adam <cit.> is used as the optimizer and a scheduled learning rate decay is applied, which follows the equation below:
lr = lr_0×γ^⌊epoch/10⌋
where lr_0 = 2× 10^-4 is the initial learning rate, γ=0.5 is a parameter that controls the rate of decay, epoch stands for the training epoch index, and ⌊ ⌋ is the floor function, which returns the greatest integer less than or equal to its argument. We trained the model with 10-fold cross-validation to avoid over-optimistic estimation of the model performance, so essentially 10 sub-models are obtained. The model was coded using the deep learning framework TensorFlow 2.4.1 and trained on a GPU cluster with an Nvidia A100 (40GB memory) card.
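In Keras, the stepwise decay above can be wired in with a learning-rate callback; a short sketch (the surrounding training loop is not shown in the text, so the fit call is indicative only):
import tensorflow as tf

lr0, gamma = 2e-4, 0.5

def schedule(epoch, lr):
    return lr0 * gamma ** (epoch // 10)          # lr = lr0 * gamma^floor(epoch / 10)

lr_callback = tf.keras.callbacks.LearningRateScheduler(schedule)
# model.fit(x_train, y_train, epochs=..., callbacks=[lr_callback])   # indicative usage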
The model is trained deterministically. In order to generate a range prediction of PV output 15 minutes ahead during the inference phase, a collection of possible futures at time t+15 generated by the stochastic video prediction model SkyGPT are fed to the modified U-Net model. The number of futures to be generated (N_f) is a hyper-parameter, and we conducted experiments to find a balance between model performance and computational cost (see Section <ref>). Besides, each one of the sub-models from 10-fold cross-validation can generate a PV output prediction; hence, ten predictions can be obtained per each input image. For our proposed probabilistic solar forecasting framework, with the N_f generated realizations of the future sky, a total of 10× N_f predictions can be generated to form the PV output prediction range for a given forecasting time.
§.§ Baseline Solar Forecasting Framework
For comparison with the proposed framework, the baseline forecasting framework considered is the SUNSET model developed by <cit.>. The SUNSET model [SUNSET is open-sourced, and the code base can be accessed <https://github.com/YuchiSun/SUNSET> for a TensorFlow 1.X version or <https://github.com/yuhao-nie/Stanford-solar-forecasting-dataset> for a TensorFlow 2.x version.] is a CNN-based model that takes in a hybrid input of sky image sequence and concurrent PV output measurement to predict the future PV power generation. The architecture of SUNSET forecast model is shown in Figure <ref>. The SUNSET is trained using the same setup described in <cit.>. It should be noted that SUNSET is a deterministic prediction model. For consistent comparison with the proposed probabilistic forecasting framework, we use 10-fold cross-validation to train 10 sub-models so that a range prediction can be generated.
§.§ Performance Evaluation
Video Prediction The predicted image frames are evaluated both qualitatively based on human perception and quantitatively via commonly used evaluation metrics for benchmarking video prediction models, including mean squared error (MSE), mean absolute error (MAE) and VGG cosine similarity (VGG CS) <cit.>. For definitions of these metrics, here we define ℐ∈ℝ^H× W× C as the real image, ℐ̂∈ℝ^H× W× C as the predicted image, where H, W, C represents the height, width and number of channels of the images, respectively.
MSE = 1/N∑_k^N∑_i^H× W× C (ℐ_i^k-ℐ̂_i^k)^2
MAE = 1/N∑_k^N∑_i^H× W× C |ℐ_i^k-ℐ̂_i^k|
VGG CS = 1/N∑_k^N(v_k·v̂_k/||v_k||||v̂_k||)
where v_k=VGG(ℐ_k), v̂_k=VGG(ℐ̂_k), and VGG stands for the VGG16 network <cit.> used to extract features from the input images. We used the VGG16 model pre-trained on ImageNet from the TensorFlow Keras API. The feature vectors used here are drawn from the output of the second-to-last max pooling layer of the VGG. It should be noted that the VGG16 network requires an input size of 224× 224; we therefore resized our input images from 64× 64 to 224× 224 using the Python OpenCV library for evaluating VGG CS.
Although MSE and MAE are widely used metrics for video prediction evaluation, they are not necessarily indicative of prediction quality. For example, models can produce blurry predictions in order to minimize MSE as the loss function. These blurry predictions might not look good to human perception, but they can have good scores in terms of MSE or MAE. VGG CS, to this end, is reported to align better with human perception <cit.>. We compute and compare these metrics based on samples from the validation set (see Section <ref> for details of the experimental dataset). These quantitative metrics are used for evaluating deterministic prediction performance. For stochastic video prediction models, we generate 10 possible futures for each video sample and identify the “best” sample based on VGG CS to the ground-truth video for computing these metrics.
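A hedged sketch of how the VGG CS computation above can be reproduced ('block4_pool' as the second-to-last max pooling layer is our reading of the Keras VGG16 layer naming):
import cv2
import numpy as np
import tensorflow as tf

vgg = tf.keras.applications.VGG16(weights="imagenet", include_top=False)
feat_model = tf.keras.Model(vgg.input, vgg.get_layer("block4_pool").output)

def vgg_cosine_similarity(img_true, img_pred):
    """img_true, img_pred: (64, 64, 3) uint8 sky images."""
    batch = np.stack([cv2.resize(img_true, (224, 224)), cv2.resize(img_pred, (224, 224))])
    batch = tf.keras.applications.vgg16.preprocess_input(batch.astype("float32"))
    v = feat_model(batch).numpy().reshape(2, -1)
    return float(np.dot(v[0], v[1]) / (np.linalg.norm(v[0]) * np.linalg.norm(v[1])))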
PV Output Prediction Reliability and sharpness are two important properties of probabilistic forecasts. Reliability indicates how similar the distribution of the forecast is to that of the observation and sharpness refers to the concentration of the predictive distribution. A good probabilistic prediction thus should have both reliability to cover the observation and sharpness to be informative. In this study, we evaluate the PV output prediction performance based on two probabilistic forecasting metrics — continuous ranked probability score (CRPS) <cit.> and Winkler score (WS) <cit.>. Both CRPS and WS allows for simultaneous assessment of reliability and sharpness. The CRPS measures the difference between the cumulative distribution function of observation (F_target) and model prediction (F_model) as shown in Equation <ref>, and the lower the CRPS, the better the prediction is.
CRPS = 1/N∑_k=1^N∫_-∞^+∞[F_target^k(x)-F^k_model(x)]^2dx
F_target is a form of the Heaviside step function, which jumps from 0 to 1 at the value of the measured PV output. F_model is the cumulative distribution of the PV output predictions generated by the model. The CRPS is computed as an average over a set of N predictions. An advantage of the CRPS is that it has the same dimension as the prediction target (kW for the PV output in this study) and that it reduces to the absolute error if the forecast is deterministic, which allows for comparison between probabilistic and point forecasts <cit.>.
WS is defined in Equation <ref> with a nominal confidence level (1-α)%:
WS_k =
    δ,                      if L_k ≤ x_k ≤ U_k
    δ + 2(L_k - x_k)/α,     if x_k < L_k
    δ + 2(x_k - U_k)/α,     if x_k > U_k
where δ=U_k-L_k, with L_k and U_k representing the lower and upper bounds of the prediction interval, respectively. In this study, we use the 95th and 5th percentiles of the predictions for U_k and L_k, respectively, resulting in a nominal confidence level of 90% (i.e., α=0.1). The WS increases when the observation (x_k) lies outside the prediction interval, and a wide prediction interval is also penalized even if it covers the observation; therefore, a lower WS represents a better probabilistic forecast. To assess the overall performance, the average WS is calculated over a set of N predictions:
WS = 1/N∑_k^N WS_k
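For concreteness, a hedged sketch of how these two scores could be computed from an ensemble of PV predictions is given below; `ens` holds S ensemble members per sample (shape (N, S)) and `obs` the measured PV output (shape (N,)), with α = 0.1 giving the 90% nominal confidence level used here. The empirical-ensemble form of the CRPS is a standard equivalent of the integral definition above, not the authors' exact implementation.

```python
import numpy as np

def crps_ensemble(obs, ens):
    # Empirical form of the CRPS: E|X - x| - 0.5 * E|X - X'| per sample, averaged over N.
    term1 = np.mean(np.abs(ens - obs[:, None]), axis=1)
    term2 = 0.5 * np.mean(np.abs(ens[:, :, None] - ens[:, None, :]), axis=(1, 2))
    return float(np.mean(term1 - term2))

def winkler_score(obs, ens, alpha=0.1):
    low = np.percentile(ens, 100 * alpha / 2, axis=1)        # 5th percentile -> L_k
    up = np.percentile(ens, 100 * (1 - alpha / 2), axis=1)   # 95th percentile -> U_k
    delta = up - low
    ws = np.where(obs < low, delta + 2 * (low - obs) / alpha,
                  np.where(obs > up, delta + 2 * (obs - up) / alpha, delta))
    return float(np.mean(ws))
```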
The proposed model is also compared with other benchmark models. A widely established way of assessing different models is to measure their performance relative to a baseline model. The resulting indicator is called the forecast skill (FS). FS quantifies how much better or worse the error of a model is compared with that of a reference model:
FS = (1-Error_forecast/Error_ref)× 100%
The most commonly used reference model in solar forecasting is the smart persistence model (SPM), which assumes that the relative output, measured as the ratio of the actual PV output to the theoretical PV output under clear sky conditions, stays constant from time t to (t+T):
k_clr = 𝒫(t+T)/𝒫_clr(t+T) = 𝒫(t)/𝒫_clr(t)
where k_clr represents the relative output, formally known as the clear sky index, 𝒫 is the actual PV output, and 𝒫_clr is the theoretical PV output. At any given time stamp, 𝒫_clr can be estimated by a clear sky model based on sun angles and PV panel orientations <cit.>:
P_clr(t)=P_mA_e{cosϵcosχ(t) + sinϵsinχ(t)cos[ξ(t)-ζ]}
where P_m is the maximum solar irradiance, 1000 W/m^2; A_e is the effective PV panel area, 24.98 m^2, obtained from a least-squares fit to the real panel output of 12 clear sky days (details can be found in the study by <cit.>); ϵ and ζ are the elevation and azimuth angles of the solar PV arrays, 22.5^∘ and 195^∘, respectively; and χ(t) and ξ(t) are the zenith and azimuth angles of the sun, which can be estimated for any minute of the year from the empirical functions provided in the textbook by <cit.>.
Based on Equation <ref>, T-minute-ahead PV output can be estimated by SPM:
𝒫̂(t+T)=𝒫(t)/𝒫_clr(t)×𝒫_clr(t+T)
The error metric we use for calculating FS is the CRPS. As the SPM can only generate one prediction at a time, its CRPS reduces to the mean absolute error (MAE) <cit.>:
MAE = 1/N∑_i=1^N|𝒫̂_i - 𝒫_i|
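The smart persistence baseline can be summarized in a few lines of code; the sketch below assumes the panel geometry reported above (ϵ = 22.5^∘, ζ = 195^∘, A_e = 24.98 m^2, P_m = 1000 W/m^2) and that the solar zenith and azimuth angles χ and ξ are supplied by an external solar-position routine. It is an illustrative reading of the equations above, not the authors' code.

```python
import numpy as np

P_M = 1.0                   # kW/m^2 (1000 W/m^2), maximum solar irradiance
A_E = 24.98                 # m^2, effective PV panel area
EPS = np.radians(22.5)      # panel elevation angle epsilon
ZETA = np.radians(195.0)    # panel azimuth angle zeta

def clear_sky_pv(chi, xi):
    """Theoretical clear-sky PV output (kW) given sun zenith chi and azimuth xi (radians)."""
    return P_M * A_E * (np.cos(EPS) * np.cos(chi)
                        + np.sin(EPS) * np.sin(chi) * np.cos(xi - ZETA))

def smart_persistence(pv_t, chi_t, xi_t, chi_tT, xi_tT):
    """T-minute-ahead SPM forecast: keep the clear sky index k_clr constant."""
    k_clr = pv_t / max(clear_sky_pv(chi_t, xi_t), 1e-6)   # guard against division by ~0
    return k_clr * clear_sky_pv(chi_tT, xi_tT)
```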
§ DATASET
Overview We leverage an in-house dataset[This study was conducted before the official release of our curated dataset SKIPP'D <cit.>, which is more organized and has a number of updates from the dataset we used here. We encourage the readers to examine the SKIPP'D dataset <https://github.com/yuhao-nie/Stanford-solar-forecasting-dataset>.] (𝒟) with 334,038 aligned pairs of sky images (ℐ) and PV power generation (𝒫) records, 𝒟 = {(ℐ_i, 𝒫_i) | i∈ℤ: 1≤ i≤ 334,038}, for the experiments in this study. The sky image frames are extracted with a 1-minute frequency from video footage recorded by a ground-based fish-eye camera (Hikvision DS-2CD6362F-IV) installed on the roof of the Green Earth Sciences Building at Stanford University. The images are down-scaled from 2048×2048 to 64×64 pixels to save model training time. The PV power generation data are collected from a 30-kW rooftop PV system ∼125 meters away from the camera, with an elevation angle of 22.5^∘ and an azimuth angle of 195^∘[The azimuth angle is measured clockwise between the North and the PV panel orientation.]. PV data are minutely averaged and paired with the image data according to the time stamps. The collection period of the dataset is from March 2017 to December 2019, with some disruptions because of water intrusion, wiring and/or electrical failures of the camera, as well as a daylight-saving adjustment.
Data Processing As this study aims to address the challenges of solar forecasting on cloudy conditions, we use the cloudy samples from 𝒟 for model development and test. We focus on cloudy conditions because they correspond to the times when PV prediction is nontrivial – predicting clear sky PV output is a well-understood problem <cit.>. To filter out the clear sky samples, we follow the algorithm developed by <cit.>, which detects cloud pixels in sky images based on a modified normalized red blue ratio method. This screening step results in a cloudy sample subset (𝒟_cloudy) consisting of 132,305 samples from 𝒟.
For the different forecasting tasks described in Section <ref>, i.e., cloud motion prediction and PV output prediction, different model input and output configurations are used, thus requiring a different organization of the samples. A common processing step is carried out to form an interim dataset (𝒟_interim), which is then sampled to obtain the dataset for each specific forecasting task. To obtain 𝒟_interim, we loop through the time stamps in 𝒟_cloudy with a step size of 2 minutes (a so-called 2-minute sampling frequency by <cit.>) to check whether the future (and historical) PV output and sky image records, from the next (past) minute up to 15 minutes ahead (back), are available at 1-minute resolution. The sampling frequency is chosen to be 2 minutes because a higher frequency can lead to a longer model training time with limited improvement in model accuracy <cit.>. Any sample that does not satisfy the above conditions is filtered out. After this processing step, 60,385 valid samples are obtained, 𝒟_interim={(ℐ_t-15:1:t+15^i, 𝒫_t-15:1:t+15^i) | i∈ℤ: 1≤ i≤60,385}, with each sample containing a sequence of 31 sky images ({ℐ_t-15,ℐ_t-14,...,ℐ_t, ℐ_t+1, ..., ℐ_t+15}) and PV output measurements ({𝒫_t-15,𝒫_t-14,...,𝒫_t, 𝒫_t+1, ..., 𝒫_t+15}).
To form the dataset for the video prediction task (𝒟_vp), only the image data from 𝒟_interim are used. For each image sample, we apply a 2-minute interval to save model training time, as the video prediction task is computationally expensive. This results in 𝒟_vp={(ℐ_t-15:2:t-1^i, ℐ_t+1:2:t+15^i) | i∈ℤ: 1≤ i≤60,385}, where the first 8 images (from t-15 to t-1) can be used as model input while the remaining 8 images (from t+1 to t+15) can serve as the prediction target. For the PV output prediction task, we have two different models: our proposed modified U-Net and the baseline model SUNSET. For the U-Net model, we take both the image and the PV output data at time t for each sample of 𝒟_interim, forming the U-Net dataset 𝒟_unet={(ℐ_t^i, 𝒫_t^i) | i∈ℤ: 1≤ i≤60,385}, where the image ℐ_t^i serves as the model input and the PV output 𝒫_t^i serves as the model output. For the SUNSET model, we take the images and PV output records from time t-15 to time t with 1-minute resolution, and the PV output at time t+15, for each sample of 𝒟_interim, forming the SUNSET dataset 𝒟_sunset={[(ℐ_t-15:1:t^i, 𝒫_t-15:1:t^i), 𝒫_t+15^i] | i∈ℤ: 1≤ i≤60,385}. The image and PV output sequence (ℐ_t-15:1:t^i, 𝒫_t-15:1:t^i) serves as the model input and the PV output 𝒫_t+15^i serves as the model output.
The sky images used in this study can be segmented into two parts, namely, the area within the circle showing the view of the camera (referred to as the foreground) and the black area outside the circle (referred to as the background). Although the background looks black, its pixel values are not necessarily all 0s but can take small values. Our initial experiments found that these pixels contain some information useful for the PV output forecasting task; we hypothesize that this is caused by some form of backscattering within the camera sensor, although this needs to be validated in future work. However, since we want our models to focus on the foreground to make predictions, we mask out the background by applying, pixel-wise, a binary mask with 0s for background pixels and 1s for foreground pixels to the original images. The binary mask is applied in the following two cases: (1) for evaluating the performance of the video prediction models, the images generated by these models are masked (note that the mask is not applied to the images used as input for training the video prediction models); and (2) for PV output prediction model training, validation and testing, either real images or images generated by the video prediction models are masked.
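A minimal sketch of this background masking step is shown below, assuming 64×64 frames with a centered circular field of view; the exact radius of the foreground circle is our assumption.

```python
import numpy as np

def foreground_mask(h=64, w=64, radius=31):
    # 1 inside the camera's circular view (foreground), 0 outside (background).
    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.sqrt((yy - (h - 1) / 2) ** 2 + (xx - (w - 1) / 2) ** 2)
    return (dist <= radius).astype(np.float32)[..., None]     # shape (H, W, 1)

def mask_background(images, mask):
    # images: (N, H, W, C); zero out pixels outside the field of view, pixel-wise.
    return images * mask
```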
Data Partitioning The processed samples are then split into training,
validation
and testing set.
We manually select 10 cloudy days from 2017 March to 2019 October based on their PV output profile for the validation set, taking into account the seasonal and annual variations. The rest of the data from the same period (i.e., 2017 March to 2019 October) goes to the training set. To test the model generalizability, we select 5 cloudy days from 2019 November to December to form the testing set, which is outside the time window of training and validation data. Samples from the validation and testing sets are not included in the model training process to prevent over-optimistic model estimation. The above split results in 53,336 (88%), 4,467(7%) and 2,582 (4%) samples, for the training, validation and testing set, respectively, for the forecast tasks. The video prediction model is only trained and validated without evaluating on the test set. The PV output prediction model uses all of the three datasets.
§ RESULTS
§.§ Sky Video Prediction
Although video prediction is not the ultimate goal of this study, understanding how well the predicted images align with the ground truth helps facilitate the analysis of the errors of the solar forecasting system. As a comparison, a few open-source benchmark video prediction models were trained on our dataset, including ConvLSTM <cit.>, PhyDNet <cit.> and VideoGPT <cit.>. ConvLSTM and PhyDNet are deterministic prediction models, while VideoGPT is stochastic. GANs are a commonly used architecture for various generation tasks and are especially known for their power in generating very high-fidelity images. In this study, we also implemented adversarial training based on the PhyDNet architecture (referred to as PhyDNet+GAN), which is also deterministic.
The performance of the video prediction models is evaluated using the samples from the validation set, both qualitatively based on human perceptual judgments and quantitatively based on the performance metrics described in Section <ref>. Two aspects are considered for assessing the generated images, namely, realism and diversity. On the one hand, we want the predicted sky images to be as realistic as possible and close to the real sky images. On the other hand, given the uncertainty in cloud motion, we want the generated images to be reasonably diverse to cover different possible scenarios of the future sky. The generated samples by deterministic models are only evaluated for realism, while those by the stochastic models are evaluated for both realism and diversity.
Realism of the Prediction
Figure <ref> shows predictions from the proposed SkyGPT model as well as benchmark video prediction models based on two sets of historical inputs. These two examples illustrate two different cloud dynamics: (a) the sky changing from a partly cloudy to an overcast condition and (b) the sky changing from a partly cloudy to a clear sky condition. For the stochastic prediction models, i.e., VideoGPT and SkyGPT, ten future samplings were generated and two cases are shown here: the generation most similar to the real future images among the ten, as measured by VGG CS (referred to as Best VGG sim.), and the pixel-wise average of the ten generations (referred to as Avg. 10 samples).
For both examples, most video prediction models capture the general trend of cloud motion correctly, except that PhyDNet fails to capture the correct dynamics for example (a). Although all model predictions show flaws to different extents, VideoGPT and our proposed model SkyGPT seem to capture the cloud dynamics better than the other benchmark models. Note that the bright spot in example (a) moves from the bottom right corner to the middle, and only VideoGPT and SkyGPT capture this correctly; SkyGPT is slightly better than VideoGPT in terms of overall light and shading. Regarding general appearance, the deterministic models ConvLSTM and PhyDNet generate clear images for the immediate future, but beyond a few frames they start to produce blurry predictions. Training the same PhyDNet architecture within a GAN framework (PhyDNet+GAN) yields the clearest images among all models, even far into the future; however, the texture, light, and shading of its generations are not better than those of VideoGPT and SkyGPT. In comparison, the predicted frames from these two stochastic models look much clearer than those of ConvLSTM and PhyDNet, and are competitive with the predictions generated by PhyDNet+GAN. The images look less blurry in the far future because both models use a transformer for prediction, which has proven to work well for long-term sequence modeling.
Figure <ref> shows the quantitative evaluation results of the predicted images of all video prediction models based on three metrics, i.e., MSE, MAE and VGG CS. It should be noted that for the stochastic models VideoGPT and SkyGPT, the results shown here reflect the average of the best performance over the 10 samplings (i.e., the minimum MSE and MAE, and the maximum VGG CS). Generally, we observe a degradation of the predicted images over time, regardless of the metric. Although the benchmark models ConvLSTM and PhyDNet generate blurrier images, they tend to perform better in MSE and MAE relative to PhyDNet+GAN and SkyGPT, which generate images with higher fidelity. This is expected, as ConvLSTM and PhyDNet try to minimize the MSE/MAE of the images during training and produce blurry predictions as a way of accommodating the uncertainty in cloud motion. Similar results are observed by <cit.>. In terms of VGG CS, which is more aligned with human perception, the more realistic images generated by PhyDNet+GAN, VideoGPT, and SkyGPT achieve higher scores, especially further into the future. Note that for time steps greater than 5 minutes, these models can outperform ConvLSTM and PhyDNet.
Diversity of the Prediction
We first assess the diversity of the samples generated by the two stochastic video prediction models based on visual inspection. Specifically, we used the same historical frames as model input and generate ten different futures and check how different they are. Figure <ref> shows the ten samples generated by VideoGPT and the proposed model SkyGPT, respectively. It can be observed that frames generated for the immediate future could be similar, but the generations start to diverge from each other beyond a few frames. Note the similarity between different samplings for the first two future frames at t=+1 and t=+3 for SkyGPT and the diversity for the rest of the future frames. Similar findings can be observed for Sampling 5-10 generated by VideoGPT. Another point to be made here is that some cloud motion patterns might be generated more frequently than others by the models. According to the ground truth sample, the clouds likely move from the bottom left to the top right. Most of the generations from SkyGPT and VideoGPT successfully capture this dynamic although the details of the generations could be varied, e.g., the cloud coverage, the light and shading and texture of clouds. An exception is found for Sampling 5 generated by VideoGPT, which clearly has different dynamics, i.e., the clouds change moving direction after a few time steps. This is not impossible, as the wind direction could change, but this might be less likely based on the dynamics of the historical frames. In comparison, SkyGPT seems to consistently capture similar motion dynamics as the ground truth sample, probably due to the addition of the physics-constrained module PhyCell in its architecture.
The diversity of the generated samples is also evaluated quantitatively. We generated ten different futures for every sample in the validation set, computed the mean and standard deviation of the VGG CS of these ten future samplings relative to the ground truth, and averaged them over all samples in the validation set for each time step. The results are shown in Figure <ref>, where the dots stand for the mean VGG CS and the error bars represent one standard deviation. It should be highlighted that the standard deviation reflects how much the ten future samplings differ from each other, i.e., a larger value means the ten generated futures are more diverse. Both SkyGPT and VideoGPT show an increase of the standard deviation over time, indicating that the future frames gradually diverge from the immediate future to the far future. This finding is consistent with the results shown in Figure <ref>. A degradation of the mean VGG CS over time is also observed, similar to the trend in Figure <ref>, although the method of calculating the VGG CS is slightly different: in Figure <ref>, the VGG CS is calculated based on the maximum VGG CS of the ten different future generations (i.e., the one future that is closest to the ground truth in terms of VGG CS) for each sample and averaged over the whole validation set.
§.§ Probabilistic Solar Forecasting
As a second step, we apply the predicted sky images from the video prediction models for a probabilistic short-term solar forecasting task which aims at forecasting the 15-min-ahead power output of a 30 kW PV system (descriptions of the experiment data can be found in Section <ref>). The range of PV output prediction at time t+15 is derived from multiple point predictions that are generated by feeding each of the predicted images at time t+15 from the video prediction models to the modified U-Net model, which is trained on the real sky images and PV output pairs (see Section <ref>). For the deterministic video prediction models, i.e., ConvLSTM, PhyDNet and PhyDNet+GAN, although only one future is generated for each input historical sky image sequence, 10 PV output predictions are generated based on the ten U-Net sub-models from 10-fold cross-validation conducted during training. For the stochastic video prediction models, i.e., VideoGPT and SkyGPT, 10 possible futures are generated for each historical sky image sequence and fed to 10 U-Net sub-models, thus, 100 predictions in total are generated.
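The way the prediction range is assembled can be sketched as follows, assuming Keras-style U-Net sub-models with a `predict` method; for SkyGPT, 10 generated futures passed through 10 sub-models give 100 point predictions, from which percentiles are taken. Function and variable names are ours.

```python
import numpy as np

def probabilistic_pv_forecast(future_images, unet_submodels, percentiles=(5, 50, 95)):
    """future_images: list of predicted frames at t+15 (1 or 10 of them);
    unet_submodels: the 10 cross-validated U-Net sub-models."""
    preds = [m.predict(img[None, ...], verbose=0).ravel()[0]   # one point prediction (kW)
             for img in future_images
             for m in unet_submodels]
    return np.percentile(np.asarray(preds), percentiles)       # e.g. 5th/50th/95th percentiles
```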
Comparison of Different Forecasting Methods
To put the performance of PV output prediction based on the generated images from these video prediction models into perspective, the following three baseline solar forecasting methods are considered: (1) the SUNSET forecast model adapted from <cit.> as described in Section <ref>, which takes in a hybrid input of sky images and PV output history from the past 15 minutes; (2) the smart persistence model (see Equations <ref> to <ref> in Section <ref>), a commonly used reference model in the solar forecasting community, which assumes the relative power output (the ratio between the real power output and the power output under clear sky conditions) is preserved over the 15 min forecasting horizon; and (3) predictions generated by feeding the real future sky images at t+15 into the same U-Net model, a hypothetical case that demonstrates the forecasting system's performance if the video prediction component were 100% accurate and thus represents an upper bound on performance. It should be noted that the smart persistence model can only generate one point prediction at a given forecasting time stamp, while the SUNSET model and the U-Net model fed with real future sky images can generate 10 predictions each, due to the 10-fold cross-validation conducted during training.
The models are evaluated on the same validation and test set (see Section <ref>), and the probabilistic forecasting performance measured by CRPS, FS, and WS for all methods is presented in Table <ref>. The baseline SUNSET shows 13% and 10% FS relative to the smart persistence reference for validation and testing, respectively. Feeding predicted frames from the video prediction models to the PV output prediction model U-Net, we indeed see benefits in terms of both CRPS and WS. Compared with the smart persistence model, FS of 6% to 18% and 18% to 23% can be achieved for the validation and the testing set, respectively. Especially on the testing set, the studied two-stage forecasting methods show better generalization than SUNSET. Among the two-stage frameworks, the performance varies with the specific video prediction model utilized for generating future images. Methods using stochastic video prediction models generally outperform those with deterministic video prediction models in terms of both CRPS and WS. This is expected, as stochastic video prediction models can generate multiple possible futures, which are more likely to cover the real future sky conditions, as indicated by the significantly better WS. Our proposed method SkyGPT→U-Net consistently performs well for both the validation and the testing set. However, there is still a gap compared to the case where the U-Net is applied to the true future images, which suggests a need for further improvement of the video prediction model.
Figure <ref> shows the prediction curves of the different forecasting methods for 3 cloudy days in the test set, for the time period 10:00 to 15:00. The 3 days show increasing variation in PV output, as indicated by the spikes and drops of the curves. In the figure, we show two prediction intervals: the 5th to 95th percentile range in light blue shading (indicating we are 90% confident that the measurement will fall in this range), and the 25th to 75th percentile range in dark blue shading (indicating we are 50% confident that the measurement will fall in this range). The baseline SUNSET model has a hard time capturing the ramp events and shows lags in predicting the spikes and drops, especially on days with highly varied PV output (see 2019-12-23). Moreover, the prediction intervals of the SUNSET model tend to be consistently narrow, which is probably due to the fact that the model relies heavily on the 16 historical images and PV output records to make the predictions, so that the variations between the 10 sub-models are limited. In comparison, the U-Net frameworks that directly correlate future images with PV output generally show larger variations regardless of the video prediction model utilized, indicating that the 10 sub-models are more diverse than those of SUNSET. Deterministic video prediction coupled with the U-Net framework shows less satisfactory performance for days with high variations in PV output: the ramp events are neither successfully captured by the median prediction nor covered by the prediction intervals. For the stochastic video prediction models coupled with U-Net, better coverage of the ramp events is observed, and the median predictions tend to fall in the middle. Another observation is that when the PV output is less varied, the prediction intervals are narrow, while when there is a lot of variation in PV output, the prediction intervals tend to be wider. This is to be expected.
Error Analysis of the Proposed Forecasting System
The error of the proposed 2-stage forecasting framework can largely be attributed to two parts, i.e., the prediction of future sky images and the mapping from sky images to contemporaneous PV output. Figure <ref> compares the performance of the different solar forecasting methods at a high level, based on the CRPS of the validation and testing sets shown in Table <ref>. The percentages in the figure show the difference in normalized CRPS between different forecasting methods, indicating the improvement one can obtain going from one method to another. The normalized CRPS for each forecasting method is calculated based on the CRPS of the smart persistence model (i.e., CRPS/CRPS_sp). Generally, going from the naive smart persistence baseline to the deep-learning baseline SUNSET, the error can be reduced by 10%∼13%, and an additional 5%∼13% improvement can be obtained by using the best-performing 2-stage forecasting framework proposed in this study, i.e., VideoGPT→U-Net for the validation set and SkyGPT→U-Net for the test set. If the real future sky images at time t+15 are fed to the U-Net model, the performance can be further boosted by 13% to 23%, which shows the potential of improving the video prediction component of the 2-stage forecasting system. However, further reducing the error of the forecasting system relies on improving the PV output mapping model, which causes the major part of the error even under the assumption of a perfect video prediction. The residual error at point 4, ∼2.3 kW CRPS, represents the error remaining when going from a true future sky image to PV output, i.e., the error associated with taking an image at time t and producing the associated PV output at that time (the so-called “nowcast” problem).
Number of Possible Futures to be Generated
Increasing the number of future sky image samples generated by the stochastic model could potentially increase the chance that the real future sky condition is covered. However, generating more future sky images costs more computational effort. As a reference, on Stanford's high-performance computing cluster with access to a single Nvidia A100 GPU, it takes roughly 23 hours to generate 50 different future samplings for each of the 2,582 samples in the testing set using the trained SkyGPT (i.e., on average, it takes 32 seconds to generate 50 future samplings per sample). Therefore, a balance between the performance boost and the computational cost must be found. Figure <ref> shows CRPS and WS as a function of the number of samples generated by the stochastic video prediction models. It shows that the benefits of generating more future samples almost plateau at around 10 samples. Going from 1 to 10 generated samples, significant reductions in both CRPS and WS are found due to the increased coverage of the possible futures by the video prediction models. Beyond 10 samples, the improvement is less significant, as the newly generated images increasingly overlap with the previous ones. Therefore, generating 10 different future samplings from the stochastic video prediction models for downstream PV output prediction is a good option in terms of both performance and computation.
We further validated the above findings by visualizing the distribution of the images generated by SkyGPT at time t+15 for different numbers of future samplings. The distributions of high-dimensional images can be visualized in a 2-D space by using the first two principal components of the image feature vectors. During training, the U-Net learns to extract features from sky images for PV output prediction; hence, we can use the trained U-Net model as a feature extractor for sky images. Specifically, we take the output of the bottleneck part of the trained U-Net model and flatten it to obtain the feature vector of a sky image. The sky image feature vectors are standardized to zero mean and unit standard deviation, and the first two principal components are obtained by principal component analysis (PCA). Figure <ref> shows the distribution of the first two principal components of the feature vectors of all 2,582 sky image samples at time t+15 in the test set. These two components seem to correspond to the variability in cloud coverage and to the horizontal position of the sun in the sky, as similarly observed in a previous study <cit.>. To illustrate this point, Figure <ref> also shows a few labeled image samples drawn from the distribution. The left column shows labeled images 1A to 5A, and the right column shows labeled images 1B to 5B. It can be observed that images 1A to 5A show variability in cloud coverage, while images 1B to 5B show differences in horizontal sun position. With that, we then selected two different image samples from the test set and visualized the distributions of the corresponding generated images for different numbers of future samplings based on the first two PCA components in Figure <ref>. It shows that the distribution of the 10 future samplings covers the different possibilities fairly well, which reinforces our hypothesis above that 10 is a good number of future samplings. It can also be observed that the distribution of the Image 1 samplings is relatively concentrated, while the distribution of the Image 2 samplings is dispersed. That is mainly due to the different levels of uncertainty in cloud motion: for cases with high uncertainty, the SkyGPT model tends to generate more diverse futures.
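A rough sketch of this visualization procedure is given below; the bottleneck layer name is hypothetical and depends on how the U-Net was built, and scikit-learn is used here for PCA merely as an illustration.

```python
import numpy as np
import tensorflow as tf
from sklearn.decomposition import PCA

def bottleneck_features(unet, images, layer_name="bottleneck"):
    # Use the trained U-Net as a feature extractor: take its bottleneck output and flatten it.
    extractor = tf.keras.Model(unet.input, unet.get_layer(layer_name).output)
    feats = extractor.predict(images, verbose=0)
    return feats.reshape(feats.shape[0], -1)

def first_two_pcs(features):
    # Standardize to zero mean / unit variance, then project onto the first two PCs.
    z = (features - features.mean(axis=0)) / (features.std(axis=0) + 1e-8)
    return PCA(n_components=2).fit_transform(z)
```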
§ LIMITATIONS AND FUTURE WORK
While our proposed probabilistic solar forecasting system shows promising performance for 15-minute-ahead PV output prediction, there are limitations that need to be addressed in future work to further improve the prediction reliability and sharpness. Although both the video prediction model and the PV output prediction model need improvement, the error analysis (see Figure <ref>) suggests that the main efforts should be focused on improving the PV output prediction model, i.e., mapping sky images to contemporaneous PV outputs, or the “nowcast” problem.
For prediction reliability, it can be noticed that even when we feed the real future sky images to the PV output prediction model, the prediction intervals can sometimes fail to cover the real PV output, especially in peak and valley regions of the PV output curve (see the prediction curve by "Real future SI→U-Net" on day 2019-12-23 in Figure <ref>). Here, we provide two potential research directions to ameliorate this:
* Increasing the size and diversity of the training data. In this study, we used only 53,336 samples for training the SkyGPT model for video prediction and the modified U-Net model for PV output prediction, which could be limiting. In particular, SkyGPT's transformer component often requires a massive amount of training data to achieve good performance. Moreover, the dataset was collected solely on the Stanford campus in California, where cloudy days are far less frequent than sunny days over the course of a year; the amount as well as the diversity of cloudy samples are therefore limited. <cit.> recently conducted a comprehensive survey and identified 72 existing open-source sky image datasets collected globally for short-term solar forecasting and related research, which can provide useful information for compiling a large-scale dataset for training the proposed probabilistic solar forecasting system.
* Examining emerging architectures for PV output prediction and tuning the hyper-parameters of the video prediction model SkyGPT. We only examined U-Net, which is essentially a CNN, as the backbone of the PV output prediction model in this study. With recent advances in deep learning, transformers have been successfully applied to various computer vision tasks and show performance competitive with the previously dominant CNNs <cit.>. The Vision Transformer (ViT) <cit.> has great potential to be adapted to the PV output prediction task studied here. While SkyGPT is a novel combination of VideoGPT and PhyDNet for video prediction, we mostly used the default settings of these two networks for training SkyGPT. Architecture and hyper-parameter tuning of SkyGPT could further improve its performance.
For prediction sharpness, our proposed system can sometimes generate undesirably wide prediction intervals even when there is not much variance in the PV output (see time periods 12:30-13:00 and 14:30-15:00 of the prediction curve by "SkyGPT→U-Net" on day 2019-11-29 in Figure <ref>). This behavior mainly results from the diverse futures generated by SkyGPT. The possible futures are sampled from a learned distribution that approximates that of the training video dataset without taking the corresponding PV output into account, as the two models are trained separately. A potential way to improve the prediction sharpness is to use the signal from the PV output prediction model training to guide the stochastic image prediction; specifically, a new loss function could be designed to reward future image generations that lead to PV output predictions within a certain range around the real PV output, while penalizing generations that lead to PV output outside this range. In other words, the video prediction model can learn a distribution with controlled variance based on the feedback from the PV output prediction model training.
Another issue that needs attention is the gap between the generated images and the real images. As the PV output prediction model U-Net was trained on pairs of real sky images and PV output and never saw generated images during the training process, feeding generated images to the PV output prediction model can cause errors during the inference phase. We addressed this issue by applying a fine-tuning strategy, the same as that used by <cit.> for transferring the knowledge from a pre-trained solar forecasting model to a new model. Specifically, we kept the same U-Net architecture for PV output prediction, initialized the model with the weights of the U-Net model pre-trained on the real data, and further trained it by feeding the generated images with a lower learning rate, so that adjusted weights could be learned that incorporate information about the generated images. We then applied this fine-tuned model to the test set for PV output prediction. However, the results show that the fine-tuning has little effect on the probabilistic forecasting metrics CRPS and WS, and the models without fine-tuning even perform slightly better than the ones with fine-tuning most of the time. Methods should be developed in the future to deal with this discrepancy between the input images used for training and those used at inference.
§ CONCLUSION
This work explores 15-minute-ahead probabilistic PV output prediction for a 30-kW rooftop PV system using synthetic future sky images from deep generative models. We introduced SkyGPT, a stochastic physics-informed video prediction model that is capable of generating multiple possible futures of the sky from historical sky image sequences. Extensive experiments were performed to compare SkyGPT with other benchmark video prediction models for future sky image prediction as well as using the generated images for probabilistic solar forecasting. For video prediction, the qualitative and quantitative results show that the proposed SkyGPT model can effectively capture the cloud dynamics and generate realistic yet remarkably diverse future sky images. It excels in long-term frame prediction and outperforms most of the benchmark models in terms of VGG cosine similarity beyond 5 minutes. For PV output prediction, the collection of generated future images from various video prediction models is fed to a specialized U-Net model and compared to an end-to-end deep learning baseline SUNSET as well as the smart persistence model. Coupling SkyGPT with U-Net shows better prediction reliability and sharpness for the test set than all other solar forecasting methods, achieving a continuous ranked probability score of 2.81 (13% better than SUNSET and 23% better than smart persistence) and a Winkler score of 26.70. Error analysis indicates that although video prediction plays an important role in determining the probabilistic forecasting performance, error caused by PV output inference still dominates, further suggesting that more efforts should be put on improving the PV output prediction in future work.
|
http://arxiv.org/abs/2306.04865v2
|
20230608013543
|
MyStyle++: A Controllable Personalized Generative Prior
|
[
"Libing Zeng",
"Lele Chen",
"Yi Xu",
"Nima Kalantari"
] |
cs.CV
|
[
"cs.CV"
] |
Texas A&M University
USA
OPPO US Research Center, InnoPeak Technology, Inc.
USA
OPPO US Research Center, InnoPeak Technology, Inc.
USA
Texas A&M University
USA
Our controllable personalized prior, trained on a collection of images of an individual, provides full control over a set of attributes, while generating results that accurately portray the facial features of that person. On the left, we demonstrate our method's ability to synthesize images of Taylor Swift with user-defined expressions and yaw angles. In the middle, we show the editing results of our method for Michelle Obama. Our approach allows the user to directly generate an edited image with a set of desired attributes. Our approach can also be used to enhance images with desired attributes, as shown on the right for image inpainting. Here, we show our inpainted results with two different expressions for an image of Scarlett Johansson.
In this paper, we propose an approach to obtain a personalized generative prior with explicit control over a set of attributes. We build upon MyStyle, a recently introduced method, that tunes the weights of a pre-trained StyleGAN face generator on a few images of an individual. This system allows synthesizing, editing, and enhancing images of the target individual with high fidelity to their facial features. However, MyStyle does not demonstrate precise control over the attributes of the generated images.
We propose to address this problem through a novel optimization system that organizes the latent space in addition to tuning the generator. Our key contribution is to formulate a loss that arranges the latent codes, corresponding to the input images, along a set of specific directions according to their attributes.
We demonstrate that our approach, dubbed MyStyle++, is able to synthesize, edit, and enhance images of an individual with great control over the attributes, while preserving the unique facial characteristics of that individual.
MyStyle++: A Controllable Personalized Generative Prior
Nima Khademi Kalantari
July 31, 2023
=======================================================
§ INTRODUCTION
Ever since the introduction of generative adversarial networks (GANs) <cit.>, there has been a growing interest in unconditional image synthesis, which has led to a rapid improvement in the resolution and quality of the images generated by GAN-based approaches. In particular, StyleGAN <cit.>, one of the most popular image generators, produces high-resolution results that are indistinguishable from real images. Building on the success of StyleGAN, a large number of methods <cit.> use it as a prior for semantic face editing and other image enhancement tasks, such as inpainting and super-resolution. However, the major problem with these approaches is that they use a general prior, trained on a large number of diverse identities. Therefore, their edited or enhanced images may not preserve the identity and key facial features of the target person.
The recent approach by Nitzan et al. <cit.>, coined MyStyle, addresses this issue by personalizing the generative prior for an individual of interest. Specifically, given a few images of a person, MyStyle first projects these images into the latent space of a pre-trained StyleGAN to obtain a set of latent vectors, called anchors. It then tunes the generator by minimizing the error between the synthesized anchor images and their corresponding input images. Through this process, the generator becomes highly tuned to reconstruct the individual of interest with high fidelity in the specific regions of the latent space covered by the anchors. MyStyle produces impressive results, preserving the identity and facial features of the target individual, for various tasks such as synthesis, semantic editing, and image enhancement.
However, this technique does not demonstrate precise control over the attributes of the generated images. For example, to synthesize an image with a particular set of attributes, one should randomly sample the convex hull of the anchor points until a desired image is reached by chance. For image editing, MyStyle uses the editing directions provided by approaches, such as InterFaceGAN <cit.>, to offer controllability over the attributes of the generated images. Since these editing directions are learned over the entire domain, they may not reside within the personalized subspace. As shown in Fig. <ref> (top), by performing the edits using the original editing direction, the latent codes will quickly fall outside the personalized subspace, producing images with a different identity. To address this issue, MyStyle personalizes the editing direction by projecting it into the subspace. While the projected edit direction keeps the latent codes within the personalized subspace, it loses the ability to perform disentangled edits. As shown in Fig. <ref> (middle), removing the expression also results in changing the yaw angle.
Our goal is to address these problems by providing full control over a set of pre-defined attributes of the generated images. To this end, we make a key observation that anchors corresponding to a single person are usually clustered together in a small region within the latent space. Therefore, we can organize the latent space within that region by rearranging the anchors. Since a generator like StyleGAN tends to preserve the smoothness of the output variation over the latent space, rearranging the anchors causes the space in between to be dragged with them, resulting in an organized latent space.
Armed with this observation, we propose a novel optimization system to personalize a generative prior by both tuning the generator and organizing the latent space through optimizing the anchors. Our key contribution is to formulate a loss function that arranges the anchors with specific attributes along a particular direction in the latent space. Specifically, we project the anchors onto a set of principal axes and minimize the variance of the projections of all the anchors with the same attribute. By doing so, the generator becomes highly tuned to one individual, while the attributes can be controlled within a small hypercube in the latent space.
We demonstrate that our proposed method, called MyStyle++, allows synthesizing images with high fidelity to the characteristics of one individual, while providing full control over a set of pre-defined attributes. We also show that our method can better disentangle different attributes compared to MyStyle <cit.>. Moreover, we demonstrate that our system can produce images with a desired attribute during image enhancement.
§ RELATED WORK
§.§ Deep Generative Networks
Generative Adversarial Networks (GANs) consist of two main modules: a generator and a discriminator <cit.>. The generator takes a noise vector as input and tries to capture the distribution of true examples. The generator focuses on producing an output that fools the discriminator, whose purpose is to classify whether the output is real or fake. GANs have been used extensively to synthesize images that are in line with the training data distribution <cit.>. Among different variants, StyleGAN <cit.>, which is a carefully re-designed generator architecture, produces the best results, particularly for human faces, that are indistinguishable from real photographs. Instead of feeding the latent code only through the input layer, StyleGAN maps the input to an intermediate latent space, which is injected at each convolution block of the generator. This architecture leads to an automatically learned, unsupervised separation of high-level attributes (e.g., pose and identity when trained on human faces) and stochastic variation in the generated images (e.g., freckles, hair). In our work, we use StyleGAN2 <cit.> as the base network and personalize it by tuning the generator and organizing the latent space.
§.§ Controllable GANs
StyleGAN generates photorealistic portrait images of faces, but it lacks a rig-like control over semantic face parameters that are interpretable in 3D, such as face pose, expressions, and scene illumination. Recently, many StyleGAN variants <cit.> have been introduced to address this problem. For example, StyleFlow <cit.> proposes flow models for non-linear exploration of a StyleGAN latent space. GANSpace <cit.> attempts to analyze the GAN space by identifying latent directions based on principal component analysis (PCA), applied either in latent space or feature space.
Most controllable portrait image generation methods <cit.> either rely on 3D morphable face models (3DMMs) <cit.> to achieve rig-like control over StyleGAN, or utilize another modality as guidance (e.g., facial landmarks and audio) to control the generation. For instance, by building a bijective mapping between the StyleGAN latent code and the 3DMM parameter sets, StyleRig <cit.> achieves the controllable parametric nature of existing morphable face models and the high photorealism of generative face models. Ji et al. propose EAMM <cit.> to generate one-shot emotional talking faces controlled by an emotion source video and an audio clip.
Unfortunately, these approaches either struggle to retain crucial facial features (identity) after editing <cit.> or are unable to maintain explicit control over fully disentangled attributes.
§.§ Few-shot GANs and Personalization
Drawing inspiration from the human capability of picking up the essence of a novel object from a small number of examples and generalizing from there, many works <cit.> seek to further improve the generation quality by adapting the pre-trained model to few-shot image samples. Zakharov et al. <cit.> propose a framework that performs lengthy meta-learning on a large dataset of videos. After this training, the method is able to frame few- and one-shot learning of neural talking head models of previously unseen people as adversarial training problems with high-capacity generators and discriminators. The appearance information of the unseen target person is learned by the adaptive instance normalization layers. More recently, MyStyle <cit.> tunes the weights of a pre-trained StyleGAN face generator to form a local, low-dimensional, personalized manifold in the latent space, using a small reference set of portrait images of the target person. The images synthesized within the adapted personalized latent space have better identity-preserving ability than those of the original StyleGAN. However, MyStyle does not demonstrate precise control over the attributes of the generated images. We focus on addressing this issue by organizing the personalized subspace according to a set of pre-defined attributes.
§ ALGORITHM
Given a few images of an individual with a set of corresponding attributes, our goal is to obtain a personalized generative prior that allows us to synthesize images of that individual with high fidelity and full control over the desired attributes. Specifically, we use the pre-trained StyleGAN <cit.> face generator and adapt it to the target individual through a novel optimization system. During tuning, we organize the latent space by optimizing the anchors according to the attributes to be able to easily sample an image with a desired set of attributes. Additionally, we optimize the generator to ensure it can produce images that are faithful to the characteristics of the target individual. Below we discuss our approach in detail by first explaining our data pre-processing.
§.§ Data Pre-processing
Given a set of N images of an individual, we first follow the pre-processing steps of MyStyle <cit.> to align, crop, and resize the images. We then estimate a set of M pre-defined attributes (e.g., yaw and expression) for each image. Certain attributes have a discrete domain, while others are continuous. We leave the discrete attributes unchanged, but quantize the range of the continuous ones to obtain a_m, p, where m refers to the attribute type m∈{1, ⋯, M}, while p ∈{1, ⋯, P(m)} is the index of the attribute value. Note that the number of quantization levels P(m) could be different for each attribute m. The estimated attributes for each image are then snapped to the nearest quantized values.
A simple example illustrating this process is shown in Fig. <ref>. We provide more details on the attributes and our quantization strategy in Sec. <ref>.
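As an illustration of this pre-processing step, the sketch below snaps continuous attributes to fixed-width bins while leaving discrete attributes untouched; the bin widths follow the values reported in the results section, and the dictionary layout is our own assumption.

```python
def quantize_attributes(raw, bin_widths=None):
    """raw: dict of estimated attributes for one image, e.g. {"yaw": 17.3, "age": 41.0}."""
    if bin_widths is None:
        bin_widths = {"yaw": 5.0, "pitch": 5.0, "age": 2.0}   # degrees, degrees, years
    quantized = {}
    for name, value in raw.items():
        if name in bin_widths:                                # continuous: snap to nearest bin
            w = bin_widths[name]
            quantized[name] = round(value / w) * w
        else:                                                 # discrete: keep as-is
            quantized[name] = value
    return quantized
```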
§.§ Controllable Personalization
We begin by projecting the input images into the latent space of StyleGAN, using the pre-trained encoder by Richardson et al. <cit.>, to obtain a set of N latent codes {w_n}_n = 1^N. We follow the MyStyle <cit.> terminology and call these latent codes anchors. As discussed, in addition to tuning the generator to improve its fidelity to the target individual, we would like to organize the latent space to have full control over a set of attributes. An overview of our approach is shown in Fig. <ref>.
Our key observation is that we can organize the latent space by only rearranging the anchors. This is because the output of StyleGAN changes smoothly with respect to the input, and thus as an anchor moves, its neighborhood will be dragged with it.
Based on this observation, we formulate an anchor loss to rearrange the anchors based on their attributes.
Before explaining our anchor loss in detail, we discuss the properties of an ideal latent space: 1) Each attribute should change along a known direction; d_m for the m^th attribute. This is to ensure we can perform semantic editing and change a particular attribute by simply modifying a latent code along that attribute's direction. 2) All the latent codes that project to the same value along an attribute direction should have the same attribute. For example, all the latent codes that project to 0.5 along the yaw direction should correspond to images of front faces. This allows us to directly sample an image with a certain set of attributes by ensuring that the latent code projects to appropriate values along each attribute direction. 3) The directions for different attributes should be orthogonal to guarantee that the attributes are fully disentangled and changing one will not result in modifying the other attributes.
We propose to codify the three properties into the following anchor loss:
ℒ_anc = ∑_m = 1^M ℒ_d(d_m) = ∑_m = 1^M∑_n = 1^N ‖w_n ·d_m - c_n, m‖.
Here, w_n ·d_m computes the projection of the anchor for the n^th image onto the direction of m^th attribute through dot product. Moreover, c_n, m is the average of the projected anchors into direction d_m for all the images with the same m^th attribute as the n^th image (subset denoted as 𝒩_n, m). Formally, we can write this as follows:
c_n, m = 1/|𝒩_n, m|∑_k ∈𝒩_n, mw_k ·d_m,
where
𝒩_n, m = {k∈{1, ⋯, N} | k ≠ n, f_a(I_n)[m]= f_a(I_k)[m]}.
Here, f_a(I_n)[m] returns the quantized m^th attribute of image I_n. We note that c_n, m changes at every iteration of the optimization. By minimizing the loss in Eq. <ref>, we ensure that all the anchors with the same m^th attribute, project to the same point along m^th attribute direction d_m, satisfying our second desired property. This loss also ensures that each attribute is changed along its specific direction, satisfying the first property. This can be seen visually in Fig. <ref>; for example, if all the images with a specific yaw (each column) project to the same point in the yaw direction, moving along this direction will change the yaw.
To satisfy the third property, we apply principal component analysis (PCA) to all the N anchors and use a subset of the principal components as our d_m. We assign a specific principal component to each d_m through the following objective:
d_m = arg min_v_i ∈Vℒ_d(v_i)
where V is the set of all the principal components. The intuition behind this is that we would like to perform the least amount of rearrangement by ensuring that the latent space is already well aligned with respect to the selected directions. Note that we perform PCA at every iteration of training. Therefore, as we rearrange the anchor points in different iterations, the directions will be updated as well. We also note that although the objective in Eq. <ref> could potentially assign different principal components to a particular attribute direction d_m in different iterations, we did not observe this phenomenon in our experiments.
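A minimal PyTorch sketch of the anchor loss and the direction assignment is given below, assuming the anchors are flattened into an (N, D) tensor and `attr_ids` is an (N, M) integer tensor of quantized attribute indices. It is our reading of the equations above (for simplicity, the group mean includes the anchor itself, whereas Eq. <ref> excludes it), not the authors' released implementation.

```python
import torch

def anchor_loss(anchors, directions, attr_ids):
    # anchors: (N, D), directions: (M, D), attr_ids: (N, M) quantized attribute indices.
    loss = anchors.new_zeros(())
    for m in range(directions.shape[0]):
        proj = anchors @ directions[m]                     # projections w_n . d_m, shape (N,)
        for p in attr_ids[:, m].unique():
            group = proj[attr_ids[:, m] == p]
            if group.numel() < 2:
                continue
            loss = loss + (group - group.mean()).abs().sum()   # pull same-attribute anchors together
    return loss

def choose_directions(anchors, attr_ids, n_components=16):
    # PCA over the current anchors; each attribute m gets the component minimizing L_d.
    _, _, V = torch.pca_lowrank(anchors, q=n_components)   # V: (D, n_components)
    dirs = []
    for m in range(attr_ids.shape[1]):
        losses = torch.stack([anchor_loss(anchors, V[:, i:i + 1].T, attr_ids[:, m:m + 1])
                              for i in range(n_components)])
        dirs.append(V[:, int(losses.argmin())])
    return torch.stack(dirs)                               # (M, D)
```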
To perform personalization, we minimize the combination of the anchor and reconstruction losses
ℒ = ℒ_anc + ℒ_rec,
where the reconstruction loss ℒ_rec minimizes the error between the synthesized images G(w_n) and the corresponding input images I_n. We follow MyStyle and use a combination of LPIPS <cit.> and L2 as our reconstruction loss. During optimization, both the latent codes corresponding to the anchors and the weights of the generator are updated. Note that in addition to adapting the generator to the input image set, the reconstruction loss plays a critical role in avoiding trivial solutions to the anchor loss, e.g., collapsing all the anchors to a single point.
Once the optimization is performed, we obtain an organized latent space 𝒲^* and a tuned generator G^*. All the attributes can be controlled within an M-dimensional hypercube in the organized latent space. The bounds of this hypercube can simply be found by projecting all the anchors onto each axis d_m of the hypercube and computing the minimum and maximum values. Note that all the other attributes, not being used during optimization, are encoded in the remaining PCA dimensions.
§.§ Controllable Synthesis, Edit, and Enhancement
We now describe how to use our personalized generative prior for various tasks.
Synthesis: Controlling the synthesized images can easily be done by ensuring that the sampled latent code projects to the desired location in the M-dimensional hypercube. However, special care must be taken to ensure the latent code does not fall outside of the personalized space. Following MyStyle, we define the convex hull of all the organized anchors w^*_n as the personalized subspace within 𝒲^*. This convex hull is represented through generalized barycentric coordinates as the weighted sum of the anchors, where the weights (coordinates) α = {α_n}_n = 1^N sum up to 1 and are greater than -β (β is a positive value). The latter condition dilates the space by a small amount to ensure expressiveness.
We propose a simple strategy to perform controlled sampling in the dilated convex hull. Specifically, we first randomly sample α to ensure the latent code is within the personalized subspace. We then project the sampled latent code into PCA and set the projected values along the attribute directions d_m to the desired values. Note that, while it is possible for the modified latent codes to fall outside the dilated convex hull and require reprojection to the personalized space, we did not observe such cases in practice. This is mainly because our latent space is organized according to the attributes and our modifications are performed inside a hypercube which is part of the subspace.
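A hedged sketch of this controlled sampling procedure is shown below: draw barycentric weights α, form the latent code as a weighted sum of the organized anchors, and then overwrite its coordinates along the (unit-norm) attribute directions with the requested values. The α sampling scheme shown here stays inside the plain convex hull and omits the dilation by β for brevity.

```python
import torch

def sample_controlled_latent(anchors, directions, target_values):
    # anchors: (N, D) organized anchors W*, directions: (M, D), target_values: length-M list.
    alpha = torch.rand(anchors.shape[0])
    alpha = alpha / alpha.sum()                  # weights sum to 1 (all >= 0 > -beta)
    w = alpha @ anchors                          # latent code inside the personalized subspace
    for d, a in zip(directions, target_values):
        d = d / d.norm()
        w = w - (w @ d) * d + a * d              # set the projection along d to the target a
    return w
```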
Semantic Editing: Since our latent space is organized, the editing process for sampled images is straightforward. To edit an image, we project its latent code to PCA and perform the edit by changing the coordinate in the hypercube. To edit a real image I, we first project the image into the α space through the following objective:
α^* = min_αℒ_rec(G(W^*α), I),
where W^* is a matrix with the organized anchors along its columns. Note that we follow MyStyle's approach to ensure the α values satisfy the conditions of the dilated convex hull, i.e., they sum up to 1 and are greater than -β. Once we obtain the optimized latent code, following Roich et al. <cit.>, we further tune the generator to better match the input image. We then perform the semantic edits by changing the latent code in the PCA space.
Image Enhancement: Given an input image I with a known degradation function Q, our goal is to enhance the image, while controlling the attributes of the reconstructed image. We propose to do this through the following objective:
α^* = arg min_α ℒ_rec(Q(G(W^* α)), I) + λ ∑_m=1^M ‖ (W^* α) · d_m - a_m ‖,
where λ controls the balance between the two terms and we set it to one in our implementation. Here, the first term ensures that the generated image, after applying the degradation function, is similar to the input image. The second term encourages the projection of the latent code W^*α onto the m^th attribute direction to be similar to the desired value a_m. Note that, we can perform enhancement by controlling a subset of the attributes, by only applying the second term to the attributes of interest. Similarly, for uncontrolled enhancement, we simply remove the second term.
§ RESULTS
We implement the proposed approach in PyTorch and adopt the Adam optimizer <cit.> with the default parameters. All the results are obtained after tuning a pre-trained StyleGAN2 <cit.> generator on the FFHQ <cit.> dataset. We perform the tuning for 3000 epochs with a batch size of one and a learning rate of 5e-3 across all datasets. We will release our source code and the network weights (for a few individuals) upon publication.
We have tested our system on the following individuals: Barack Obama (93 images), Emma Watson (304 images), Joe Biden (142 images), Leonardo DiCaprio (217 images), Michelle Obama (138 images), Oprah Winfrey (106 images), Scarlett Johansson (179 images), and Taylor Swift (129 images). We consider the expression, as well as the yaw and pitch angles, as the attributes for all individuals. For Leonardo DiCaprio and Emma Watson, we include age in addition to the other three attributes. Throughout this section, we demonstrate our results on some of these individuals, but more results can be found in the supplementary materials.
We estimate the expression, yaw, and pitch by leveraging AWS Rekognition API <cit.>, while we employ the DEX VGG network <cit.> to estimate the age attribute. We quantize yaw and pitch angles by every 5 degrees and age by every 2 years during the data pre-processing stage, described in Sec. <ref>. For expression, we utilize a combination of the “Smile” and “MouthOpen” attributes of the AWS output, which indicates the presence of the attribute as true or false with a confidence level ranging from 50 to 100. We divide the confidence level by 20% and round it down to the nearest integer, resulting in three groups of presence and three groups of absence for each attribute. We then combine the lowest groups of presence and absence (presence and absence with 50% to 60% confidence) into the same group, resulting in five quantization levels for both “Smile” and “MouthOpen”. The images with the same “Smile” and “MouthOpen” quantization levels are then grouped together.
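The expression binning can be summarized by a small helper; the exact bin edges below reflect our reading of the description above and are therefore an assumption, not the authors' code.

def quantize_confidence(confidence, present):
    """Five-level quantization of one AWS attribute ("Smile" or "MouthOpen").

    confidence: Rekognition confidence in [50, 100]; present: boolean flag.
    Returns a level in {-2, -1, 0, 1, 2}; 0 is the merged low-confidence group,
    positive levels indicate presence, negative levels absence.
    """
    group = min(int(confidence // 20), 4)   # 50-59 -> 2, 60-79 -> 3, 80-100 -> 4
    if group == 2:                          # lowest presence/absence groups are merged
        return 0
    level = group - 2                       # 1 or 2
    return level if present else -level

# Images are then grouped by the pair (smile level, mouth-open level).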
We compare our approach against two versions of MyStyle, called MyStyle_I and MyStyle_P, where the editing directions are obtained from InterFaceGAN <cit.> and PCA (using Eq. <ref>), respectively. Note that in MyStyle_P we do not organize the latent space and only tune the generator, i.e., minimize the reconstruction loss, but not the anchor loss. Although MyStyle does not demonstrate controllable synthesis, we use the approach discussed in Sec. <ref> with the directions from InterFaceGAN and PCA to imbue MyStyle with this capability.
Here, we show a subset of our results, but more comparisons and evaluations can be found in our accompanying video and supplementary materials.
Synthesis: We begin by comparing our controllable synthesis results for Oprah Winfrey, Barack Obama, Scarlett Johansson, and Leonardo DiCaprio against MyStyle_I and MyStyle_P. For each person, we show a set of results by fixing one attribute and randomly sampling the rest. As shown in Fig. <ref>, both MyStyle_I and MyStyle_P produce results with large variations in the attribute of interest, because the directions from InterFaceGAN <cit.> and PCA do not match the correct attribute directions in the personalized subspace. For example, on the top, a large smile is expected, whereas images generated by MyStyle_I and MyStyle_P exhibit a range of different expressions. While yaw is usually the dominant attribute in the latent space and relatively easy to control, MyStyle_I and MyStyle_P exhibit undesirable yaw variance for Barack Obama. Similarly, these baselines produce results with large pitch and age variations for Scarlett Johansson and Leonardo DiCaprio, respectively. In contrast, our approach produces results that are consistent in all four cases. Note that InterFaceGAN does not provide a direction corresponding to the pitch, and thus we only compare against MyStyle_P for the case with fixed pitch.
We further numerically evaluate the ability of our method to control the attributes in comparison with MyStyle_P and MyStyle_I in Table <ref>.
To accomplish this, we generate 100 images by fixing one attribute and randomly sampling the other ones.
We then estimate the attributes of the generated images, using AWS Rekognition for expression, as well as the yaw and pitch angles, and DEX VGG <cit.> for age, and compute the standard deviation of the estimated attribute for all the 100 images.
For each attribute, we show the results for five normalized values (0.0, 0.25, 0.5, 0.75, 1.0).
As seen, MyStyle_P and MyStyle_I generate inferior results as the PCA and InterFaceGAN attribute directions are not well-aligned with the correct attribute directions in the subspace.
In contrast, our approach consistently demonstrates the smallest standard deviation across all attributes for both Scarlett Johansson and Leonardo DiCaprio.
A potential concern is whether our latent space organization could compromise the diversity and preservation of the identity of the results. To numerically evaluate this, we compute the ID metric, as proposed in MyStyle <cit.>, on the results generated by both our approach and MyStyle for Scarlett Johansson and Leonardo DiCaprio. This metric measures the cosine similarity of the features extracted by a deep classifier between the generated image and the closest one from the training data. Besides measuring the ability to preserve the identity, we also compute the diversity of the synthesized images. We follow the protocol suggested by Ojha et al. ojha2021few to compute the intra-cluster diversity using the LPIPS score. Specifically, we generate 1000 images and assign them to one of the 10 training images, by using the lowest LPIPS distance. Then we compute the average pair-wise LPIPS distance within members of the same cluster and then average over the 10 clusters. As shown in Table <ref>, our method generates results that are comparable to MyStyle in terms of ID metric and diversity score, demonstrating that our latent space organization does not compromise the diversity and identity preservation of the results.
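The intra-cluster diversity protocol can be sketched as follows; the lpips package is used here as a stand-in LPIPS implementation, and tensor shapes and names are illustrative assumptions.

import torch
import lpips  # assumed LPIPS implementation (pip install lpips)

@torch.no_grad()
def intra_cluster_diversity(generated, train_images):
    """generated: (G, 3, H, W), train_images: (T, 3, H, W), both scaled to [-1, 1]."""
    metric = lpips.LPIPS(net='alex')

    # assign each generated image to the closest training image (lowest LPIPS)
    assign = torch.stack([
        torch.stack([metric(g[None], t[None]) for t in train_images]).flatten().argmin()
        for g in generated])

    # average pairwise LPIPS within each cluster, then average over clusters
    scores = []
    for c in range(len(train_images)):
        members = generated[assign == c]          # could be subsampled for speed
        pair_d = [metric(members[i][None], members[j][None])
                  for i in range(len(members)) for j in range(i + 1, len(members))]
        if pair_d:                                # clusters with < 2 members are skipped
            scores.append(torch.stack(pair_d).mean())
    return torch.stack(scores).mean().item()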
Semantic Editing: We begin by comparing our semantic editing results against MyStyle_P and MyStyle_I in Fig. <ref>. Specifically, we modify the expression, yaw, pitch, and age of Scarlett Johansson, Michelle Obama, Joe Biden, and Leonardo DiCaprio, respectively. MyStyle_P has difficulties editing Scarlett Johansson's expression and predominantly changes the yaw. While MyStyle_I is better able to edit the expression, it slightly changes the yaw (see the supplementary video) and produces a neutral face with altered identity (the leftmost image). Moreover, both MyStyle_P and MyStyle_I change the expression when editing Michelle Obama's yaw angle. For Joe Biden, MyStyle_P struggles to properly edit the pitch angle as the PCA direction is not well-aligned with the pitch attribute direction in the subspace. Finally, when editing the age of Leonardo DiCaprio, both MyStyle_P and MyStyle_I exhibit noticeable changes to the expression and pitch, respectively. Additionally, both approaches struggle to preserve the identity of the edited images in extreme cases (rightmost for MyStyle_P and leftmost for MyStyle_I). In contrast to these techniques, our method only changes the attribute of interest when producing edited results and is able to better preserve the identity. Again, we note that we do not show pitch editing for MyStyle_I as InterFaceGAN does not provide a direction corresponding to the pitch attribute.
Next, we compare our method against the other techniques for editing real images of Barack Obama, Emma Watson, Scarlett Johansson, and Leonardo DiCaprio, in Fig. <ref>.
Both MyStyle_P and MyStyle_I have difficulties preserving the identity of Barack Obama when removing the smile. Additionally, MyStyle_P struggles to maintain the yaw angle.
For Emma Watson, both MyStyle_P and MyStyle_I change the expression when editing the yaw angle. For Scarlett Johansson, MyStyle_P is unable to edit the pitch and instead modifies the yaw angle. Finally, MyStyle_P changes the yaw angle when editing Leonardo DiCaprio's age, while MyStyle_I has difficulties maintaining the identity.
In contrast to these methods, our approach disentangles the attributes more effectively and is better at preserving the identities in all four cases.
We note that the reason behind MyStyle's occasional failure to preserve the identity is that the edited latent codes, in some cases, fall outside the personalized subspace. While the loss of identity can be resolved by projecting the edited latent codes back to the convex hull, using MyStyle's suggested strategy, this process produces results with undesirable attributes. This is shown in Fig. <ref> where the objective is to completely remove Barack Obama's smile and produce a teenage Leonardo DiCaprio. MyStyle_I produces results with altered identities as evident both visually and numerically through the ID metric. The identity is improved by projecting the edited latent codes to the subspace (third column), but this process increases the smile (top) and age (bottom).
We further numerically compare our real image editing results against MyStyle_P and MyStyle_I on Leonardo DiCaprio and Michelle Obama in Tab. <ref>. Specifically, we evaluate the editing consistency by computing the mean standard deviation of the edited attribute, while we measure the attribute disentanglement by calculating the mean standard deviation of the non-edited attributes. The standard deviation is computed over 21 edits and they are averaged over 15 and 21 images for Michelle Obama and Leonardo DiCaprio, respectively. We additionally evaluate the ability of different methods to preserve the identity using the ID metric. As seen, our method consistently outperforms MyStyle_P and MyStyle_I across all metrics.
Image Enhancement: As discussed in Sec. <ref>, since our method provides precise control over the attributes, it can be used to perform controllable image enhancement. This is shown in Figs. <ref> and <ref> for image inpainting and super-resolution, respectively. As seen our method can produce inpainted and super-resolved images with the desired expressions.
§ LIMITATIONS AND FUTURE WORK
Our approach is able to produce high-quality results with great control over a set of attributes. However, it has a few limitations. First, the number of images required for personalization increases significantly with the number of desired attributes. This is because we rely on the propagation of the anchors to the neighboring regions. If the anchors in certain regions are sparse, those areas are not going to be personalized appropriately. However, this is not unique to our approach and MyStyle suffers from the same drawback. For example, if MyStyle is personalized with images of a young subject, it cannot produce images of the subject at an old age with high fidelity. Second, while our approach provides great control over the attributes, our reconstructions for attributes like view are not physically accurate. In the future, it would be interesting to incorporate the image formation process into our system to improve accuracy.
We note that although our approach has the potential to be applied to cases beyond MyStyle, such as organizing the entire latent space of StyleGAN, one significant challenge arises: organizing the entire latent space necessitates a large number of anchor images, resulting in time-consuming and difficult optimization. Furthermore, special attention must be given to prevent anchors with different identities from being placed closely together after optimization; this is not an issue when handling a single individual.
§ CONCLUSION
We have presented an approach to obtain a controllable personalized generative prior from a set of images of an individual. Our system allows for reconstructing images of the individual that faithfully preserve the key facial features of the individual, while providing full control over a set of pre-defined attributes. In addition to tuning a pre-trained generator, we organize its latent space such that different attributes change along certain known directions. To do this, we formulate a loss that rearranges the latent codes, corresponding to the input images, according to the attributes. We show that our method better disentangles the attributes than MyStyle, while providing full control over the attributes.
|
http://arxiv.org/abs/2306.03018v1
|
20230605163501
|
Quantification of Uncertainties in Deep Learning-based Environment Perception
|
[
"Marco Braun",
"Moritz Luszek",
"Jan Siegemund",
"Kevin Kollek",
"Anton Kummert"
] |
cs.LG
|
[
"cs.LG",
"cs.AI"
] |
Quantification of Uncertainties in Deep Learning - based Environment Perception
1st Marco Braun
University of Wuppertal
Wuppertal, Germany
[email protected]
2nd Moritz Luszek
Aptiv
Wuppertal, Germany
[email protected]
3rd Jan Siegemund
Aptiv
Wuppertal, Germany
[email protected]
4th Kevin Kollek
University of Wuppertal
Wuppertal, Germany
[email protected]
5th Anton Kummert
University of Wuppertal
Wuppertal, Germany
[email protected]
July 31, 2023
=========================================================================================================================================================================================================================================================================================================================================================================================================================
In this work, we introduce a novel Deep Learning-based method to perceive the environment of a vehicle based on radar scans while accounting for uncertainties in its predictions. The environment of the host vehicle is segmented into equally sized grid cells which are classified individually. Complementary to the segmentation output, our Deep Learning-based algorithm is capable of differentiating uncertainties in its predictions as being related to an inadequate model (epistemic uncertainty) or noisy data (aleatoric uncertainty). To this end, weights are described as probability distributions accounting for uncertainties in the model parameters. Distributions are learned in a supervised fashion using gradient descent. We prove that uncertainties in the model output correlate with the precision of its predictions. Compared to previous concepts, we show superior performance of our approach to reliably perceive the environment of a vehicle.
Deep Learning, Environment Perception, Uncertainty Estimation
§ INTRODUCTION
In recent years, the development of advanced driver assistance systems up to fully autonomous driving has been driven forward extensively. To implement these systems, sensors such as cameras, radar or lidar are utilized to perceive the surroundings of a vehicle. While camera and lidar sensors tend to fail in challenging weather conditions like fog or heavy rain, radar sensors show superior reliability in these situations while being less expensive. Moreover, reflections from radar sensors contain valuable intrinsic properties such as the relative velocity of the detected object, also known as Doppler velocity v_R, and the reflection intensity (RCS). In a partially static and dynamic environment, radar returns are therefore particularly suitable to derive scene information. This perception can then be used in a variety of driving assistance functions such as collision avoidance or ego trajectory calculation in automated driving.
Inverse Sensor Models (ISM) <cit.> have been extensively used to segment the surroundings of a vehicle into drivable and occupied areas based on radar signals. By defining occupancy grid maps (OGM) as a semantic segmentation task, latest approaches such as <cit.> outperformed traditional ISM based methods. These algorithms are trained by utilizing deep learning (DL) techniques to predict whether a cell is occupied or free based on radar returns.
While calculations by ISMs, however, are comprehensible due to their explicitly formulated model and therefore trustworthy, the internal processing of data in a neural network can hardly be interpreted. In most cases, the user has no choice but to blindly trust the network's predictions, with no knowledge about internal reasoning and decision-making. The safety-critical nature of driver assistance systems, however, requires a reliable prediction for perceiving the area around the vehicle. To address these safety concerns, the system must be able to quantify the uncertainties associated with a prediction from a neural network as depicted in Figure <ref>. Therefore, measuring two types of uncertainty is desired: Aleatoric uncertainty accounts for noise and ambiguities in the data itself. When processing radar reflections, this uncertainty can be caused by spatial inaccuracies of a reflection or a high variance in radar-specific properties that are characteristic for certain classes. Epistemic uncertainty represents the uncertainty in the model parameters. It captures how closely the characteristics of a given situation match the data on which a network was trained. By quantifying this uncertainty, the system is able to identify unfamiliar environmental conditions that would otherwise lead to an overconfident prediction.
While aleatoric uncertainty can be captured by mapping the network outputs to a categorical distribution <cit.>, estimating the uncertainty in the model parameters themselves requires modifications to the network structure. Epistemic uncertainty in a model can be captured by placing distributions on its weights that theoretically represent all plausible model parameters, given the data <cit.><cit.>. Approaches like <cit.> <cit.> present Bayesian Neural Network (BNN) architectures that quantify epistemic uncertainties for the semantic segmentation of images by utilizing Monte Carlo Dropout <cit.>. We build up on these methods to reliably perceive the environment of a vehicle based on radar scans.
In the first part of this work, we show how Gaussian distributions on the weights can be used to capture model uncertainty and how aleatoric, data-related, uncertainty is derived from the predictions of the same network. Building on that, we present a model to segment the environment of a vehicle by processing radar data while accounting for uncertainties in its predictions. By parameterizing each weight in this model by a Gaussian distribution, the network doubles in size in terms of trainable parameters. To counteract this, we present a more efficient hybrid deterministic, probabilistic network architecture. In the last section, we analyze quantitatively and qualitatively the ability of our approach to perceive the environment while taking into account epistemic and aleatoric uncertainties. Finally, we compare our approach to state-of-the-art implementations based on MC Dropout which we adapt for the task of environment perception.
§ RELATED WORK
§.§ Uncertainties in Semantic Segmentation
In DL-systems, we want our neural network to learn how to map an input x ∈ X on a target y ∈ Y. For semantic segmentation of grid cells, the input data shape X corresponds to a regular grid structure of c_l x c_w cells, each containing F_in input features. We then apply a network f_θ(x) with weights w that are parameterized by θ to predict C class probabilities for each cell individually. In order to optimize an ordinary neural network model to perform semantic segmentation, the maximum-likelihood estimate (MLE) is captured by maximizing the log-likelihood
w_MLE = arg max_w log p(Y | X, w)
of the class probabilities for each cell given the training data X_train, Y_train. Such networks, however, cannot account for uncertainties in their parameters.
To capture epistemic uncertainty, BNN <cit.><cit.> architectures can be deployed. These networks are implemented by placing prior probability distributions p(w) on the model weights so that the maximum a posteriori (MAP) objective
w_MAP = arg max_w log p(Y | X, w) + log p(w)
can be applied to find the optimal distributions on the weights w given the training data X_train, Y_train <cit.>. As a result, the posterior probability distribution p(w|X, Y) indicates an infinite set of all possible model parameters of the network given the training data.
In real world scenarios, the exact posterior distribution, however, is not tractable and not parameterizable. Therefore, different approaches were developed to approximate p(w | X, Y).
By placing Bernoulli distributions on the weights of a neural network, <cit.> presents a BNN that is used in a broad variety of approaches to capture epistemic uncertainty in the semantic segmentation of images. For Bayesian SegNets <cit.>, the authors slightly modify the general idea of BNNs by placing dropout layers between layers of their network that are activated during testing. This approach of capturing the approximate posterior distribution is called Monte Carlo Dropout (MC Dropout). Similarly, the authors of <cit.> present a modified DenseNet <cit.> for semantic segmentation of images while capturing epistemic and aleatoric uncertainties. Comparable to Bayesian SegNet, epistemic uncertainty is obtained by utilizing dropout layers. Furthermore, the authors of <cit.> capture heteroscedastic aleatoric uncertainty - aleatoric uncertainty dependent on the network inputs - by implementing a loss attenuation that is learned in an unsupervised manner. The resulting attenuation factor is then interpreted as the aleatoric uncertainty predicted by the network.
§.§ Deep Learning on Environment Perception
Returns from radar sensors are broadly used for the calculation of occupancy grid maps due to their long range sensing and superior robustness in challenging weather conditions. Traditionally, grid maps are deducted by using a combination of ISMs and Bayesian filtering <cit.> to calculate the probability for a grid cell being free or occupied based on sensor returns <cit.><cit.>. Lately, machine learning approaches were developed to perform occupancy grid mapping as a data-driven task. In <cit.>, the authors formulate ISMs as a three class semantic segmentation task by applying a neural network to predict for each grid cell whether it is occupied, free or unobserved. Ground truth (GT) is generated from lidar ray tracing so that the system can be trained in a self-supervised manner. This approach outperforms traditional grid mapping techniques.
The authors of <cit.> expand the idea of applying a DL-based system to learn ISMs by modeling heteroscedastic aleatoric uncertainty that, based on the input radar scan, should indicate whether the system assumes a cell to be occluded. This is achieved by treating each output as a normally distributed latent variable z. This variable is parameterized by a standard deviation factor γ_ϕ which accounts for the uncertainty of a cell being observable and a mean value μ_ϕ indicating the predicted probability of a cell being occupied.
In order to produce reliable grid maps based on a neural network that processes sensor scans, capturing the uncertainty that arises from occlusion is not sufficient. Instead, the network's occasional ignorance about how to reason on the environment due to insufficient training data or noise related to sensor-specific properties needs to be quantified. Additionally, the two presented approaches <cit.> and <cit.> show that lidar sensors are broadly utilized to gather data about the environment of a vehicle due to their high measurement accuracy. The transition from knowledge gained from the lidar sensor to a system that consumes radar data, however, poses additional challenges related to different sensor characteristics which further reduce the confidence in the network predictions. In the following sections, we therefore define a model for environment perception that is able to reason about the reliability of its outputs by capturing epistemic and aleatoric uncertainties.
§ METHOD
§.§ Measuring Uncertainties
As stated above, we want to capture both epistemic and aleatoric uncertainties from our network outputs. Differentiating between these two kinds is desired as it opens up opportunities to better interpret each network prediction so that strategies can be deduced on how to handle each situation. In order to measure both epistemic and aleatoric uncertainty, we first calculate the predictive uncertainty ℍ_p which can then be decomposed. In classification problems this predictive uncertainty can be captured by calculating the entropy of a network output <cit.>. We calculate this predictive entropy as
ℍ_p= - ∑^C p(y^*|x^*, X, Y) log p(y^*|x^*, X, Y)
for C classes. This term can be interpreted as a composition of aleatoric and epistemic uncertainties <cit.>.
We can deduce the former by calculating the entropy while assuming a fixed value for the weights of the network
ℍ_a= E_q(w | θ) [ℍ(y^*|x^*, w)]
since the resulting uncertainty ℍ_a arises from the input data rather than the weights <cit.>.
The epistemic uncertainty can then be obtained as the difference between the predictive and the aleatoric uncertainty
ℍ_e = ℍ_p - ℍ_a
since it results from uncertainties in the weight parameters that are not covered by ℍ_a.
§.§ Epistemic Uncertainty in environment perception
As mentioned in Section <ref>, epistemic uncertainty is modeled by capturing the posterior distribution p(w | X, Y) on the weights given the data. This posterior distribution, however, is intractable as it indicates the probability of each weight in a neural network to take any possible value. We therefore approximate p(w | X, Y) by parameterizable Gaussian distributions q(w | θ) with parameters θ = [μ, σ] (Figure <ref>). These parameters can then be learned by backpropagation using variational inference
<cit.><cit.><cit.>.
During training, we therefore want to minimize the Kullback-Leibler (KL) divergence between the actual posterior p(w | X, Y) and its parameterized approximation q(w | θ):
θ^* = arg min_θKL[q(w | θ)||p(w | X, Y)]
While this objective function still contains the intractable posterior distribution p(w | X, Y), it can be transformed into the Evidence Lower Bound function<cit.><cit.> which is used to define the loss function of our training
L(θ, X, Y) = KL[q(w | θ)||p(w)] - E_q(w|θ)[log p(Y |X, w)]
The latter term of equation <ref> represents the data dependent log-likelihood loss. This term indicates how well the network is able to map the input data to the GT. The former part of equation <ref> depends on the prior probability distribution p(w) = 𝒩(w|0, γI). By default, we set γ=1. This prior dependent loss increases the variance of distributions on weights that are rarely optimized during training, indicating a high uncertainty in those weight parameters. We then optimize our network by backpropagation utilizing the reparameterization trick <cit.>.
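For illustration, a convolutional layer with Gaussian weight posteriors and the reparameterization trick could look as follows; the initialization constants, the softplus parameterization of σ, and the closed-form KL to a zero-mean Gaussian prior are common choices assumed here, not necessarily the exact ones used in this work.

import torch
import torch.nn as nn
import torch.nn.functional as F

class BayesConv2d(nn.Module):
    """Sketch of a 2D convolution with Gaussian weight posteriors q(w|θ)."""

    def __init__(self, in_ch, out_ch, k, prior_std=1.0, **conv_kwargs):
        super().__init__()
        self.mu = nn.Parameter(0.05 * torch.randn(out_ch, in_ch, k, k))
        self.rho = nn.Parameter(torch.full((out_ch, in_ch, k, k), -5.0))  # sigma = softplus(rho)
        self.prior_std = prior_std
        self.conv_kwargs = conv_kwargs

    def forward(self, x):
        sigma = F.softplus(self.rho)
        w = self.mu + sigma * torch.randn_like(sigma)    # reparameterized weight sample
        return F.conv2d(x, w, **self.conv_kwargs)

    def kl(self):
        """Closed-form KL( N(mu, sigma^2) || N(0, prior_std^2) ), summed over all weights."""
        sigma = F.softplus(self.rho)
        var_ratio = (sigma / self.prior_std) ** 2
        return 0.5 * (var_ratio + (self.mu / self.prior_std) ** 2
                      - 1.0 - torch.log(var_ratio)).sum()

# Training loss as in the equation above: the sum of the layers' kl() terms plus the
# cross-entropy of a stochastic forward pass (the data-dependent likelihood term).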
For semantic segmentation, the categorical distribution p(y^*|x^*, X, Y) for each output cell can then in theory be obtained by marginalizing over the learned distributions on the weights
p(y^*|x^*, X, Y) = ∫ p(y^*|x^*, w) q(w | θ) dw
for y^* defining the output of the model f_θ(x^*) based on input data x^*.
In practice, we apply Monte Carlo sampling to approximate this integral from equation <ref> by drawing N point estimates w_n ∼ q(w | θ) for each weight from the approximated posterior distributions:
p(y^*|x^*, X, Y) ≈ 1/N ∑_n=1^N p(y^*|x^*, w_n)
By utilizing this equation, we are able to calculate the predictive entropy from equation <ref> as
ℍ_p ≈ - ∑^C ( 1/N ∑_n=1^N p(y^*|x^*, w_n) ) log ( 1/N ∑_n=1^N p(y^*|x^*, w_n) ).
Furthermore, the approximation in equation <ref> can now also be used to determine the aleatory uncertainty from equation <ref> as
ℍ_a ≈ - 1/N ∑_n=1^N [ ∑^C p(y^*|x^*, w_n) log p(y^*|x^*, w_n) ].
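Given the softmax outputs of N stochastic forward passes, the three uncertainty maps follow directly from these equations; a minimal PyTorch sketch (array names are illustrative):

import torch

def decompose_uncertainty(probs, eps=1e-12):
    """probs: (N, C, H, W) softmax outputs of N forward passes with re-sampled weights.
    Returns per-cell predictive, aleatoric and epistemic entropies."""
    mean_p = probs.mean(dim=0)                                      # (C, H, W)
    h_pred = -(mean_p * (mean_p + eps).log()).sum(dim=0)            # predictive entropy
    h_alea = -(probs * (probs + eps).log()).sum(dim=1).mean(dim=0)  # expected entropy
    h_epis = h_pred - h_alea                                        # epistemic part
    return h_pred, h_alea, h_epis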
§.§ Network Design
The model we deploy is depicted in Figure <ref>.
After applying batch normalization on the input data, we first utilize Atrous Spatial Pyramid Pooling (ASPP)-Layers introduced by <cit.> to subsequently extract descriptive patterns from the input data: Input features F_in containing information about radar reflections (I in Figure <ref>) are processed to obtain patterns in spatially local correlations of neighboring cells. By deploying a variety of 2D convolutions with different dilation rates in any ASPP layer, feature maps are processed in parallel on different scales to realize a variety of field of views. This allows the network to reliably extract information about objects of different sizes that may be present in numerous neighboring cells. The features that are extracted by parallel convolutions in an ASPP layer are then cell-wise concatenated while maintaining c_l x c_w grid cells. We increase the capacity of the network to recognize patterns in the data by subsequently applying these layers as described in Figure <ref>.
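One possible ASPP layer in this spirit is sketched below; the channel counts and dilation rates are illustrative assumptions, not the paper's exact configuration.

import torch
import torch.nn as nn

class ASPPLayer(nn.Module):
    """Parallel dilated 3x3 convolutions over the c_l x c_w grid, concatenated cell-wise."""

    def __init__(self, in_ch, ch_per_branch=32, dilations=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, ch_per_branch, kernel_size=3, padding=d, dilation=d),
                nn.ReLU(inplace=True))
            for d in dilations])

    def forward(self, x):                 # x: (B, F_in, c_l, c_w)
        # padding = dilation keeps the spatial grid size unchanged
        return torch.cat([branch(x) for branch in self.branches], dim=1)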
The c_l x c_w high level, complex features resulting from the stacked ASPP layers are then processed into cell-wise output probability scores for each of the C considered classes. For this purpose, we use one 2D convolutional layer with C output logits per cell and apply a softmax function to obtain a C-dimensional categorical distribution.
As mentioned in section <ref>, we further modify the network in order to capture epistemic uncertainty. Based on the structure presented in Figure <ref> we implemented three different modifications that will be evaluated and compared in section <ref>: First, we place parameterized Gaussian distributions as described in Figure <ref> on all weights of our network. We then introduce a hybrid structure containing deterministic weights in between I and II and Gaussian distributions on the weights in between II and III of Figure <ref>. In a third configuration, we place a dropout layer on II that remains active during testing to provide a MC dropout implementation of our approach.
§ EXPERIMENTS
In this section, we describe the setup and results of experiments we conducted to verify the ability of our network to reliably perceive the environment while capturing aleatoric and epistemic uncertainties. For this purpose, we first describe how we preprocess input data and GT. To evaluate the effects of the network adaptations that we described in Chapter <ref> on the performance of the system, we first train the model from Figure <ref> in a purely deterministic configuration. We then analyze, both qualitatively and quantitatively, the extent to which quantifying aleatoric and epistemic uncertainty allows for more accurate and reliable predictions. We further evaluate various network layouts to capture model uncertainties: a purely probabilistic model in which all weights are replaced by parameterized Gaussian distributions, a hybrid approach of probabilistic and deterministic weights, and networks which apply MC dropout as described in current state-of-the-art approaches.
§.§ Data Preprocessing
To train and evaluate the performance of our approach, we use recordings from a vehicle equipped with radar and lidar sensors. From the radar sensors we obtain point clouds, i.e. detections in two dimensions containing properties that we use as input data to our network.
We pre-process the input data by projecting a grid of c_l x c_w cells on the area around the vehicle with the ego vehicle located in the center. In order to generate denser input scans, we concatenate a fixed amount of subsequent point clouds and compensate for motion of the ego vehicle between successive recordings in time. We then assign each detection to its respective cell based on its coordinates P = (p_l, p_w). Reflections outside the projected grid frame are disregarded. We then project properties from detections in the point cloud to the associated grid cells. Among others, these features include the amount as well as the average Doppler values, average RCS values and the relative time of recording of all detections within a grid cell. Additionally, each feature is normalized individually so that the ranges of all input values are within [0, 1].
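A simplified version of this cell-wise encoding is sketched below; the grid extent, cell size and the exact feature set are illustrative assumptions.

import numpy as np

def radar_to_grid(points, grid_size=(128, 128), cell_m=0.5):
    """points: (N, 4) ego-motion-compensated detections [x, y, doppler, rcs] in meters.
    Returns per-cell features: detection count, mean Doppler, mean RCS."""
    cl, cw = grid_size
    feats = np.zeros((3, cl, cw), dtype=np.float32)
    half_l, half_w = cl * cell_m / 2, cw * cell_m / 2
    for x, y, doppler, rcs in points:
        i, j = int((x + half_l) // cell_m), int((y + half_w) // cell_m)
        if 0 <= i < cl and 0 <= j < cw:            # detections outside the grid are dropped
            feats[0, i, j] += 1
            feats[1, i, j] += doppler
            feats[2, i, j] += rcs
    counts = np.maximum(feats[0], 1)
    feats[1] /= counts                             # average Doppler per cell
    feats[2] /= counts                             # average RCS per cell
    # per-feature min-max normalization to [0, 1] would follow here
    return feats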
We use annotated lidar point clouds to create cell-wise labels. For lidar reflections resulting from static surroundings, we concatenate all scans within a scene to receive a denser representation of the environment. In order to transfer annotations from three-dimensional lidar point clouds to two-dimensional grid cells, we project the reflections on the x-y plane. A cell is assigned to the class that is most contained in the projected point clouds. Cells that do not contain a single projected lidar reflection are labeled as unknown.
To focus on the cells that are observable by the radar sensor during training, we apply a cell-wise loss weighting. To this end, we simulate rays between the position of each radar sensor and corresponding lidar reflections. We then derive observability weights by calculating the density of simulated rays per cell relative to the amount of rays that could be present in a cell if they were not blocked by obstacles. As a result, we weight the loss values for cells with high observability with values close to 1, while occluded cells only marginally affect the training due to their loss weighting close to 0.
To test the ability of our network to generalize on unseen data, we split our data into a training (21776 scenes) and a test set (9294 scenes).
§.§ Setting
We trained our models utilizing Adam optimizer with a learning rate of 5 x 10^-4 and a batch size of four. For networks that are equipped with parameterized Gaussian distributions on the weights we applied the loss function defined in equation <ref>, other networks are trained by using the Cross Entropy Loss.
We compare the performance of our network on the semantic segmentation task by calculating the Intersection over Union (IoU) for each of the classes free, occupied, moving object and unknown and then average over all classes to obtain the mean IoU value (mIoU). Since we only want to consider cells for the evaluation that can actually be observed by the radar sensor, we only calculate the IoU including cells with an observability weight greater than zero. We trained each network for 30 epochs. To generate representative results, we repeated the training five times for each approach and use the mean value over all five training results respectively for evaluation. No data augmentation was applied.
§.§ Network Variants
As a baseline, we first train our model from Figure <ref> in a pure deterministic implementation, i.e. without utilizing probability distributions on the weights. Per-class mIoU metrics can be obtained from Table <ref>.
To capture epistemic uncertainty, we then replace each weight in the deterministic network with Gaussian distributions as described in section <ref>. As a result, we receive a network with distributions q(w | θ) on the weights that are parameterized by a mean μ and standard deviation σ value, each. By training this network with the loss function defined in equation <ref>, we approximate the posterior probability distribution p(w | X, Y). As shown in Table <ref>, this implementation leads to a slightly increased performance of the network measured by mIoU. We attribute this increased performance to the regulatory effect of the probabilistic weights that is caused by the KL divergence loss between the prior p(w) and the approximated posterior q(w | θ).
The implementation of a fully probabilistic network, however, comes at the expense of almost doubling the amount of network parameters without significantly increasing its capacity. The resulting cost increase in terms of computing power and memory is undesired in most domains radar systems are applied. We therefore introduce a hybrid network version that consists predominantly of deterministic network weights and uses parameterized distributions on the weights more deliberately to determine model uncertainty. This model extracts highly descriptive patterns from input data in a deterministic manner (i.e. all weights between I and II in Figure <ref>). The resulting features in II are then mapped onto the output cells by utilizing Gaussian distributions for all weights between II and III that are able to capture epistemic uncertainties in the convolution operations. We believe that the model uncertainty of the front part can be compensated and reflected to a large extent by restricting the network to use parameterized distributions in the last layers of the network. Thus, parameters μ and σ that were used to parameterize the Gaussian distributions to capture model uncertainties in the henceforth deterministic layers can be reduced to a single parameter per weight again. As stated in Table <ref>, a comparable performance of this hybrid network version is achieved by only marginally increasing the amount of parameters compared to the purely deterministic baseline. This hybrid network can then be used similarly to the fully probabilistic approach to capture epistemic and aleatoric uncertainties as described in section <ref>.
§.§ Qualitative Results
Visual representations of the results from the hybrid deterministic, probabilistic network as described in the previous section can be obtained from Figure <ref>. The first scene, depicted in the first row, motivates capturing of uncertainties in a network prediction to avoid missing out various objects on areas that are predicted as drivable, i.e. indicating possible false positive free cells. In Figure <ref> two bollards can be observed on the left side of the vehicle that are not present in the GT as shown in Figure <ref>. Since the GT is derived from concatenated lidar frames, missing out small objects can be caused by a low scan occupancy of lidar reflections on these objects due to their size. Accordingly, the plain semantic segmentation output which is depicted in the third column of the same figure shows that the network will learn to interpret the radar reflections resulting from these bollards as free areas. This can potentially lead to fatal crashes in case an autonomous system operates on this environment perception. This hazard can be resolved by estimating uncertainties in the network prediction as depicted in the last two columns of Figure <ref>.
In particular, we observe an increased epistemic uncertainty in the prediction of free cells in the vicinity of the bollards. This shows that although the network predicts the desired class according to the GT, it is able to express its lack of confidence in processing input data that reveals unusual properties for a free area. An increased epistemic uncertainty indicates that the network was not trained to predict free cells based on patterns that are similar to those of the detected bollards. Aleatoric uncertainty is less pronounced in this situation, which underscores our assumption that the uncertainty related to these unrecognized bollards arises from a lack of knowledge in the network parameters rather than noise in the data.
In each scene from Figure <ref>, our approach particularly determines an increased epistemic uncertainty for cells that cannot be observed by the radar sensor, for example at the edges of the scene. This effect can be attributed to the cell-wise loss weighting, which, as described in Section <ref>, depends on the observability value of a cell.
Furthermore, all three scenes show that aleatoric and epistemic uncertainties occasionally occur jointly. From this observation we can derive that one uncertainty compensates the other to a certain degree. This property was, to our best knowledge, first mentioned in <cit.>.
§.§ Quantitative Results
Besides presenting a visual representation from Figure <ref> of the network outputs for the hybrid network structure, we evaluate the correlation between certainties of a prediction and the likelihood for a cell to be predicted correctly in the plots of Figure <ref>. For these diagrams we treat each cell prediction of the test set, composed of class probabilities together with epistemic or aleatoric uncertainty, individually. Again, we only consider cells that are visible as stated in section <ref>. We then define ten quantiles for both kinds of uncertainty, respectively. By calculating the precision of all predictions for each class that are above a certain quantile, we can deduce whether a high cell certainty corresponds to a high probability that a cell will be correctly predicted. Precision for each class is defined as the amount of correct classified predictions divided by the sum of correct classified predictions and false classified predictions. By capturing this correlation, Figure <ref> shows strictly monotonically increasing graphs for both epistemic and aleatoric uncertainty on a class basis.
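The quantile-based evaluation can be reproduced with a few lines of NumPy; array names and the number of quantiles are illustrative assumptions.

import numpy as np

def precision_vs_certainty(pred, gt, uncertainty, observable, cls, n_quantiles=10):
    """pred, gt: per-cell class maps; uncertainty: epistemic or aleatoric entropy;
    observable: boolean mask of cells with observability weight > 0."""
    mask = observable & (pred == cls)              # all cells predicted as `cls`
    u, correct = uncertainty[mask], (gt[mask] == cls)
    precisions = []
    for q in np.linspace(0.0, 0.9, n_quantiles):
        keep = u <= np.quantile(u, 1.0 - q)        # keep only the most certain cells
        precisions.append(correct[keep].mean())    # precision above this certainty level
    return precisions                              # expected to increase with q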
Based on the results from Figure <ref> we assume that our approach is able to successfully identify predictions on a cell level that are likely to be incorrect based on quantified uncertainties. Since this conclusion holds for both epistemic and aleatoric uncertainty, it proves that our approach is reliably able to predict both uncertainty due to an inadequate model and noisy data.
§.§ Comparison with MC Dropout methods
As stated in section <ref>, most previous approaches like Bayesian Segnet <cit.> build on MC Dropout methods <cit.>. These approaches capture epistemic uncertainty by implementing dropout layers at certain positions within the neural network that are activated during application of the algorithm. Therefore, subsequent weights are treated as Bernoulli distributions with a fixed probability. Since the probability of switching off a weight is not learned by backpropagation, these MC dropout approaches represent a simplification of BNNs as defined in section <ref>. Previous publications like <cit.><cit.><cit.>, however, show promising results based on MC Dropout methods to estimate epistemic uncertainty in a network. Bayesian Segnet <cit.> presents a central contribution to this topic. The authors equip Segnet <cit.> with dropout layers in various positions within the network and sample through the network during inference to extract model uncertainty for semantic segmentation tasks.
To relate our results to this approach, we implement Segnet as described in <cit.> in a shallow version to fit the model capacity to our baseline network from Figure <ref> in terms of trainable parameters. We apply the configuration of Bayesian Segnet that performed best as stated in <cit.> by placing a dropout layer in between encoder and decoder and then apply it for environment perception based on radar data. Results on our test set are depicted in Table <ref>. Compared to our Gaussian Weights network architecture, we observe a decrease in mIoU of approximately 3% for the Bayesian Segnet.
This difference in network performance, however, can also be related to differences in the model structures between Segnet and our implementation. For a more realistic comparison between MC Dropout and the parameterized Gaussian distributions to capture model uncertainties for the environmental perception task, we transfer our network structure described in Figure <ref> to an MC Dropout implementation. This is achieved by placing a MC Dropout layer in II of Figure <ref>. While Table <ref> shows that this implementation leads to an increased performance compared to Bayesian Segnet, learning parameterized Gaussian distributions on the weights of the network still shows a superior performance compared to MC Dropout of 2% in mIoU.
Besides increasing the performance on the semantic segmentation task, applying Gaussian distributions on the weights of the network to capture model uncertainty offers a variety of further advantages compared to MC Dropout methods. First, we are able to incorporate external knowledge about class-related epistemic uncertainties into the training by adjusting the prior distributions p(w) in equation <ref>. Furthermore, learning the standard deviation parameters σ for each weight individually enables the network to learn fine-grained model uncertainties. Figure <ref> depicts how our network utilizes the opportunity to learn a variety of uncertainties σ depending on its individual confidence in the weighting parameter based on the data. Weights that are frequently used and consistently optimized throughout the training are more likely to be parameterized by a low σ. In order to sufficiently account for epistemic uncertainty we therefore conclude that it is advantageous for the network to independently learn uncertainty on its weights in contrast to fixed Bernoulli distributions as applied in MC Dropout methods.
Furthermore, the approach of learned Gaussian distributions on the weights provides increased comprehensibility regarding relevance and reliability of individual network parameters. Weights that are parameterized by a high standard deviation σ are thus more likely to be interpreted as less reliable. Approaches such as active learning <cit.> or pruning <cit.> can leverage these insights to increase network performance and make networks more efficient.
§ SUMMARY
In this work, we implemented and evaluated a neural network architecture to perform environment perception based on radar data as a semantic segmentation task. We further defined weights in the network by parameterized Gaussian distributions that are able to capture model uncertainty to increase reliability and accuracy of network predictions for environment perception. We furthermore differentiate between model uncertainties (epistemic) and uncertainties resulting from noise in the data (aleatoric).
Parameterizing Gaussian distributions on the weights of a network, however, doubles the number of parameters that are utilized by the network. We therefore present a hybrid deterministic, probabilistic network structure that drastically reduces the number of parameters while retaining its capability to capture model uncertainties.
|
http://arxiv.org/abs/2306.01699v1
|
20230602171820
|
Affinity Clustering Framework for Data Debiasing Using Pairwise Distribution Discrepancy
|
[
"Siamak Ghodsi",
"Eirini Ntoutsi"
] |
cs.LG
|
[
"cs.LG",
"cs.AI",
"stat.ML"
] |
2023
Copyright for this paper by its authors.
Use permitted under Creative Commons License Attribution 4.0
International (CC BY 4.0).
EWAF'23: European Workshop on Algorithmic Fairness,
June 07–09, 2023, Winterthur, Switzerland
Siamak Ghodsi (ORCID: 0000-0002-3306-4233, [email protected], https://siamakghodsi.github.io/)
L3S Research Center, Leibniz Universität Hannover, Germany
Freie Universität Berlin, Dept. of Mathematics and Computer Science, Berlin, Germany
Eirini Ntoutsi (ORCID: 0000-0001-5729-1003, [email protected])
Research Institute CODE, Bundeswehr University Munich, Germany
Group imbalance, usually caused by insufficient or unrepresentative data collection procedures, is among the main reasons for the emergence of representation bias in datasets. Representation bias can exist with respect to different groups of one or more protected attributes and might lead to prejudicial and discriminatory outcomes toward certain groups of individuals if a learning model is trained on such biased data. In this paper, we propose MASC, a data augmentation approach based on affinity clustering of existing data in similar datasets. An arbitrary target dataset utilizes protected group instances of other neighboring datasets that lie in the same cluster, in order to balance out the cardinality of its non-protected and protected groups. To form clusters where datasets can share instances for protected-group augmentation, an affinity clustering pipeline is developed based on an affinity matrix. The formation of the affinity matrix relies on computing the discrepancy of distributions between each pair of datasets and translating these discrepancies into a symmetric pairwise similarity matrix. Furthermore, non-parametric spectral clustering is applied to the affinity matrix and the corresponding datasets are categorized into an optimal number of clusters automatically.
We perform a step-by-step experiment to demonstrate the proposed data augmentation procedure and to evaluate and discuss its performance. In addition, a comparison to other data augmentation methods before and after the augmentations is provided, along with a model performance analysis of each competitor compared to our method. In our experiments, bias is measured in a non-binary protected-attribute setup w.r.t. the racial group distribution for two separate minority groups in comparison with the majority group before and after debiasing. Empirical results imply that augmenting underrepresented groups with real (genuine) data from similar contexts can effectively debias the target datasets, comparably to existing data augmentation strategies.
Distribution Shift, Affinity Clustering, Bias & Fairness, Maximum Mean Discrepancy, Data Debiasing, Data Augmentation
Affinity Clustering Framework for Data Debiasing Using Pairwise Distribution Discrepancy
July 31, 2023
========================================================================================
§ INTRODUCTION
Recent years have brought extraordinary advances in the field of Artificial Intelligence (AI) such that now AI-based technologies replace humans at many critical decision points, such as who will get a loan <cit.> and who will get hired for a job <cit.>.
There are clear benefits to algorithmic decision-making; unlike people, machines do not become tired or bored <cit.>, and can take into account orders of magnitude with more factors than people can. However, like people, data-driven algorithms are vulnerable to biases that render their decisions “unfair”. In automated decision-making, fairness is the absence of any prejudice or favoritism toward an individual or a group based on their inherent or acquired protected attributes such as `race' or `gender'. Thus, an unfair algorithm is one whose decisions are skewed toward a particular group of people.
One of the leading causes of unfair automated decisions in many real-world scenarios is due to unrepresentative, insufficient, or biased data fed to the learning algorithm <cit.>. Consequently, such biases can lead to certain discriminatory and prejudicial decisions harming sensitive groups e.g. racial/gender minorities in practice.
To overcome this issue, in this paper, we propose a mechanism for Minority Augmentation of biased datasets coming from separate but similar sources of data (described in the same feature space) through a Spectral Clustering scheme (MASC). The method proposes a way to augment underrepresented minority groups of an arbitrary task (hereafter we use the terms dataset and task interchangeably) by increasing their instances from a subset of contextually similar datasets that belong to the same cluster. Our proposed method performs an affinity clustering based on distribution discrepancy (that is used as a distance measure) among tasks to group similar tasks into a pre-defined number of clusters. Within each cluster, any member dataset can use instances shared by neighboring (mutually most similar) tasks to augment their underrepresented groups as compensation for group cardinality difference (that leads to representation and imbalance bias) according to a protected attribute.
Our main contributions can be summarized as follows:
* A new data augmentation framework for data debiasing towards statistical balancing between non-protected and protected group(s) based on most similar neighbors.
* Utilizing distribution shift metrics to quantify the pairwise discrepancy between different datasets/joint distributions.
* A spectral clustering framework to group similar datasets based on the discrepancy between the joint distributions of these datasets.
* Clustering into an optimal number of clusters using a graph-theoretic heuristic, known as the “Eigen-gap” or “Spectral-gap”, to avoid parameter selection and thus avoid any additional bias in the pipeline (a minimal sketch of this heuristic is given after the list).
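As referenced in the last item, one way to apply the eigen-gap heuristic to a pairwise affinity matrix is sketched below; the use of the normalized Laplacian and the cap on candidate cluster counts are common choices assumed here, not prescribed by the paper.

import numpy as np
from scipy.sparse.csgraph import laplacian

def eigengap_num_clusters(affinity, max_k=10):
    """Choose k as the position of the largest gap between consecutive eigenvalues
    of the normalized graph Laplacian built from the dataset-affinity matrix."""
    L = laplacian(affinity, normed=True)
    eigvals = np.sort(np.linalg.eigvalsh(L))[:max_k]
    gaps = np.diff(eigvals)
    return int(np.argmax(gaps)) + 1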
The rest of the paper is organized as follows:
Preliminaries, related works, and motivation are presented in Section <ref>. In Section <ref>, we present the proposed data augmentation pipeline. Experimental evaluation results are provided in Section <ref>, including an intuitive example of applying the proposed method. Finally, Section <ref> concludes this work, discusses its limitations and points out to future directions.
§ PRELIMINARIES AND RELATED WORKS
In this section, the necessary theoretical background and a brief literature review of these necessary notions are discussed.
§.§ Distribution Shifts
Distribution shift <cit.> is a broad topic studying how test data can differ from training data and how such differences affect model performance. There are several possible causes for dataset shift, of which two are deemed the most important: sample selection bias and non-stationary environments, according to <cit.>. The motivation for referring to notions of data distribution shift in our paper is to utilize measures of distribution shift that provide practical tools as well as rich theoretical backgrounds that enable us to quantify similarity (and/or distance) among pairs of datasets, which we will later use for data debiasing. Next, we look at formal definitions and different types of distribution shifts.
If we consider a dataset (X,Y) to be an independent and identically distributed (i.i.d.) set of instances drawn randomly from an unknown continuous probability density function, then a classification problem is defined by a joint distribution P(X,Y) of features a.k.a. covariates X and target variables Y <cit.>. According to the Bayesian Decision Theory <cit.>, a classification can be described either by the prior probabilities of the classes P(Y) and the class conditional probability density functions P(X|Y) for all classes Y = 1,…, c where c is the number of classes, or by the covariate probabilities P(X) and conditional probability density functions P(Y|X). Thus, the joint distribution P(X,Y) can be decomposed in both of the following forms:
P(Y|X) = P(Y)P(X|Y)/P(X),
P(X|Y) = P(X)P(Y|X)/P(Y)
where P(X)=∑_Y=1^c P(Y)P(X|Y) and similarly P(Y)=∑_X=1^c P(X)P(Y|X) in P(Y|X) and P(X|Y) classification problems respectively. The two forms of problem formulation will be formalized as Y→ X and X→ Y (pronounced as Y given X) respectively in the rest of this paper.
The literature on distribution shift detection and adaptive learning domain indicates that there are three types of distribution shifts <cit.>:
* Covariate shift appears only in X→ Y problems when the probability of input features P(X) changes, but the decision boundary defining the relationship between covariates and target labels P(Y|X) remains the same. In other words, the distribution of the input changes, but the conditional probability of a label given an input remains the same. These shifts are known as “virtual shifts".
* Prior probability or target shift appears only in Y→ X problems when the probability of target labels P(Y) changes but P(X|Y) remains the same. For example, consider the case when the output distribution changes but for a given output, the input distribution stays the same.
* Concept drift basically can appear in both types of problems namely in problems of type X→ Y where the probability of P(Y|X) changes between train and test data or in Y→ X problems where P(X|Y) changes. Concept drift happens when the input distribution remains the same between the two datasets but the conditional distribution of the output given an input changes. In other words, the decision boundary defining the relationship between covariates and labels changes.
§.§ Quantifying Distribution Shift
We are interested in measuring the distribution discrepancy between two datasets X∈ℝ^n × d and Z∈ℝ^m × d defined over the same space of d features and containing arbitrary numbers of samples.
Discrepancy between two datasets can be due to differences in their feature and label distributions, even when the conditional probability of labels given an input remains the same.
Since our goal is to cluster similar distributions so that a test dataset can be augmented, and to keep the problem general, we assume that target labels are not available and therefore do not use prior probability information for shift quantification. As a result, we rely only on covariate distribution similarities. Moreover, we assume that the attribute spaces of the different tasks match, i.e., all datasets have the same number of features with the same ranges of values.
One of the most widely used measures for quantifying pairwise distribution differences is the Kullback-Leibler (KL) divergence <cit.>. KL divergence has appealing theoretical properties, but it is not a metric: it is not symmetric (KL(P, Q)≠ KL(Q, P) for probability distributions P and Q of the covariate sets X and Z, respectively) and it does not satisfy the triangle inequality <cit.>. A symmetrized variant of KL divergence is the Jensen-Shannon divergence <cit.>, whose square root is a proper metric; still, as with all KL-based measures, it is sensitive to the sample size and requires both datasets to have the same cardinality.
Another well-known metric for measuring the distance between two distributions is the Maximum Mean Discrepancy (MMD for short) <cit.>. It is a multi-variate non-parametric statistic calculating the maximum deviation in the expectation of a function evaluated on each of the random variables, taken over a reproducing kernel Hilbert space (RKHS). MMD can equivalently be written as the L2-norm of the difference between distribution mean feature embeddings in the RKHS. In contrast to KL-Divergence, MMD is not sensitive to the number of instances and can be highly scalable to any arbitrary number of instances for each of the distributions depending on the kernel function employed for its calculation.
The MMD between two data distributions X∼ P and Z∼ Q is given by:
MMD (P,Q ) = ‖ μ_P - μ_Q ‖^2 _ℋ
where μ_P is the kernel mean embedding of X, estimated as μ_P = 1/n∑_i=1^nϕ(x_i), and similarly μ_Q is the kernel mean embedding of Z, with ϕ: X→ℋ a feature map embedding the data into the Hilbert space ℋ. Then Eq. <ref> can be rewritten as:
MMD (P,Q ) = ‖ 1/n∑_i=1^nϕ(x_i) - 1/m∑_i=1^mϕ(z_i) ‖^2 _ℋ
The inner product (indicated by ⟨ ∙ ⟩ ) of feature means of X∼ P and Z∼ Q can be written in terms of the kernel function such that:
⟨ μ_P (ϕ (X )), μ_Q (ϕ (Z )) ⟩_ℋ = E_P,Q [ ⟨ ϕ (X ), ϕ (Z ) ⟩_ℋ ] = E_P,Q [ k(X, Z ) ]
Substituting Eq. <ref> into Eq. <ref> we can rewrite it such that:
MMD (P,Q ) = E_P [ k(X, X ) ] - 2 E_P,Q [ k(X, Z ) ] + E_Q [ k(Z, Z ) ]
Finally, expanding the Eq. <ref>, the two sample MMD-test can be calculated by:
MMD (X, Z) = 1/(n (n-1))∑_i∑_j ≠ i k(x_i, x_j) - 2/(n m)∑_i∑_j k(x_i, z_j)
+1/(m (m-1))∑_i∑_j ≠ i k(z_i, z_j)
In <cit.>, the linear-time statistic is recommended when the datasets are sufficiently large. Since our sample sizes are large enough, we use the linear kernel for the MMD calculations in Eq. <ref> in our experiments.
To avoid scale differences, it is good practice to normalize the resulting values to the [0,1] range.
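As an illustration, the following minimal NumPy sketch (the function names and structure are ours, not the paper's implementation) shows how the unbiased two-sample MMD statistic of Eq. <ref> with a linear kernel could be computed, and how the pairwise distances between r datasets could be collected and normalized to [0,1]:

import numpy as np

def mmd_linear(X, Z):
    """Unbiased two-sample MMD estimate with a linear kernel k(a, b) = a.b
    between X (n x d samples from P) and Z (m x d samples from Q)."""
    Kxx, Kzz, Kxz = X @ X.T, Z @ Z.T, X @ Z.T
    n, m = X.shape[0], Z.shape[0]
    term_x = (Kxx.sum() - np.trace(Kxx)) / (n * (n - 1))  # within-X terms, diagonal excluded
    term_z = (Kzz.sum() - np.trace(Kzz)) / (m * (m - 1))  # within-Z terms, diagonal excluded
    return term_x - 2.0 * Kxz.mean() + term_z

def pairwise_mmd_matrix(datasets):
    """Symmetric r x r matrix of pairwise MMD values, scaled to [0, 1]."""
    r = len(datasets)
    W = np.zeros((r, r))
    for i in range(r):
        for j in range(i + 1, r):
            W[i, j] = W[j, i] = mmd_linear(datasets[i], datasets[j])
    W = np.maximum(W, 0.0)  # the unbiased estimate can be slightly negative
    return W / W.max() if W.max() > 0 else W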
§ PROPOSED METHOD
In this section, the proposed MASC method is described in four steps. A procedural overview of these steps is provided in Algorithm 1. Before detailing each step, an overall overview of the process is given in the following.
Assume r biased datasets D_all={ X_1 ∪X_2 ∪…∪X_r } corresponding to r different tasks. For instance, these datasets could belong to different branches of a franchised hypermarket, to civil registration offices in different cities (or states), or to many other similar settings. Our goal is to find a clustering of these datasets based on a similarity score such that, within each cluster, tasks can share their instances. This way, an arbitrary dataset X_b ⊂D_all that is biased by over-representing a majority[The terms (majority, non-protected) and (minority, protected) are used interchangeably in this paper.] group w.r.t. a protected attribute can borrow instances of minority group(s) from its neighboring tasks and construct an augmented, unbiased training set.
For the clustering procedure, a spectral clustering algorithm is utilized that identifies the optimal number of clusters automatically based on the eigengap (spectral gap) heuristic introduced in <cit.>. To perform the clustering step, we first need to construct an affinity matrix from the pairwise distances obtained with the MMD metric.
§.§ Affinity Matrix Computation
The first step in the proposed method is to compute the pairwise distance (or discrepancy) between each pair of datasets X_i ⊂D_all and X_j ⊂D_all using Eq. <ref>. The distances are collected into a symmetric matrix of pairwise distances W∈ℝ^r× r whose diagonal is all zeros. An intuitive way to convert a pairwise distance matrix into an affinity matrix is to apply a Radial Basis Function (RBF), also known as the Gaussian kernel <cit.>:
A (w_i, w_j ) = exp(-γ ‖ w_i-w_j ‖^2 ) if i ≠ j, and A (w_i, w_j ) = 0 otherwise
where w_i and w_j are two entries of the distance matrix W. Eq. <ref> results in a weighted, undirected, symmetric affinity matrix A with zero diagonal elements, whose weights are Gaussian functions of the pairwise distances.
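A possible NumPy reading of this step is sketched below; here the Gaussian kernel is applied directly to each pairwise distance, which is one common interpretation of Eq. <ref>, and the function name and the default γ are our assumptions:

import numpy as np

def affinity_from_distances(W, gamma=1.0):
    """Convert a symmetric r x r pairwise-distance matrix W into an RBF
    affinity matrix with a zero diagonal."""
    A = np.exp(-gamma * (W ** 2))  # Gaussian function of each pairwise distance
    np.fill_diagonal(A, 0.0)       # A(i, i) = 0 by definition
    return A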
§.§ The Optimal Number of Clusters k
In order to perform spectral clustering on the affinity matrix, we need to calculate the unnormalized graph Laplacian <cit.> L = D - A, where D with D_i,i=∑_j=1^rA_i,j is the diagonal degree matrix of the affinity matrix A. The graph Laplacian is key to spectral clustering; its eigenvalues and eigenvectors reveal many properties of the structure of a graph.
According to perturbation theory, an optimal number of clusters k for a dataset can be obtained by identifying the eigengap of the graph Laplacian, i.e., the largest difference between consecutive eigenvalues <cit.>. Computing the eigenvalues of the Laplacian matrix and finding its biggest gap thus reveals the optimal number of clusters, which avoids the difficult choice of the cluster-number parameter. Similar to the instructions in step 5 of <cit.>, we perform a Singular Value Decomposition (SVD) to calculate the eigenvalues of the Laplacian matrix L:
L = U Σ V^T
where U, V are unitary matrices called left and right singular matrices, respectively containing eigenvectors corresponding to eigenvalues in Σ. Next, we create an eigengap vector e using the eigenvalues from Σ in Eq. <ref> as follows:
e = [λ_2-λ_1, λ_3-λ_2, …, λ_l-λ_(l-1)]
where λ_k is the k-th eigenvalue sorted in ascending order. Note that if (λ_k - λ_(k-1)) is the largest difference, i.e., the eigengap according to Eq. <ref>, then the index k is the optimal number of clusters.
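The following NumPy sketch (our own illustrative code, following the paper's convention that the gap λ_k - λ_(k-1) points to k clusters) shows how the Laplacian eigenvalues and the eigengap could be used to pick k:

import numpy as np

def optimal_k_by_eigengap(A, max_k=10):
    """Pick the number of clusters from the eigengap of the unnormalized
    graph Laplacian L = D - A (A is the r x r affinity matrix)."""
    D = np.diag(A.sum(axis=1))       # diagonal degree matrix
    L = D - A                        # unnormalized graph Laplacian
    eigvals = np.linalg.eigvalsh(L)  # eigenvalues in ascending order
    gaps = np.diff(eigvals[:max_k])  # e = [lam2-lam1, lam3-lam2, ...]
    k = int(np.argmax(gaps)) + 2     # gap lam_k - lam_(k-1) -> k clusters
    return k, eigvals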
§.§ Spectral Clustering
After obtaining the desired number of clusters k in Section <ref>, one more step is needed to finally partition the affinity matrix. In this step, we take the k eigenvectors u_1,…, u_k corresponding to the k smallest eigenvalues of the Laplacian and stack them as columns of a new matrix U∈ℝ^r× k such that:
U = [ u_1, …, u_k ] ≙min_1… kλ_k
where ≙ stands for “corresponding to”. Then, a k-means clustering <cit.> is performed on the rows of matrix U, which is equivalent to a clustering of the r datasets:
C = Kmeans(U) = { C_1 ∪C_2 ∪…∪C_k }, with k≤ r and C≡D_all
where the ≡ sign represents the equivalence of its operands. Note that, in practice, spectral clustering is often followed by another clustering algorithm, such as k-means, to finalize the clustering task. The main property of spectral clustering is that it transforms the representations of the data points into the indicator space, in which the cluster characteristics become more prominent, and thus passes more meaningful information to the subsequent clustering algorithm.
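A compact sketch of this step with NumPy and scikit-learn (illustrative only; function names and defaults are ours) could look as follows:

import numpy as np
from sklearn.cluster import KMeans

def spectral_clusters(A, k, random_state=0):
    """Embed the r datasets with the k eigenvectors of L = D - A belonging to
    the k smallest eigenvalues, then run k-means on the rows of the embedding."""
    D = np.diag(A.sum(axis=1))
    L = D - A
    _, eigvecs = np.linalg.eigh(L)  # columns ordered by ascending eigenvalue
    U = eigvecs[:, :k]              # r x k spectral embedding
    kmeans = KMeans(n_clusters=k, n_init=10, random_state=random_state)
    return kmeans.fit_predict(U)    # cluster label for each dataset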
§.§ Data Augmentation Within Clusters
Now that the set of input tasks/datasets D_all is clustered into k partitions according to Eq. <ref>, the data augmentation process for minority group(s) can be fulfilled. If cluster c consists of t datasets:
C_c = { X_1 ∪…∪X_t } where t≤ r & c∈{ 1,…,k }
Initially, we create a pool of instances in cluster C_c by collecting all the instances from each dataset belonging to the cluster. The number of instances in this cluster, | C_c | = N (where | ∙ | denotes cardinality), can be written as the sum of the numbers of instances belonging to each of the p protected groups, ∑_i=1^p N_i= N. Given a protected attribute S={ S_1, …, S_p } with p groups and knowing | X | = n, the augmentation process for a task X⊂D_all is straightforward and based on protected-group cardinalities. We calculate the cardinality of each group, i.e., the number of instances belonging to that group, such that:
| X_ S_i | = n_i for i ∈{ 1,…,p }
where ∑_i=1^p n_i= n. Next, we identify the biggest group and designate it as the majority (non-protected) group via max ( n_1,…,n_p ) = n_l. Ideally, every minority subgroup g would be balanced to have a cardinality as large as the majority group l, so that | X_ S_g | = n_l; hence, every protected group needs to be augmented by n_l - n_g instances. However, this is only possible if the pool of shared protected-group instances contains that many instances; otherwise, we augment with as many instances as the shared pool provides. Thus, the augmented version of dataset X is given by:
X ← X∪⋃_j=1^(n_l - n_g)C_S_g(j) if N_g > n_l, and X ← X∪C_S_g otherwise
where C_S_g = { C_c | S=S_g }. Note that C_c and X_b are written as C and X, respectively, to keep the notation simple.
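As an illustration, a simplified pandas sketch of this within-cluster augmentation is given below; the DataFrame layout, the column name protected_col, and the random sampling of shared instances are our assumptions rather than the paper's exact implementation:

import pandas as pd

def augment_within_cluster(target_df, cluster_dfs, protected_col, seed=0):
    """Borrow minority-group instances from the other datasets in the same
    cluster until each protected group approaches the majority-group size."""
    pool = pd.concat([df for df in cluster_dfs if df is not target_df],
                     ignore_index=True)
    counts = target_df[protected_col].value_counts()
    majority_size = counts.max()
    parts = [target_df]
    for group, n_g in counts.items():
        needed = majority_size - n_g  # n_l - n_g instances to borrow
        if needed <= 0:
            continue
        shared = pool[pool[protected_col] == group]
        # take at most `needed` shared instances; fewer if the pool is smaller
        parts.append(shared.sample(n=min(needed, len(shared)),
                                   random_state=seed))
    return pd.concat(parts, ignore_index=True)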
§ EXPERIMENTAL RESULTS
In order to evaluate the effectiveness of the proposed MASC method, this section analyzes experimental results on a number of real-world datasets. The section is organized as follows: first, the datasets employed are described; next, the evaluation measures and the methods used for comparison are introduced; finally, the experimental results and a discussion of them are provided.
§.§ Datasets
To evaluate MASC's performance in addressing group imbalance and representation bias, we used the recently released US Census datasets <cit.>, which comprise a reconstruction of the popular Adult dataset <cit.>. These datasets provide a suitable benchmark with 52 datasets representing different states, effectively capturing the problem of group imbalance between states with varying numbers of instances but similar feature spaces.
The datasets <cit.> include census information on the demographics, economics, and working status of US citizens. Spanning over 20 years, they allow research on temporal and spatial distribution shifts and incorporate various sources of statistical bias. As already mentioned in Section <ref>, in this study we assume that the conditional probability of labels given specific inputs remains constant. Therefore, we focus on the latest release available at the time of submission, the year 2019, and examine the spatial context to explore the connection between covariate shifts and bias.
The feature space consists of 286 features, of which only 10 are deemed relevant <cit.>. The target variable, Income Value, is transformed into a binary vector to predict whether an individual earns more than 50K: Income ∈{≤50K, >50K}, the positive class being “>50K”. We selected “Race” as the protected attribute due to the challenge it poses compared to gender or age, given the highly imbalanced distribution of racial groups across states. The “Race” attribute has 9 categories, but because seven of them are very scarcely represented, usually comprising less than 1% of the instances in a dataset, we aggregate them into a larger group called “Other”. Thus, the categories in our experiments are aggregated into 3 groups: White, Black, and Other. Categorical features are transformed into numerical features, and all features are standardized using their mean (μ) and standard deviation (σ) values, such that each feature is represented by z=(x-μ)/σ and has zero mean and unit variance.
Refer to Table <ref> for detailed information on the filtered (cleaned) datasets, including racial distribution, class imbalance ratio, name abbreviation conventions, and other details. The table summarizes information for 5 out of 51 datasets. The intuition behind this specific selection of states will be addressed in detail in Section <ref>. The datasets exhibit significant racial bias, with the White group representing the majority (also referred to as non-protected) in all 5 datasets.
§.§ Metrics
In this paper, we adopt five measures in total. We use accuracy <cit.> to analyze the models' predictive performance, along with four measures for bias and fairness quantification: Disparate Impact <cit.>, Statistical (or Demographic) Parity <cit.>, Equalized Odds <cit.>, and a new proportionality metric that we introduce, the Group Distribution Ratio, for quantifying bias in datasets before and after debiasing. The measures that take the model outcome into account, i.e., those that involve model training and prediction (accuracy and Equalized Odds), are not relevant for the first part of the experiments. Given a dataset X={D, S, Y }, with regular features D, a protected feature S (i.e., Race), and a binary target class, the disparate impact (DI for short) of the dataset is calculated as follows:
DI = P ( Y=1 | S=0 )/P ( Y=1 | S=1 )
which calculates the ratio of the probability of a positive outcome for members of the protected group to the probability of a positive outcome for the non-protected group. DI takes values in (0,2), with 1 being the best value, i.e., no bias; values approaching 0 or 2 indicate maximum bias toward one group or the other, respectively.
The statistical parity measure (SP for short) computes a similar quantity, but expresses the disparity as a difference instead of a ratio:
SP = P ( Y=1 | S=0 ) - P ( Y=1 | S=1 )
Since we consider the two protected groups “Black” and “Other” in our experiments, each measure is calculated twice: once for each of the two protected groups against the non-protected group “White”. Hence, in our analysis, S∈{ 0,1,2 }. SP takes values in the range SP∈(-1,1), with 0 as the best possible value, implying zero bias.
The Equalized Odds measure (Eq.Odds for short) calculates the difference in prediction errors between the protected and non-protected groups for both classes as |δFPR| + |δFNR|, where δFNR is the difference in false negative rates and δFPR the difference in false positive rates, also known as Equal Opportunity and Predictive Equality, respectively. δFNR measures the difference between the protected and non-protected groups in the probability that subjects belonging to the positive class receive a negative prediction; similarly, δFPR measures the difference between the two groups in the probability that subjects belonging to the negative class receive a positive prediction. Eq.Odds is thus formulated as follows:
Eq.Odds = | P(Ŷ = 0|Y = 1, g = w) - P(Ŷ=0|Y = 1, g = b/o)| +
| P(Ŷ= 1|Y = 0, g = w) - P(Ŷ = 1|Y = 0, g = b/o)|
where Ŷ is the predicted label, Y is the actual label and g ∈ G = {w, b, o} is the protected attribute. The value range for each of δFNR and δFPR is [0,1], where 0 stands for a classifier satisfying perfectly the measure with no discrimination and 1 stands for maximum discrimination. Thus, Eq.Odds can range between [0,2]. In this study, w is taken as the majority (non-protected) group and b and o are minority (protected) groups.
Finally, we introduce a group-proportional measure: the group distribution ratio (GR for short) quantifies group imbalance as the proportion of instances belonging to each protected group, or to the non-protected group, relative to the total number of instances in the dataset. Following the definition of the protected attribute and its member groups in Section <ref>, the group distribution ratio for a protected group g is obtained as follows:
GR_g = P ( X | S=S_g ) = | X_S_g |/| X | = n_g/∑_i=1^p n_i
where the denominator of the fraction in Eq. <ref> is the sum of the cardinalities of all subgroups of task X, i.e., the total number of its instances ∑_i=1^p n_i = n. Clearly, the cumulative probability of all subgroups, ∑_i=1^p P ( X | S = S_i ), equals 1. A dataset is group balanced w.r.t. a protected attribute if Eq. <ref> is equal for every subgroup. In other words, the optimal balance is given by GR^∗ = 1/p, which implies groups with the same number of instances. As a result, for a protected attribute with two subgroups, the optimal group distribution ratio is GR^∗=1/2 and, similarly, for a protected attribute with three subgroups, GR^∗=1/3.
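The dataset-level measures DI, SP, and GR can be computed directly from the labels and the protected attribute; the following NumPy sketch (our illustration, with the group encodings as assumptions) mirrors the equations above:

import numpy as np

def dataset_bias_measures(y, s, protected_group, non_protected_group):
    """DI and SP for one protected group vs. the non-protected group, plus the
    group distribution ratio GR for every group; y is the binary target and
    s the protected attribute."""
    y, s = np.asarray(y), np.asarray(s)
    p_pos_prot = y[s == protected_group].mean()         # P(Y=1 | S=protected)
    p_pos_nonprot = y[s == non_protected_group].mean()  # P(Y=1 | S=non-protected)
    di = p_pos_prot / p_pos_nonprot                     # disparate impact
    sp = p_pos_prot - p_pos_nonprot                     # statistical parity
    gr = {g: float(np.mean(s == g)) for g in np.unique(s)}  # n_g / n per group
    return di, sp, gr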
§.§ Competitors
We compare our MASC augmentation method with four alternatives: the original, untouched datasets and three other strategies. Specifically, we use a variation of SMOTE <cit.> that synthetically over-samples minority protected groups instead of the minority class, and, similarly, a variation of RUS <cit.> that randomly under-samples groups. In addition, we introduce a natural geographical-neighborhood augmentation that concatenates datasets within local clusters of geographical neighbors, based on the formal region categorization in <cit.>. All augmentation methods are also analyzed by feeding their outputs to a Logistic Regression classifier (LR for short).
Note that we implement a variation of SMOTE and RUS to over/under-sample based on the protected group distribution of the protected attribute such that: for SMOTE we over-sample both minority groups until their cardinality is as large as the majority group. For the RUS method, we under-sample the majority group and the bigger minority group until they contain as few samples as the smallest minority group.
§.§ Empirical Results
The experiments in this section compare the biases of the original datasets before and after the proposed data augmentation, as well as against the three other augmentation strategies mentioned in Section <ref>, using the measures introduced in Section <ref>. Following that, we compare the predictive performance and fairness of an LR classifier trained under the different augmentation strategies to see how each augmentation method affects model performance[The source code of the proposed MASC and the comparisons can be found at: https://github.com/SiamakGhodsi/MASC.gitGithub/SiamakGhodsi/MASC]. To give an intuition about the step-wise procedure of the proposed MASC method, a demonstration on the aforementioned US-Census datasets, following the steps in Section <ref>, is presented before the performance results are discussed.
§.§.§ A Demo implementation
First, an affinity matrix is generated according to steps 1-9 of Algorithm 1. Following that, based on lines 11-12 of Algorithm 1, the graph Laplacian and then its SVD decomposition are calculated from the affinity matrix in order to obtain the Laplacian eigenvalues and find the spectral eigengap, as in steps 13-14. According to spectral graph theory <cit.>, in an ideally shaped problem there exist k completely disconnected components that constitute a block-diagonal Laplacian matrix with k zero eigenvalues and k corresponding eigenvectors of ones. In this extreme case, the (k+1)-th eigenvalue, which is non-zero, marks a strict gap. This gap identifies the optimal number of connected components that can be clustered as highly similar objects. The eigengap heuristic is a useful guide to avoid parameter selection, although our problem, like the majority of real-world problems, does not produce such a well-formed block-diagonal Laplacian. Figure <ref> shows the first ten eigenvalues and the major eigengap. It indicates that our datasets can be near-optimally clustered into five categories.
We partition the obtained Affinity matrix into five clusters following instructions in Section <ref> and accordingly steps 16-17 of Algorithm <ref>. The clustering is illustrated in Figure <ref>. Each color represents a cluster.
Next, according to steps in Section <ref> and steps 18-26, we augment each of the input datasets using the shared protected-group instances from their neighbors in the same clusters. The results of the minority group(s) augmentation are summarized in Table <ref> and group distribution and GR scores are shown in Figure <ref>.
§.§.§ Results
MASC is applied to all input datasets and augments each task based on the cluster it belongs to and, therefore, the neighborhood instances it shares. Since the method indicates 5 clusters, as depicted in Figure <ref>, for the sake of readability we evaluate the results for 5 of the datasets, one chosen from each cluster: Montana (MT), Mississippi (MS), North-Dakota (ND), Colorado (CO), and Maryland (MD). The selection of states within each cluster is based on diversity; we chose states ranging from western and central regions to northern and easternmost states, which allows comparing the population composition of different regions of the US according to <cit.>.
Figure <ref> shows the GR values of the original datasets. Comparing them to the distributions in Figure <ref>, which are the results obtained by our method, the ability of our method to reduce group differences is apparent. The proposed method borrows instances for each minority group from similar states and perfectly balances four of the states, and Colorado to a very good extent. In Colorado's case, the number of minority-group instances borrowed from other states in the cluster is not sufficient to equalize the representation of the minority groups, but it still reduces the worst imbalance in the original dataset, measured as the difference between the GR values of the majority group and minority group 1, from 87.98%-2.56%≈85% to 49.16%-26.13%≈23%.
Table <ref> summarizes the evaluation of the five datasets based on the previously introduced measures DI, SP, and GR (see Section <ref> for details), comparing the original states with MASC, the geographical-neighborhood grouping, SMOTE, and RUS. Note that the geographical-neighborhood augmentation is abbreviated as Geo-nei in the table. The Maj, Min1, and Min2 notations correspond to the majority group (White) and the two minority groups (Black and Other), respectively. The table empirically shows that MASC alleviates group imbalance to a good extent for all the datasets, according to the GR column, and subsequently achieves good DI and SP rates compared to the original datasets. Moreover, in comparison to Geo-nei, our method performs better for all the states except for the Min1 group in Colorado, where it still achieves a much better balance of the protected groups but a slightly worse class distribution. Compared to SMOTE and RUS, our method performs comparably well in terms of GR values, but slightly worse w.r.t. the DI and SP metrics; this is because our method only balances group distributions and does not take the distribution of the target class into account. However, in terms of model performance, our method outperforms SMOTE and RUS for all the states w.r.t. the accuracy and Eq.Odds metrics, as we will see in the following. In addition, some technical issues and limitations may arise when using SMOTE and RUS, which will be discussed in more detail in Section <ref>.
Figure <ref> illustrates the performance of an LR model trained on the output of each augmentation method and tested on the corresponding data. Note that there are two legends: the first represents the three augmentations based on real (genuine) data (including our method MASC), and the other represents the synthetic augmentation methods. Figure <ref> shows the Eq.Odds values, where the purple bar, representing our method MASC, obtains the best results for three states (Montana, Mississippi, and North-Dakota) and the third-best results for the two other states (Colorado and Maryland). Interestingly, in contrast to Table <ref>, where SMOTE and RUS had better DI and SP results, the model-performance analysis shows that our method, along with the geographical-neighbors augmentation, outperforms SMOTE and RUS in all states for both metrics, accuracy and Eq.Odds (except for Mississippi, where the geographical-neighbors augmentation performs slightly worse than RUS). In Figure <ref>, where the accuracy results are compared, the same pattern is observed: MASC outperforms RUS and SMOTE, obtaining the best accuracy in four states (Montana, North-Dakota, Colorado, and Maryland) and the third-best accuracy for Mississippi.
§.§ Discussion
From an analytical perspective, although our method MASC appears statistically comparable to or lower than SMOTE and RUS in terms of DI and SP in Table <ref>, it outperforms both of these methods in the model-performance results reported in Figure <ref>. The former is because we implement versions of SMOTE and RUS that statistically balance the protected groups; yet, w.r.t. model-performance measures, their augmentation is not comparable to augmentations based on real (genuine) data such as MASC and the geographical neighbors. Indeed, Figures <ref> and <ref> show that MASC and the geographical neighbors outperform the two synthetic augmentations in all cases (except one), which once more highlights the difference between augmentation with real (genuine) data and with synthetically generated data.
Moreover, there are ethical and technical issues with using SMOTE and RUS to correct protected-group imbalance. Starting with RUS: as Figure <ref> shows, only 0.36% and 1.35% of the population belong to minority group 1 (Black) in the states of Montana and North-Dakota, respectively. In the cleaned datasets, this amounts to no more than 20 and 60 instances, respectively. With such a small number of instances, it is very unlikely that any learning algorithm will produce reliable predictions on test data. This is also observed in Figure <ref>, where the Eq.Odds results of the RUS method always equal one, because it essentially predicts that all under-sampled data belong to the majority class. This lack of reliable performance may become even worse when the learned parameters are applied to out-of-distribution (OOD) data. An example of OOD is training on the augmented data (in our experiments, the 2019 US-Census datasets) and then applying the model to future data of the same state, e.g., from 2020 onward. We leave this as an open question for interested readers to test and analyze. Another open question concerns intersectional groups, where imbalance exists w.r.t. more than one attribute: how would SMOTE and RUS perform if, for example, gender and ethnicity were studied simultaneously? If only one instance of a Black female exists among the 20 minority samples in the Montana dataset, the algorithm can only learn to infer a single class label for this group of instances, which could lead to highly unreliable and deficient predictions on test data.
In the case of SMOTE, the minority group of 20 or 60 instances is over-sampled to generate hundreds of times more data. Such synthetically generated data are only applicable to this specific application because they have to be carefully tailored to it. This may explain the worse accuracy and Eq.Odds results despite the groups in the training data being balanced perfectly according to the GR, DI, and SP measures. Another limitation of SMOTE concerns data types: how would it work with categorical data? One has to define a multi-valued feature vector and statistically over-sample the outnumbered categories while they are encoded numerically, which results in severe performance deterioration because of the much larger search space. Our method, in contrast, is easily adaptable to categorical and other data types.
We would also like to highlight once again that this study only uses the 2019 data, so that conditions 2 and 3 of Section <ref> do not apply to our analysis. Future work could study settings where the distribution of the target class changes (condition 2) or where the decision boundary changes (condition 3), which can happen when analyzing different historical records for each state, e.g., comparing the 2014-2019 data of each state. It is also worth mentioning that there is a lack of similar datasets, especially from European countries, that could be provided for research; making such data available would open up space for more studies in this direction.
§ CONCLUSION
In this paper, we propose a spectral clustering-based methodology to tackle data representation and protected-attribute group-imbalance biases. The motivation for this pipeline is to let contextually similar but separate datasets, coming from similar sources, augment one another, providing unbiased or less biased training sets through instances shared by contextually similar neighboring datasets. Our MASC approach identifies an optimal number of clusters based on the inherent similarities of the input tasks and clusters them according to a robust and scalable MMD two-sample test, categorizing similar tasks based on their pairwise distribution discrepancies in a kernel-based affinity space. Experimental results on the New Adult datasets reveal the promising performance of the proposed MASC in dataset debiasing and its superior performance in improving the predictive accuracy and fairness of learning models trained on the augmented training sets it produces. Moreover, it is preferable to synthetic data augmentation methods such as SMOTE and RUS since it augments with genuine (real) existing data, in contrast to synthetic data, whose use often raises ethical concerns. In future work, we will study the effect of normalized spectral clustering on the size and shape of the produced clusters. We also plan to extend our analysis to temporal aspects of the datasets by allowing changes in the conditional probability of outcomes P(Y|X) in X→ Y problems across the yearly releases of the input datasets. Another interesting study would be to compare our method and Geo-nei with versions of SMOTE and RUS for multi-class or regression problems where the targets are multi-class or continuous.
This work has received funding from the European Union’s Horizon 2020 research and innovation programme under Marie Sklodowska-Curie Actions (grant agreement number 860630) for the project ‘’NoBIAS - Artificial Intelligence without Bias’’. This work reflects only the authors’ views and the European Research Executive Agency (REA) is not responsible for any use that may be made of the information it contains.
Interval Load Forecasting for Individual Households in the Presence of Electric Vehicle Charging
Raiden Skala, Mohamed Ahmed T. A. Elgalhud, Katarina Grolinger, Syed Mir (arXiv:2306.03010)
§ INTRODUCTION
Affordable and reliable sources of electricity enable the sustainable growth of strong economies and can improve the average person’s quality of life <cit.> by providing reliable access to appliances, medical equipment, communication, entertainment, and other devices. The dependence on power grids is increasing due to the continuous integration of novel electronic devices into every aspect of modern life <cit.>, as these devices rely on a dependable source of electricity. The mainstream consumer shift from traditional internal combustion engine (ICE) vehicles to electric vehicles (EVs) is set to further entrench reliance on access to electricity due to the increased demand for charging. While this transition can be a positive step in reducing carbon emissions, embracing EVs will shift transportation energy requirements from petroleum-based products to electric grids. Of specific interest to this paper is the demand created by charging EVs in residential households, which is frequently used due to its relative affordability and convenience.
As countries such as Canada plan to ban the sale of new ICE vehicles by 2035 <cit.>, preparations are required to ensure the success of this shift. These include actions such as installing charging stations, increasing electricity generation capacity, investing in battery technologies, and improving infrastructure throughout the grid to handle the higher loads required by EV charging. The capability of electricity distribution companies to accurately forecast the hourly electricity consumption of residential households that own EVs is instrumental in the transition to EVs, as it assists utility companies in anticipating and managing increased energy demand, planning sufficient capacity to meet expected demand, and ensuring grid stability. Failures in predictive ability pose a risk to the balance between electricity supply and demand, which can seriously threaten grid stability, human life, and overall economic interests. A loss of balance between supply and demand can lead to power outages, brownouts, and other disruptions. In turn, these electricity disruptions can severely disrupt critical infrastructures including transportation, communications, and financial systems, as well as essential services such as emergency response and healthcare.
There have been extensive efforts to create predictive load forecasting models using machine learning (ML) with historical energy consumption data collected by smart meters or similar technologies, often combined with meteorological information. In recent years, deep learning techniques, especially those based on Recurrent Neural Networks (RNNs), have been outperforming other approaches. While these studies achieved great accuracy for a variety of use cases, they do not specifically address short-term forecasting for individual households in the presence of EV charging <cit.>, which introduces challenges due to variations in charging patterns.
Moreover, most predictive models for load forecasting generate point predictions instead of an interval for their expected electricity demand <cit.>, which limits the usefulness of the forecast in decision-making. By providing only a single value for expected electricity demand, these models fail to convey the range of potential outcomes and the degree of uncertainty associated with each prediction. Furthermore, providing only a point forecast, without the range values, may not offer sufficient information for effective risk management. In contrast, interval forecasting approaches provide decision-makers with a more nuanced understanding of the possible outcomes, enabling them to make more informed and effective decisions. For example, considering the full range of possible outcomes, instead of a single value, allows the stakeholders to plan for different scenarios and better mitigate risks.
To address these drawbacks, this paper proposes a probabilistic interval forecasting approach for predicting the hourly electricity demand in households with EV charging. By using probabilistic methods, our approach generates a range of likely outcomes rather than a single-point estimate which provides a more comprehensive understanding of the potential effects of EV charging on household electricity demand, gives information about the uncertainty associated with the predicted value due to dynamic charging behaviors, and offers decision-makers a more complete picture of the forecasted demand. The interval predictions are generated with Long Short-Term Memory Bayesian Neural Networks (LSTM-BNNs). LSTM was chosen as it is well-suited for capturing temporal dependencies in data while BNN was added to estimate the probability distribution of expected values for interval predictions. LSTM-BNN was trained using historical household electricity consumption data and local temperature data. To assess the effectiveness of the proposed LSTM-BNN model, its performance, measured using four metrics, is compared to the performance of the standard point prediction LSTM model. Additionally, due to the impact of the COVID-19 pandemic on electricity consumption patterns, the point and interval models have been examined on two datasets: one with the lockdown period and one without. The results show that the accuracy greatly varies among households, but for each household, the proposed LSTM-BNN achieves similar accuracy to point forecasts while providing the advantage of prediction intervals.
The remainder of the paper is organized as follows: Section 2 provides background information on LSTM and BNN techniques and introduces the four common performance measurements used for gauging the effectiveness of regression models while Section 3 reviews related work on load forecasting and interval forecasting. The proposed LSTM-BNN interval forecasting approach is described in Section 4 followed by the evaluation presented in Section 5. Finally, Section 6 concludes the paper.
§ BACKGROUND
This section begins by introducing Long Short-Term Memory (LSTM) networks and Bayesian Neural Networks (BNNs), followed by a discussion of the four performance measures commonly used for assessing regression models.
§.§ Long Short-Term Memory Neural Network
Neural networks are a type of machine learning model inspired by the human brain: they use interconnected artificial neurons to learn and process information by mimicking the way biological neurons signal to one another <cit.>. A recurrent neural network (RNN) is a type of neural network designed to process sequential data by using internal memory and recurrent connections, allowing it to capture temporal dependencies and patterns in the data. A Long Short-Term Memory (LSTM) neural network model is similar to RNN models in that it can capture temporal relationships by using an internal memory mechanism to keep track of past inputs and selectively remember or forget certain information. The main difference between an LSTM and RNN model is that LSTM models have additional structures, such as gating mechanisms, that provide better control over the flow of gradients and help prevent the vanishing and exploding gradient problems that can occur in standard RNNs, making them more effective for modeling longer sequences of data. LSTM computation at time t is given as follows:
f_t = σ(W_fxx_t + W_fhh_t-1 + b_f)
i_t = σ(W_ixx_t + W_ihh_t-1 + b_i)
o_t = σ(W_oxx_t + W_ohh_t-1 + b_o)
C̅_t =φ(W_cxx_t + W_chh_t-1 + b_c)
C_t =f_t⊙ C_t-1 + i_t⊙C̅_t
h_t = o_t⊙φ(C_t)
Equations (1)–(3) depict the computation at the forget f_t, input i_t, and output o_t gates, respectively, while Equations (4)–(6) determine the candidate cell state C̅_t, the cell state C_t, and the hidden state h_t. The sigmoid (σ) and tanh (φ) functions help control exploding gradients by keeping values between zero and one, and between negative one and one, respectively. The current input x_t and the previous hidden state h_t-1 are the inputs received by the LSTM cell. The biases b_f, b_i, b_o, and b_c, the input weight matrices W_fx, W_ix, W_ox, and W_cx, and the hidden-state weight matrices W_fh, W_ih, W_oh, and W_ch of each LSTM cell are adjusted throughout the training process using backpropagation through time, with the goal of minimizing the loss between the predicted and true values. The symbol ⊙ denotes the elementwise (Hadamard) product of two matrices.
Due to its ability to capture temporal dependencies over long periods of time, the LSTM model has been very successful in many domains including load forecasting . For the same reason, we use the LSTM cells in the proposed LSTM-BNN interval forecasting approach.
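For illustration, a minimal PyTorch sketch of an LSTM point forecaster of the kind described above is given below; the layer sizes, window length, and feature set are assumptions on our part, not the paper's exact configuration:

import torch
import torch.nn as nn

class LSTMForecaster(nn.Module):
    """Maps a window of past feature vectors (e.g., load, temperature, time
    encodings) to the next-hour electricity consumption."""
    def __init__(self, n_features, hidden_size=64, num_layers=2, dropout=0.1):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, num_layers,
                            batch_first=True, dropout=dropout)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):                # x: (batch, window, n_features)
        out, _ = self.lstm(x)            # out: (batch, window, hidden_size)
        return self.head(out[:, -1, :])  # predict from the last time step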
§.§ Bayesian Neural Network
The Bayesian Neural Network (BNN) model <cit.> relies on Bayesian inference to determine the posterior predictive distribution with the ultimate goal of quantifying the uncertainty introduced by the models so as to explain the trustworthiness of the prediction. This is achieved by incorporating previous inputs X and outputs Y as well as model parameters ω in Bayes' theorem as follows:
P(ω|X, Y) = P(Y|X, ω)· P(ω)/P(Y|X)
Here, P() indicates the probabilities and P(·|·) are conditional probabilities.
By computing the integral of the full posterior distribution, given in (<ref>), multiple times using different samples from the model parameters, a distribution can be generated for a predicted value y_new using new inputs x_new:
P(y_new|x_new, X, Y) = ∫ P(y_new|x_new, ω)· P(ω|X,Y)dω
However, because computing the full posterior is computationally demanding for deep neural networks, alternative approaches are required to make Bayesian inference feasible in practice. Zhang and Mahadevan <cit.> demonstrated that keeping Monte Carlo dropout active while the network generates predictions is sufficient to approximate the posterior predictive distribution, as it minimizes the relative entropy between the approximate and true posterior distributions while remaining computationally feasible. Consequently, our approach takes advantage of BNNs and the dropout technique to generate interval load forecasts for households with EVs.
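A hedged sketch of this Monte Carlo dropout procedure in PyTorch is shown below (the sample count and the assumption that dropout is the model's only stochastic layer are ours):

import torch

def mc_dropout_predict(model, x, n_samples=100):
    """Approximate the posterior predictive distribution by keeping dropout
    active at inference time and sampling the network n_samples times."""
    model.train()  # keeps dropout layers stochastic during prediction
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.std(dim=0)  # predictive mean and spread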
§.§ Performance Metrics
The four prominent performance metrics used to evaluate the error between a prediction made by a machine learning model and the true value are: Mean Absolute Percent Error (MAPE), Mean Square Error (MSE), Root Mean Square Error (RMSE), and Mean Absolute Error (MAE). While there are other metrics specifically designed for evaluating probabilistic forecasts, we primarily use the aforementioned metrics as they allow us to compare point and interval forecasts. These metrics are calculated according to the following equations:
MAPE = 100%/N∑_i=1^N| y_i - ŷ_i|/y_i
MSE = 1/N∑_i=1^N (y_i - ŷ_i )^2
RMSE = √((1/N)∑_i=1^N(y_i - ŷ_i)^2)
MAE = 1/N∑_i=1^N | y_i - ŷ_i|
where y_i is the true value of the i-th sample, ŷ_i is the predicted value for the i-th sample, and N is the total number of samples.
MAPE has an advantage over the other three metrics as it is a scale-independent metric representing the error as a percentage of the actual value and therefore suitable for comparing models on datasets of different value scales. MSE and RMSE metrics are both based on the Euclidean distance to determine the level of error between predicted and true values. The difference between the MSE and RMSE metrics is that MSE provides more severe punishment for predictions that are very different from the true value. MAE is used to measure the mean absolute difference between predictions and true values and is less severe at penalizing large differences between predicted and true values than MSE or RMSE. To obtain a different view of forecasting accuracy, our study employs all four metrics.
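These four metrics can be computed with a few lines of NumPy, as in the illustrative sketch below (the function name is ours; MAPE assumes no zero true values):

import numpy as np

def regression_metrics(y_true, y_pred):
    """MAPE, MSE, RMSE, and MAE between true and predicted consumption values."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_true - y_pred
    mape = 100.0 * np.mean(np.abs(err) / y_true)  # undefined if y_true contains zeros
    mse = np.mean(err ** 2)
    return {"MAPE": mape, "MSE": mse, "RMSE": np.sqrt(mse),
            "MAE": np.mean(np.abs(err))}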
§ RELATED WORK
This section first reviews recent load forecasting studies focusing on those based on machine learning and then discusses techniques for interval predictions in different domains.
§.§ Electricity Load Forecasting
This subsection first reviews recent load forecasting studies for a diversity of consumers, including residential households and buildings. This provides insights into state-of-the-art models and indicates directions for forecasting in the presence of EV charging. Next, related work on predicting EV charging in various settings is examined.
An LSTM-based model for short-term load forecasting on the individual household level was proposed by Kong et al. <cit.>. They <cit.> found that a significant hurdle to creating forecasts at the household level is the large degree of diversity and volatility in energy consumption between households when compared to making forecasts at the substation level. This difficulty in residential forecasting due to load variability and concept drift also aligns with the findings of Fekri et al. <cit.>.
Residential load forecasting was also investigated by Zhang et al. <cit.>: while Kong et al. <cit.> used LSTM-based approach Zhang et al. <cit.> employed Support Vector Regression (SVR). In their study, Zhang et al. <cit.> investigated predicting daily and hourly electricity consumption for 15 households with data obtained from smart meters. The accuracy of the load predictions varied significantly across households, depending on the variability of energy-related behaviors among occupants. Daily load estimates were generally more accurate, as they mitigated the randomness in hourly changes.
L’Heureux et al. <cit.> presented a transformer-based architecture for electrical load forecasting. They adapted the transformer model from the Natural Language Processing (NLP) domain by modifying the NLP transformer workflow, adding N-space transformation, and designing a novel technique for handling contextual features. They examined the proposed transformer-based architecture on 19 different data streams with four different forecasting horizons. For most data streams and forecasting horizons, the transformer accuracy was better than that of the Seq2Seq network; however, for the 12-h forecast, Seq2Seq was slightly better.
Multi-node load forecasting was investigated by Tan et al.: they proposed multi-task learning combined with a multi-modal feature module based on an inception-gated temporal convolutional network for node load prediction. The feature-extraction module captures the coupling information from the historical data of each node, while the multi-task learning utilizes a soft sharing mechanism to leverage the information shared across nodes to improve the forecast accuracy. Experimental results demonstrate the effectiveness of the proposed method in accurately forecasting load demand across multiple nodes.
Ribeiro et al. investigated short- and very short-term load forecasting for warehouses and compared several machine learning and deep learning models including linear regression, decision trees, artificial neural networks, and LSTM models. In their experiments RNN, LSTM, and GRU cells achieved comparable results.
Jian et al. also worked on very short-term load forecasting: they proposed a framework based on an autoformer which combines decomposition transformers with auto-correlation mechanism. Multi-layer perceptron layers are added to the autoformer for an improved deep information extraction. In their experiments, the proposed deep-autoformer framework outperformed several deep-learning techniques on the task of very short-term residential load forecasting.
An encoder-decoder RNN architecture with a dual attention mechanism was proposed by Ozcan et al. <cit.> to improve the performance of the RNN model. The attention mechanism in the encoder helps identify important features whereas the attention in the decoder assists the context vector and provides longer memory. In their experiments, the encoder-decoder RNN architecture achieved improved accuracy in comparison to LSTM; however, the computation complexity was increased.
Short-term load forecasting has been investigated by Sun et al. <cit.>: they proposed a framework based on LSTM and an enhanced sine cosine algorithm (SCA). The authors enhanced the performance of the SCA, a meta-heuristic method for optimization problems, by incorporating a chaos operator and multilevel modulation factors. In experiments, they compared the modified SCA with several other population intelligence algorithms including particle swarm optimization and the whale optimization algorithm and showed that SCA improves performance.
There are very few studies concerned with load forecasting for EV charging demand and they mostly consider scenarios such as parking lots, fleets, and regional demand. For example, Amini et al. <cit.> investigated forecasting of EV charging demand for parking lots. Their approach used an Autoregressive Integrated Moving Average (ARIMA) model with driving patterns and distances driven as inputs to determine the day-ahead demand of the conventional electrical load and charging demand of EV parking lots. Two simulated test systems, 6-bus and IEEE 24-bus systems, were used to examine the effectiveness of the proposed approach.
Yi et al. highlighted the importance of accurate demand forecasting for planning and management of electric vehicle charging infrastructure. They presented a deep learning-based method for forecasting the charging demand of commercial EV charging stations by utilizing LSTM as a base for the Seq2Seq model and combining it with a clustering technique. The evaluation on over 1200 charging sites from the State of Utah and the City of Los Angeles showed that the proposed method outperforms other forecasting models such as ARIMA, Prophet, and XGBoost.
For forecasting EV charging demand at charging stations in Colorado, Koohfar et al. proposed a transformer-based deep learning approach. The proposed approach was compared to time-series and machine learning models including ARIMA, SARIMA, LSTM, and RNN. While for longer time horizons the transformer outperformed other techniques, for short-term forecasting (7 days ahead), LSTM and transformer achieved comparable results.
A multi-feature data fusion technique combined with LSTM was proposed by Aduama et al. to improve the EV charging station load forecasting. They generate three sets of inputs for LSTM consisting of load and weather data pertaining to different historical periods. These three sets of data are then passed to the LSTM models which generate three predictions, and, finally, the LSTM outputs are combined using a data fusion technique. In their experiments, the proposed fusion-based approach achieved better accuracy than traditional LSTM in predicting EV charging station demand.
Zheng et al. <cit.> were interested in predicting the overall load from EVs in the city of Shenzhen, China. They recognize the diversity of charging patterns and therefore break down the fleet into four groups: private EVs, taxis, busses, and official EVs. Their approach provides a mid-and-long term EV load charging model based on the current utilization of EVs in Shenzhen using probabilistic models for EV charging profiles and forecasting EV market growth in the city using the Bass model. As they are concerned with the regional EV demand, some of the randomnesses of the individual EV charging is remedied through aggregation. Similarly, Arias and Bae <cit.> considered forecasting load for groups of EVs. Specifically, they take advantage of historical traffic data and
weather data to formulate the forecasting model. First, traffic patterns are classified, then factors influencing traffic patterns are identified, and finally, a decision tree formulates the forecasting model.
Strategies for handling the growing EV charging demand were investigated by Al-Ogaili et al. <cit.>. They classify EV control strategies into scheduling, clustering, and forecasting strategies, recognizing that precise estimates of charging are critical for fault prevention and network stability. They note that the stochastic nature of EV charging demand requires advanced forecasting techniques, commonly combined with the need for extensive data, including historical charging data, weather, and travel patterns, which may not be readily available. The forecasting studies Al-Ogaili et al. <cit.> examined include predictions for groups of EVs or geographical regions, charging stations, and specific types of EVs (e.g., buses).
The reviewed studies <cit.> on generic load forecasting represent the state of the art in energy forecasting, but their behavior in the presence of EV charging has not been examined. Nevertheless, they represent a solid foundation for forecasting EV charging load. On the other hand, the EV-related studies <cit.> do consider EV charging, but they do so for groups of EVs, parking lots, charging stations, or regions, and do not consider forecasting the load of individual households in the presence of EVs. In contrast, we focus on predicting power consumption for individual households in the presence of EV charging. Moreover, in contrast to the point predictions provided in the aforementioned studies, our study offers interval predictions.
§.§ Interval Predictions
This subsection reviews approaches that have been taken by authors across different domains to create regression models that provide an interval for predictions. In contrast to point predictions, interval predictions quantify uncertainties and provide additional information for decision-making.
Interval predictions were generated for electricity spot pricing by Maciejowska et al. <cit.> for the British power market using factor quantile regression averaging. First, point predictions are obtained with a collection of models including autoregressive models, threshold autoregressive models, semiparametric autoregressive models, neural networks, and others. Next, point predictions generated by the mentioned models are combined using quantile regression averaging to provide final interval forecasts. The proposed approach performed better than the benchmark autoregressive model.
Shi et al. <cit.> considered interval predictions for forecasting wind power generation to quantify uncertainties in renewable energy generation. They train an RNN model with two outputs, one for the upper and one for the lower bound of a regression interval of predictions using the Lower and Upper Bound Estimation (LUBE) method. A new cost function incorporating prediction interval was designed and the dragonfly algorithm was introduced to tune the parameters of the RNN prediction model. One of the major challenges associated with training neural networks using the LUBE method is the difficulty in achieving convergence and occasionally the model may not converge <cit.>. Consequently, Kabir et al. <cit.> developed a customizable cost function to improve the convergence of LUBE models and assist in constructing prediction intervals with neural networks.
Zhang and Mahadevan <cit.> proposed interval forecasting for flight trajectory prediction and safety assessment by combining deep learning with uncertainty characterized by a Bayesian approach. Two types of Bayesian networks (BNN), feedforward neural networks and LSTM networks, are trained from different perspectives and then blended to create final predictions. In both BNNs, the dropout strategy quantifies model prediction uncertainty. The BNN approach was also successful in the work of Niu and Liang <cit.> where they improve nuclear mass and single-neutron separation energy prediction accuracy for determining nuclear effective reactions. In their experiments, Niu and Liang <cit.> demonstrate that a Bayesian approach can be combined with various forecasting techniques to improve nuclear mass predictions.
The reviewed studies <cit.> created interval predictions with various machine learning and statistical methods in various domains; however, none of them considered forecasting household electricity load in the presence of EV charging.
Like our study, the works of Zhang and Mahadevan <cit.> and Niu and Liang <cit.> also employed BNN techniques to create interval prediction but they used it for very different use cases than load forecasting (flight trajectory <cit.> and nuclear mass predictions <cit.>).
§ INTERVAL LOAD FORECASTING IN PRESENCE OF EV CHARGING
This section presents the problem formulation and methodology of the proposed interval forecasting for household load prediction in the presence of EV charging. The approach uses only historical energy consumption data obtained from smart meters and weather data which makes it practical and scalable for real-world applications as there is no need to collect data regarding EV charging habits or EV specifications.
Problem Statement:
Consider a time series of historical data for a household with EV charging, represented as a sequence of input-output pairs (x_t, y_t), where x_t is a vector of features describing the state of the electricity consumption at time t, including contextual factors such as temperature, time of day, day of the week, and day of the year, and y_t is a vector of real-valued electricity consumption values for this household at time t. The goal is to learn a probabilistic model p(ŷ_t+1|x_t+1,D), where D represents the historical observations and ŷ_t+1 the predicted value, that can predict the output for a new input with an uncertainty quantification represented as an interval I.
This interval is created by generating multiple predictions through different network configurations to obtain the Bayesian approximation of the predicted value (the interval center), as shown in Equation <ref>. The minimum and maximum of the interval are computed as shown in Equations <ref> and <ref>, respectively:
E[ŷ_t+1]=1/N∑_i=1^Nŷ_t+1^i
I_min=E[ŷ_t+1]-σ_ŷ_t+1
I_max=E[ŷ_t+1]+σ_ŷ_t+1
where N is the number of predictions generated for the time step (t+1) and σ_ŷ_t+1 is the standard deviation of the predictive distribution computed as follows:
σ_ŷ_t+1=√(∑_i=1^N(ŷ_t+1^i-E[ŷ_t+1])^2/N)
The overall interval forecasting process is shown in Figure <ref>, while details of each component are described in the following subsections.
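To make these equations concrete, the following minimal Python sketch (an illustration, not the authors' code) computes the interval center and the one-standard-deviation bounds from a set of N stochastic forecasts of the same time step.

```python
import numpy as np

def prediction_interval(predictions):
    """Compute the interval center and one-standard-deviation bounds
    from N stochastic forecasts of the same time step."""
    predictions = np.asarray(predictions)          # shape (N,)
    center = predictions.mean()                    # E[y_hat], Bayesian approximation
    sigma = predictions.std()                      # population std, dividing by N as in the equation
    return center, center - sigma, center + sigma  # (E[y_hat], I_min, I_max)
```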
§.§ Dataset Preparation
The two types of datasets being used are weather station data and historical household electricity consumption data. Each dataset undergoes preparation individually before they are merged.
The weather station data consist of multiple datasets from weather stations in the approximate geographical area surrounding the EV households. The features used from the weather station data are the hourly timestamps and the temperature recordings, as temperature is often considered the most influential weather factor in load forecasting <cit.>. The Weather Data Preparation conducted on the individual weather station datasets, shown in Figure <ref>, includes filling in missing temperature readings and combining all the weather station data. Missing temperature readings are filled using a weighted average of the nearest complete temperatures. Since no geographical details beyond the city are given for any of the EV households, the temperature data from all stations are combined by averaging the temperatures from the several weather stations to create a single average temperature dataset. In order to match the timestamps in the EV household datasets, the timestamps in the average temperature dataset are adjusted to adhere to daylight saving time (DST).
The household data here refers to hourly data recorded by the smart meter or a similar device. The two initial features in this data are the consumption period and the electricity consumed within that period. The consumption period for the household data initially contains both the start date and time, and the end date and time of the current electricity consumption period. These data, as indicated in Figure <ref>, undergo Household Data Preparation which involves isolating and only keeping the start date-time of the consumption period so that it can be merged with the weather station data. The electricity consumption feature from the initial dataset remains unchanged.
As part of the processing for both the weather station and household data, an additional time feature is generated. This feature is necessary because at the end of DST each year, the time is set backward one hour, resulting in two instances of the same date-time. This creates a conflict in merging the weather station and household data using only the date-time feature, as there are duplicate non-unique date-times that have no distinguishing differences. The additional time feature is added to both the household and weather station data to indicate if the specific date-time falls in DST or not. This removes the merging ambiguity for the duplicate date-times, as only the first occurrence of the date-time will occur during DST.
After the initial preparations are completed, each of the individual household datasets is merged with the average temperature dataset using the date-time and the additional time feature. In preparation for machine learning, the merged dataset proceeds to the preprocessing step.
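A minimal pandas sketch of this preparation and merging step is given below; the column names (date_time, is_dst, temperature, consumption) are assumptions made for illustration, since the actual schemas of the weather and smart-meter files are not public.

```python
import pandas as pd

def average_stations(stations):
    """Fill gaps and average the temperature series of several weather stations."""
    series = []
    for df in stations:
        df = df.sort_values("date_time").set_index(["date_time", "is_dst"])
        # Linear interpolation approximates a weighted average of neighbouring readings.
        series.append(df["temperature"].interpolate())
    avg = pd.concat(series, axis=1).mean(axis=1).rename("temperature")
    return avg.reset_index()

def merge_with_household(household, avg_weather):
    """Join on date-time plus the DST flag, which disambiguates the duplicated
    hour created when clocks are set back at the end of DST."""
    household = household[["date_time", "is_dst", "consumption"]]
    return household.merge(avg_weather, on=["date_time", "is_dst"], how="left")
```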
§.§ Preprocessing
After the weather and household datasets are merged, the dataset undergoes the following preprocessing steps: feature engineering, splitting the data into train and test sets, and normalizing the train and test sets. The feature engineering step takes advantage of the recorded interval start date-time from the original household data to generate nine features, as shown in Table <ref>. The purpose of creating additional features is to provide context information to the machine learning model, enabling it to generate better predictions.
Following feature engineering, the dataset was split into training and testing sets: the last 10% of readings are assigned to the test set and the remaining to the training set.
Then, a portion of the training set was separated to use as the validation set for model selection. As a result of the validation set creation, the distribution of the data becomes 80% for training, 10% for validation, and 10% for testing.
Next, z-score normalization was used to reduce the dominance of large features and improve convergence. This technique was chosen over other normalization techniques as it handles the outliers present during peak electricity consumption events well. The z-score normalization transforms each feature to a mean of 0 and a standard deviation of 1 as follows:
z_ij= x_ij-μ_j/σ_j
where x_ij and z_ij are the initial unscaled and scaled values of the i-th sample of the j-th feature, respectively, and μ_j and σ_j are the mean and standard deviation of all the samples of the j-th feature, respectively. Note that the mean and standard deviation are calculated only on the training set to avoid data leakage.
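A short sketch of this step, with the statistics fitted on the training set only, could look as follows (illustrative code, not the authors' implementation).

```python
def zscore_fit(train):
    """Compute per-feature statistics on the training set only (no leakage)."""
    return train.mean(axis=0), train.std(axis=0)

def zscore_apply(data, mu, sigma):
    """Scale any split (train, validation, or test) with the training statistics."""
    return (data - mu) / sigma
```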
Next, the sliding window technique is employed to prepare the data for the machine learning model and to provide the model with a fixed number of previous electricity consumption, date-time, and temperature features as the inputs for predicting the next time step. This is accomplished by creating an input window that contains all features, including the time-step, temperature, and consumption data, within the window size <cit.>. For instance, for a window of size w, the electricity consumption together with all other features for the past w time steps is used as the input for predicting the next energy consumption value. The window then slides by s steps to create the next sample. The advantage of the windowing technique for electricity forecasting is that it allows the model to consider the demand at recent time steps when making predictions. The exact window size w is determined within the optimization process.
The sliding window technique is applied to each of the training, validation, and test sets. After this step, the samples have the dimension w × f, where f is the number of features.
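The windowing step can be sketched as follows; the array layout (time steps by features) and the position of the consumption column are assumptions made for illustration.

```python
import numpy as np

def make_windows(data, target_col, w, s=1):
    """Build (w x f) input windows and the next-step consumption target.
    data: array of shape (T, f), already normalized; target_col indexes consumption."""
    X, y = [], []
    for start in range(0, len(data) - w, s):
        X.append(data[start:start + w])           # past w time steps, all features
        y.append(data[start + w, target_col])     # consumption one hour ahead
    return np.stack(X), np.array(y)
```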
§.§ Training and Tuning
As shown in Figure <ref>, the Training and Tuning stage follows the Preprocessing step. The deep learning technique LSTM was selected as the machine learning model because in recent years it has demonstrated great success in load forecasting and outperformed other forecasting techniques <cit.>. The hyperparameter search was carried out with Bayesian optimization because, unlike grid or random search, this method performs a more directed exploration of a defined tuning space by selecting hyperparameters that lead to a local, ideally global, minimum of the loss <cit.>. Bayesian optimization achieves this by using the posterior distribution of the Mean Squared Error (MSE) loss determined by previous models to guide the selection of new hyperparameter combinations. This directed selection process minimizes the time and computational resources needed to explore the defined hyperparameter space <cit.>.
In this work, the search space explored included window size, batch size, the number of LSTM layers, the number of neurons in each LSTM layer, the learning rate for the Adam optimizer, and the dropout probability. The window size determines the number of previous time steps to be used as the input to the network while the batch size specifies the number of training windows a single batch contains. The number of LSTM layers and the number of LSTM neurons are adjusted to find a balance between increasing model complexity and variance to fit the training set while maximizing the model’s ability to generalize and make accurate predictions when given novel data points. The learning rate for the Adam optimizer is a critical parameter for training each model as it determines the rate at which updates are made to the weight and bias parameters of the model. Using a learning rate that is suitable for finding minima in the loss function enables the model to converge efficiently.
Finally, the dropout probability hyperparameter is used to prevent overfitting: it determines the probability that neurons in a layer will randomly be given zero values. Using a dropout probability that is too high can have a detrimental effect on overall performance, as it could result in too many inactive neurons and prevent the model from learning. Because the dropout technique is usually employed only to reduce overfitting to the training data, it is typically disabled when the model is making predictions (at inference time). However, within the proposed LSTM-BNN model, dropout remains active while making predictions, as it is the key component for creating the probabilistic interval predictions described in the following subsection.
The LSTM model is trained and tuned using Bayesian optimization for each individual household independently. Once training and tuning are completed, the model is ready to proceed to the Interval Forecasting step.
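For illustration, a minimal PyTorch sketch of the kind of network being tuned is shown below; the layer count, hidden size, and dropout rate are placeholders standing in for points of the search space, not the selected values.

```python
import torch
import torch.nn as nn

class LSTMBNN(nn.Module):
    """LSTM regressor whose dropout layer is also used at inference time
    to obtain Monte Carlo (Bayesian-approximate) prediction intervals."""
    def __init__(self, n_features, hidden_size=64, num_layers=2, p_drop=0.2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, num_layers, batch_first=True)
        self.dropout = nn.Dropout(p_drop)
        self.head = nn.Linear(hidden_size, 1)     # single-neuron regression output

    def forward(self, x):                         # x: (batch, w, n_features)
        out, _ = self.lstm(x)
        last = self.dropout(out[:, -1, :])        # representation of the last time step
        return self.head(last).squeeze(-1)        # one-hour-ahead consumption
```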
§.§ Interval Forecasting
This subsection describes how the trained and tuned LSTM model is used to create the prediction intervals. The approach is inspired by the works of Zhang and Mahadevan <cit.> and Niu and Liang <cit.> and, like those works, employs the BNN technique to generate intervals. However, those works used the BNN technique for different applications and with different networks.
With dropout active, the trained model makes a sufficiently large number of predictions for each sample of the dataset. Because dropout remains active, the model is likely to produce a different point prediction each time, even though the input is identical: while dropout is active, any component except the input and output neurons can be removed from the prediction calculation. The varying point predictions resulting from the use of different components allow for the construction of an interval prediction as a variational approximation of Bayesian inference for the model uncertainty <cit.>.
After multiple predictions are made for the same input sample, the mean and standard deviation are determined from the collection of point predictions the model created. Finally, the interval prediction is given as one standard deviation above and below the mean of the point predictions for each sample.
The four steps taken to create the interval prediction are summarized below; an illustrative code sketch follows the list:
* Make multiple predictions for a given input.
* Compute the mean and standard deviation of the predictions for each input sample.
* Center the interval at the mean value.
* Define the upper bound of the interval as one standard deviation above the mean, and the lower bound as one standard deviation below the mean.
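The following sketch (assuming the PyTorch model outlined earlier, not the authors' code) implements these four steps by keeping the dropout layers in training mode at inference time.

```python
import torch

def mc_dropout_interval(model, x, n_samples=100):
    """Monte Carlo dropout: repeat the forward pass with dropout active
    and turn the spread of predictions into a one-sigma interval."""
    model.eval()
    for m in model.modules():                    # re-enable only the dropout layers
        if isinstance(m, torch.nn.Dropout):
            m.train()
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(n_samples)])  # (n_samples, batch)
    mean = preds.mean(dim=0)
    std = preds.std(dim=0)
    return mean, mean - std, mean + std          # center, lower bound, upper bound
```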
All the models generated by Bayesian optimization are evaluated on the training and validation sets while only the best-performing model for each household selected on the validation set is evaluated on the test set. In other words, the model selection is carried out on the validation set.
§.§ Statistical Tests
Household energy consumption depends on the behavior of its occupants and, as such, changes when that behavior changes. We are interested in examining the effect that the COVID-19 pandemic lockdowns had on households with EV charging. As the data do not follow a normal distribution, the non-parametric Mann-Whitney U test is used to compare electricity consumption with and without lockdown in order to determine whether the lockdowns changed household electricity consumption habits to the extent that the distributions are statistically different and could impact the predictive capacity of the model.
To carry out this analysis, two datasets are considered: the first dataset, referred to here as the lockdown dataset, contains the entire original EV household dataset, including data from the lockdown as well as before the lockdowns. The second dataset, the non-lockdown dataset, is a subset of the EV household dataset containing only data collected outside of the lockdown period. Both the lockdown and non-lockdown datasets go through the same preparation, preprocessing, and prediction processes for the creation of the prediction model.
As seen in Figure <ref>, the evaluation using the Mann-Whitney U test is carried out before the normalization is applied. The test is performed after the dataset is split into its training, validation, and testing components. A comparison of the differences in the test results helps us determine whether there is a greater similarity between the training and test sets for the lockdown dataset or for the non-lockdown dataset, which is a subset of the lockdown dataset.
There are three possible outcomes for the comparison of the test results for the lockdown and non-lockdown datasets. The first is that there is no significant difference between the training and test sets for either the lockdown or the non-lockdown dataset. The second possible outcome is that there is a greater difference between the training and test sets of the lockdown dataset than of the non-lockdown dataset. The third possibility is that there is a greater difference between the training and test sets of the non-lockdown dataset than of the lockdown dataset.
The Mann-Whitney results are compared with the final performance of the models trained on the lockdown and non-lockdown datasets to observe whether there is a correlation between differences in the datasets and the models' predictive performance. The analysis of these results together with the predictive performance of the models improves our understanding of the conditions under which the generated models are reliable. Understanding when a model is reliable is critical for mitigating the risk of a blackout because it ensures that decisions are made based on reliable forecasting information.
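As a sketch of this check (the actual analysis pipeline is not public), the non-parametric comparison of the training and test consumption values can be run with SciPy as follows.

```python
from scipy.stats import mannwhitneyu

def train_test_shift(train_consumption, test_consumption, alpha=0.05):
    """Two-sided Mann-Whitney U test between training and test consumption values."""
    stat, p_value = mannwhitneyu(train_consumption, test_consumption,
                                 alternative="two-sided")
    return p_value, p_value < alpha   # True means the null hypothesis is rejected
```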
§ EVALUATION
This research was carried out in collaboration with London Hydro, a local electrical distribution utility for the city of London, Ontario, Canada. The real-world dataset provided by London Hydro was shared through Green Button Connect My Data (CDM), a platform for secured sharing of energy data with the consumer’s consent. Through work like this, London Hydro is preparing for the increased proliferation of EVs and the corresponding increase in electricity demand. London Hydro needs home EV charging data to identify nonwire solutions such as scheduling charging during off-peak hours
when there is solar generation.
In this evaluation, we consider four households with EVs and refer to them as EV1, EV2, EV3, and EV4. The time period ranges of the recordings are very similar for all four households, as given in Table <ref>. Note that the time period ranges of the non-lockdown datasets are the same for all four households. The Weather Station Data was obtained from Environment and Climate Change Canada and consists of two datasets from two weather observation stations in the London area that were merged by averaging their temperature readings, as discussed in Section <ref>.
For the comparison of lockdown to non-lockdown data, four additional subsets of the EV household electricity consumption datasets are created by removing the lockdown data recorded after the start of the lockdowns on 1 March 2020. For each of the four households,
two individual LSTM-BNN predictor models were trained and tuned. The first model for each household is trained and evaluated on the entire dataset which contains lockdown and non-lockdown electricity consumption data, and the second model for each household is trained and evaluated using only the non-lockdown electricity consumption data.
All experiments were coded in Python with the use of the PyTorch machine learning framework and the Ray Tune library for model training and tuning. The remainder of this section consists of three subsections: first, the results of the Mann-Whitney analysis are discussed, next the hyperparameter search space is defined and the training behavior is summarized for model optimization, and finally, the predictive performance is analyzed.
§.§ Statistical Test Results
Two trials are completed for each household, one for the full dataset with lockdown data and the other for the subset without lockdown data. After initial preparations are completed according to the described methodology, the dataset for non-lockdown was split into train and test sets, similar in proportions to those used for the complete dataset. The shift in behavior due to lockdowns was analyzed to determine if there was a statistically significant difference in the distribution of the electricity consumption between train and test sets for lockdown and non-lockdown conditions.
In order to interpret the results of the Mann-Whitney U test, the null hypothesis was established. The null hypothesis in this scenario is that there is no statistically significant difference between the training and test sets for any of the datasets. For the significance level of 5%, the null hypothesis was rejected for cases where the P-value of the test is less than 0.05 (5.00 × 10^-2).
The p-value results of the Mann-Whitney analysis comparing training and test datasets, shown in Table <ref>, confirm that the null hypothesis can be rejected for all datasets, as the values fall significantly below the threshold of 5.00 × 10^-2. Therefore, the results indicate that there is a statistically significant difference between the training and test datasets for all households, regardless of lockdowns.
§.§ Model Training and Tuning
For each of the datasets outlined in Table <ref>, 80 models were considered using Bayesian optimization within a defined hyperparameter search space. The hyperparameters tuned for the model were batch size, window size, the number of hidden layers, the number of neurons in the hidden layers, the Adam optimizer learning rate, and the dropout probability. The defined search space for each of the hyperparameters is summarized in Table <ref>. Every model that was trained had its performance evaluated using the performance metrics outlined in Section <ref>.
The input and output layers were each set to a fixed size. The number of neurons in the input layer was set by the number of features in the input dataset, and the output layer has a single neuron for the regression prediction output. Different sliding window sizes w were used to provide the model with varying numbers of previous time steps to use as inputs. The window size is an important consideration because using a different number of previous time steps may help the models capture distinct patterns in each household’s electricity consumption. Each model predicts the energy consumption one hour ahead.
The options for the dropout probability tuning were based on the tuning range used by Zhang and Mahadevan <cit.> for creating BNN models. A dropout of zero was also included to act as a benchmark for how a non-BNN neural network would perform for each of the households. The hidden layer search space and the number of epochs (150) were determined by referring to the hyperparameters used for the LSTM load forecasting models of Kong et al. <cit.>. A sample of the training behavior of the optimal LSTM-BNN model for EV3 and the non-lockdown dataset is shown in Figure <ref>. From this figure, it can be seen that early stopping could be beneficial in reducing computational resources, as only a very minor improvement of the validation performance can be observed beyond approximately 80 epochs.
In the experiments, the proposed LSTM-BNN interval forecasting is compared to point forecasting. Both use LSTM as their base model and both undergo exactly the same dataset preparation, preprocessing, and training and tuning steps, as described in Sections <ref>, <ref>, and <ref>, respectively. The difference is that in point forecasting the dropout is not active at inference time and, therefore, point forecasting results in a single prediction for each time step. In contrast, the proposed LSTM-BNN generates multiple predictions and forms an interval with the BNN technique.
§.§ Predictive Performance Analysis
The performance results are explored as follows: first, the overall performance of the point and interval prediction models is examined, followed by an analysis of the performance among households. Next, interval forecasts are compared to point forecasts, and lockdown is compared to non-lockdown. Finally, the correlation between the Mann-Whitney results and model performance is examined.
§.§.§ Overall Performance
Table <ref> shows the average MAPE values for the four households for each of the two datasets, lockdown and non-lockdown, and for each of the two approaches, point and interval prediction. For interval forecasts, the four performance metrics were calculated using the mean of the generated interval, obtained from the set of forecasts generated with the Bayesian technique as described in Section <ref>.
For each household, dataset, and point/interval approach, the model was tuned, and the results from the tuned models were averaged and reported in this table. All MAPE values are significantly higher than those reported in the literature <cit.>, but that is to be expected, as EV charging behavior adds remarkable variability and randomness to the power consumption pattern compared with office buildings or households without EVs. In general, excluding the lockdown electricity consumption data from March 2020 onward did not improve the predictive performance of the models. While an increase in the error between actual and predicted values is expected between training and testing, there is a much greater increase for the non-lockdown dataset than for the lockdown dataset.
The average interval prediction performance shows that model performance was better overall on the full dataset that included lockdown data. Point predictions for the lockdown were also better than for the non-lockdown for the test dataset. This result is somewhat surprising considering that it would be expected that electricity consumption would be more difficult to predict in a lockdown environment than in a non-lockdown environment when the historical data used for creating models are based mainly on non-lockdown behavior. A possible reason for this is that with the full dataset, the model has more data to learn from.
Figure <ref> shows an example of interval forecast results; specifically actual energy consumption and predicted interval forecasts for EV3 and the lockdown dataset. It can be observed that the prediction interval varies throughout time indicating uncertainties in the forecasted values.
While the focus of this work is the comparison of interval and point predictions, here we further examine the interval forecast for the example of EV3. The Prediction Interval Coverage Probability (PICP) measures the fraction of actual values that lie within the prediction interval. In the case of the proposed LSTM-BNN, it varies depending on how many standard deviations form the interval. For the one-standard-deviation interval used here, the PICP is only 0.28, increasing to 0.51, 0.68, and 0.82, respectively, as the interval is widened to larger multiples of the standard deviation. Note that, similar to the MAPE values from Table <ref>, these values indicate high errors caused by the randomness of EV charging.
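The PICP for such intervals can be computed as in the following sketch, where the hypothetical parameter k controls the interval width in multiples of the standard deviation.

```python
import numpy as np

def picp(y_true, mean, std, k=1.0):
    """Fraction of actual values falling inside the mean +/- k*std interval."""
    lower, upper = mean - k * std, mean + k * std
    inside = (np.asarray(y_true) >= lower) & (np.asarray(y_true) <= upper)
    return inside.mean()
```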
§.§.§ Performance Among Households
The results of the best-performing model for each individual household, for point and interval forecasts, are shown in Tables <ref> and <ref> for the lockdown and non-lockdown datasets, respectively. Note that models are created for each household individually, and not for a group of homes. The results include all four metrics, MAPE, MSE, RMSE, and MAE, for the train, validation, and test sets. The best-performing point and interval prediction models in terms of test set MAPE were those for households EV3 and EV4, respectively, on the lockdown dataset. Although there are some cases where the models generate better predictions, such as a test MAPE of about 31% for EV3 with the lockdown dataset and point predictions, most others exhibit higher error. Despite the success of LSTM models in forecasting electricity consumption for offices and households, the results show that in the presence of EV charging their accuracy is greatly reduced. As noted by other studies, the variability in household electricity consumption makes creating accurate predictions at this granularity challenging, and this is only exacerbated further by the additional consideration of EV charging. This variability results in much higher MAPE values, irrespective of the dataset or the household, than those typically reported in the literature. However, the literature commonly considers offices, schools, or groups of buildings, which have much more predictable energy consumption patterns. Moreover, MAPE can produce very high values when the actual values are close to zero. Note that, while MSE, RMSE, and MAE are included in Tables <ref> and <ref>, we do not compare among households based on those metrics, as they are dependent on the scale of the actual values.
To analyze household differences, Figure <ref> shows the MAPE values for point and interval predictions for each household for the lockdown data, while Figure <ref> does the same for the non-lockdown scenario. It can be observed that some households were easier for the predictive models to capture. The EV3 and EV4 household datasets produced the best-performing models, but the models trained on the EV1 and EV2 households were much less successful. The predictive performance of the model trained on the EV2 lockdown dataset, which produces the highest p-value, highlights that there is no direct relationship between the Mann-Whitney results and model performance. The Mann-Whitney results from Table <ref> indicate that in all scenarios there is a significant difference between the training and test datasets, which could be one of the reasons for the high MAPE results.
To analyze these data from a different perspective, Figure <ref> shows the MAE values of the four EV households for point and interval prediction in the lockdown and non-lockdown periods. Since MAE is scale dependent and consumption scales vary among households, the different approaches should be compared for each household individually and not among households. In four scenarios, point forecasting achieves a lower error than interval forecasting; however, interval forecasting has the advantage of providing uncertainty information.
§.§.§ Interval Forecasts Compared to Point Forecasts
Figure <ref> compares interval forecasts to point forecasts in terms of MAPE for the lockdown test dataset. In addition to the point and interval MAPE for each household, it includes the averages over the four households. It can be observed that the average MAPE for the point forecasts is about 5% lower than the average MAPE for the interval forecasts. This figure also highlights that EV2 was the hardest household to predict for both the point and interval approaches. At the same time, EV3 was the easiest to predict with point forecasting, while EV4 achieved the lowest prediction error with interval forecasting. Figure <ref> shows the same comparison but for the non-lockdown test data. Again, the MAPE for interval forecasting is higher than for point forecasting, but the difference between the average MAPE for interval and point forecasts is much larger for the non-lockdown dataset than for the lockdown dataset. Interval forecasting for EV1 performed especially poorly, which raised the average MAPE for interval forecasts.
§.§.§ Lockdown Compared to Non-lockdown
Figures <ref> and <ref> compare forecasting with lockdown and non-lockdown data for point and interval forecasting, respectively. Figure <ref> shows that the average point MAPE for the non-lockdown data is about 8% higher than for the lockdown data. All lockdown MAPE values except that of EV2 were much lower than the corresponding non-lockdown values. Similarly, Figure <ref> indicates that interval prediction for lockdown achieves better results than for non-lockdown, except for EV2. Overall, accuracy varies greatly among households.
§.§.§ Analysis of Relation between Predictive Performance and Mann-Whitney Results
Figures <ref> and <ref> relate model performance on the test set (MAPE) to the Mann-Whitney P-value between the training and test data. These figures further demonstrate that there is no conclusive relationship between the P-value and model performance on the test set. This is because the test compares the datasets statistically, while the prediction performance depends on many other factors such as the quality of the features and the randomness of the data. Still, the Mann-Whitney test demonstrated that there is a significant difference between the train and test datasets, regardless of whether lockdown data were included or not, which is a contributing factor to the higher error values observed.
§ CONCLUSIONS
Accurately predicting electricity consumption is an important factor in providing an adequate and reliable energy supply. Electricity demand will continue to grow as society transitions away from ICE vehicles to EVs that can be charged in residential homes. However, predicting energy consumption at the individual household level is more challenging than forecasting for office buildings, schools, or regions due to the high variability in electricity consumption patterns <cit.>. This challenge is further amplified by the need to accommodate EV charging.
This paper proposes LSTM-BNN interval load forecasting for individual households in the presence of EV charging based on the LSTM deep learning model and Bayesian inference. The LSTM model incorporates a dropout layer which remains active at inference time and is responsible for generating a set of point predictions for a single input sample. Then, the Bayesian technique is employed to create interval forecasts from this set of predictions. The achieved accuracy varies greatly among households due to the variability and randomness of their energy consumption patterns. Examining the performance of the point and interval prediction models shows that the LSTM-BNN interval prediction model performs similarly to a standard LSTM point prediction model, with the benefit of providing an interval for the prediction. Although the proposed LSTM-BNN is more complex and involves a longer training time than traditional LSTM point forecasting models, the LSTM-BNN predictions quantify uncertainty and offer additional information for decision-making. This paper also examined the impact of the COVID-19 lockdown on load forecasting for these households: the results show that the proposed LSTM-BNN achieves similar results for the lockdown and non-lockdown periods. We surmise that the randomness of the EV charging patterns outweighs the impact of the change due to the lockdowns.
As demonstrated in our study, EV charging is highly variable, and predicting household energy consumption in the presence of EV charging is difficult. For use cases such as infrastructure planning, forecasting energy consumption for a neighborhood block may be sufficient. For such scenarios, aggregating energy consumption at the block level would remove some of the randomness and improve forecasting accuracy.
Future work will examine the results in terms of the size of the prediction interval to better relate different interval forecasts. Moreover, alternative methods to the Mann-Whitney U test will be considered to acquire better insight into potential changes in consumption habits during different periods of time. As energy consumption patterns, including EV charging patterns, change over time, resulting in what is known as concept drift, techniques such as online learning could be integrated with the proposed approach to better capture changes over time.
Conceptualization, R.S., S.M.; methodology, R.S., K.G.; software, R.S.; validation, R.S.; formal analysis, R.S., M.E., K.G.; investigation, R.S., M.E., K.G.; resources, K.G. and S.M.; writing—original draft preparation, R.S.; writing—review and editing, K.G., M.E., S.M.; visualization, R.S., M.E.; supervision, K.G.; project administration, K.G.; funding acquisition, K.G., S.M. All authors have read and agreed to the published version of the manuscript.
This research has been supported by Ontario Centre of Innovation under grant OCI #34674 and by the Natural Sciences and Engineering Research Council of Canada (NSERC) under grant ALLRP 570760-21. Computation was enabled in part by the Digital Research Alliance of Canada.
Not applicable.
Not applicable.
Data analyzed in this study are obtained from London Hydro and are protected under a signed non-disclosure agreement. Approvals are needed for sharing this data.
The authors would like to thank London Hydro for supplying industry knowledge and data used in this study.
The authors declare no conflict of interest.
References
[Zhao and Guo(2015)]1su7054783
Zhao, H.; Guo, S.
External Benefit Evaluation of Renewable Energy Power in China for
Sustainability.
Sustainability 2015, 7, 4783–4805.
[2en()]2energy.gov
Grid Modernization and the Smart Grid.
Available online:
<https://www.energy.gov/oe/grid-modernization-and-smart-grid> (accessed
on 13 August 2022).
[3en()]3energy.gov
Alternative Fuels Data Center: Emissions from Electric Vehicles.
Available online:
<https://afdc.energy.gov/vehicles/electric_emissions.html> (accessed on
13 August 2022).
[Ghunem(2022)]4ghunem_2022
Ghunem, R.
Smarter, Faster and Smaller Power Grids: A Step towards a Green
Economy. 2022.
Available online:
<https://nrc.canada.ca/en/stories/smarter-faster-smaller-power-grids-step-towards-green-economy>
(accessed on 13 August 2022).
[Yamashita et al.(2008)Yamashita, Joo, Li, Zhang, and
Liu]5yamashita2008analysis
Yamashita, K.; Joo, S.K.; Li, J.; Zhang, P.; Liu, C.C.
Analysis, control, and economic impact assessment of major blackout
events.
Eur. Trans. Electr. Power 2008, 18, 854–871.
[Ozcan et al.(2021)Ozcan, Catal, and Kasif]24s21217115
Ozcan, A.; Catal, C.; Kasif, A.
Energy Load Forecasting Using a Dual-Stage Attention-Based Recurrent
Neural Network.
Sensors 2021, 21, 7115.
[Sehovac and Grolinger(2020)]sehovac2020deep
Sehovac, L.; Grolinger, K.
Deep learning for load forecasting: Sequence to sequence recurrent
neural networks with attention.
IEEE Access 2020, 8, 36411–36426.
[Sun et al.(2022)Sun, Qin, Przystupa, Majka, and
Kochan]23sun2022individualized
Sun, L.; Qin, H.; Przystupa, K.; Majka, M.; Kochan, O.
Individualized Short-Term Electric Load Forecasting Using Data-Driven
Meta-Heuristic Method Based on LSTM Network.
Sensors 2022, 22, 7900.
[Jung et al.(2021)Jung, Moon, Park, and Hwang]jung2021attention
Jung, S.; Moon, J.; Park, S.; Hwang, E.
An attention-based multilayer GRU model for multistep-ahead
short-term load forecasting.
Sensors 2021, 21, 1639.
[Al-Ogaili et al.(2019)Al-Ogaili, Hashim, Rahmat, Ramasamy,
Marsadek, Faisal, and Hannan]6al2019review
Al-Ogaili, A.S.; Hashim, T.J.T.; Rahmat, N.A.; Ramasamy, A.K.; Marsadek, M.B.;
Faisal, M.; Hannan, M.A.
Review on scheduling, clustering, and forecasting strategies for
controlling electric vehicle charging: Challenges and recommendations.
IEEE Access 2019, 7, 128353–128371.
[Yu et al.(2019)Yu, Si, Hu, and Zhang]yu2019review
Yu, Y.; Si, X.; Hu, C.; Zhang, J.
A review of recurrent neural networks: LSTM cells and network
architectures.
Neural Comput. 2019, 31, 1235–1270.
[Fekri et al.(2022)Fekri, Grolinger, and
Mir]8fekri2022distributed
Fekri, M.N.; Grolinger, K.; Mir, S.
Distributed load forecasting using smart meter data: Federated
learning with Recurrent Neural Networks.
Int. J. Electr. Power Energy Syst.
2022, 137, 107669.
[Jagait et al.(2021)Jagait, Fekri, Grolinger, and
Mir]jagait2021load
Jagait, R.K.; Fekri, M.N.; Grolinger, K.; Mir, S.
Load forecasting under concept drift: Online ensemble learning with
recurrent neural network and ARIMA.
IEEE Access 2021, 9, 98992–99008.
[Hastie et al.(2009)Hastie, Tibshirani, Friedman, and
Friedman]25hastie2009elements
Hastie, T.; Tibshirani, R.; Friedman, J.H.; Friedman, J.H.
The Elements of Statistical Learning: Data Mining, Inference,
and Prediction; Springer: Berlin/Heidelberg, Germany,
2009.
[Zhang and Mahadevan(2020)]10zhang2020bayesian
Zhang, X.; Mahadevan, S.
Bayesian neural networks for flight trajectory prediction and safety
assessment.
Decis. Support Syst. 2020, 131, 113246.
[Wan et al.(2013)Wan, Xu, Pinson, Dong, and
Wong]wan2013probabilistic
Wan, C.; Xu, Z.; Pinson, P.; Dong, Z.Y.; Wong, K.P.
Probabilistic forecasting of wind power generation using extreme
learning machine.
IEEE Trans. Power Syst. 2013, 29, 1033–1044.
[Kong et al.(2017)Kong, Dong, Jia, Hill, Xu, and
Zhang]9kong2017short
Kong, W.; Dong, Z.Y.; Jia, Y.; Hill, D.J.; Xu, Y.; Zhang, Y.
Short-term residential load forecasting based on LSTM recurrent
neural network.
IEEE Trans. Smart Grid 2017, 10, 841–851.
[Fekri et al.(2021)Fekri, Patel, Grolinger, and
Sharma]12fekri2021deep
Fekri, M.N.; Patel, H.; Grolinger, K.; Sharma, V.
Deep learning for load forecasting with smart meter data: Online
Adaptive Recurrent Neural Network.
Appl. Energy 2021, 282, 116177.
[Zhang et al.(2018)Zhang, Grolinger, Capretz, and
Seewald]13zhang2018forecasting
Zhang, X.M.; Grolinger, K.; Capretz, M.A.; Seewald, L.
Forecasting residential energy consumption: Single household
perspective.
In Proceedings of the 2018 17th IEEE International Conference on
Machine Learning and Applications, Orlando, FL, USA, 17–20 December
2018; pp. 110–117.
[L’Heureux et al.(2022)L’Heureux, Grolinger, and
Capretz]22l2022transformer
L’Heureux, A.; Grolinger, K.; Capretz, M.A.
Transformer-Based Model for Electrical Load Forecasting.
Energies 2022, 15, 4993.
[Tan et al.(2022)Tan, Hu, Chen, Wang, and Li]tan2022multi
Tan, M.; Hu, C.; Chen, J.; Wang, L.; Li, Z.
Multi-node load forecasting based on multi-task learning with modal
feature extraction.
Eng. Appl. Artif. Intell. 2022,
112, 104856.
[Ribeiro et al.(2022)Ribeiro, do Carmo, Endo, Rosati, and
Lynn]ribeiro2022short
Ribeiro, A.M.N.; do Carmo, P.R.X.; Endo, P.T.; Rosati, P.; Lynn, T.
Short-and very short-term firm-level load forecasting for warehouses:
a comparison of machine learning and deep learning models.
Energies 2022, 15, 750.
[Jiang et al.(2022)Jiang, Gao, Dai, Si, Hao, Zhang, and
Gao]jiang2022very
Jiang, Y.; Gao, T.; Dai, Y.; Si, R.; Hao, J.; Zhang, J.; Gao, D.W.
Very short-term residential load forecasting based on
deep-autoformer.
Appl. Energy 2022, 328, 120120.
[Amini et al.(2016)Amini, Kargarian, and
Karabasoglu]11amini2016arima
Amini, M.H.; Kargarian, A.; Karabasoglu, O.
ARIMA-based decoupled time series forecasting of electric vehicle
charging demand for stochastic power system operation.
Electr. Power Syst. Res. 2016, 140, 378–390.
[Yi et al.(2022)Yi, Liu, Wei, Chen, and Dai]yi2022electric
Yi, Z.; Liu, X.C.; Wei, R.; Chen, X.; Dai, J.
Electric vehicle charging demand forecasting using deep learning
model.
J. Intell. Transp. Syst. 2022, 26, 690–703.
[Koohfar et al.(2023)Koohfar, Woldemariam, and
Kumar]koohfar2023prediction
Koohfar, S.; Woldemariam, W.; Kumar, A.
Prediction of Electric Vehicles Charging Demand: A Transformer-Based
Deep Learning Approach.
Sustainability 2023, 15, 2105.
[Aduama et al.(2023)Aduama, Zhang, and
Al-Sumaiti]aduama2023multi
Aduama, P.; Zhang, Z.; Al-Sumaiti, A.S.
Multi-Feature Data Fusion-Based Load Forecasting of Electric Vehicle
Charging Stations Using a Deep Learning Model.
Energies 2023, 16, 1309.
[Zheng et al.(2020)Zheng, Shao, Zhang, and
Jian]14zheng2020systematic
Zheng, Y.; Shao, Z.; Zhang, Y.; Jian, L.
A systematic methodology for mid-and-long term electric vehicle
charging load forecasting: The case study of Shenzhen, China.
Sustain. Cities Soc. 2020, 56, 102084.
[Arias and Bae(2016)]15ARIAS2016327
Arias, M.B.; Bae, S.
Electric vehicle charging demand forecasting model based on big data
technologies.
Appl. Energy 2016, 183, 327–339.
[Maciejowska et al.(2016)Maciejowska, Nowotarski, and
Weron]16maciejowska2016probabilistic
Maciejowska, K.; Nowotarski, J.; Weron, R.
Probabilistic forecasting of electricity spot prices using Factor
Quantile Regression Averaging.
Int. J. Forecast. 2016, 32, 957–965.
[Shi et al.(2017)Shi, Liang, and Dinavahi]17shi2017direct
Shi, Z.; Liang, H.; Dinavahi, V.
Direct interval forecast of uncertain wind power based on recurrent
neural networks.
IEEE Trans. Sustain. Energy 2017, 9, 1177–1187.
[Kabir et al.(2021)Kabir, Khosravi, Kavousi-Fard, Nahavandi, and
Srinivasan]18KABIR2021106878
Kabir, H.M.D.; Khosravi, A.; Kavousi-Fard, A.; Nahavandi, S.; Srinivasan, D.
Optimal uncertainty-guided neural network training.
Appl. Soft Comput. 2021, 99, 106878.
[Niu and Liang(2018)]19niu2018nuclear
Niu, Z.; Liang, H.
Nuclear mass predictions based on Bayesian neural network approach
with pairing and shell effects.
Phys. Lett. B 2018, 778, 48–53.
[Mirasgedis et al.(2006)Mirasgedis, Sarafidis, Georgopoulou,
Lalas, Moschovits, Karagiannis, and Papakonstantinou]mirasgedis2006models
Mirasgedis, S.; Sarafidis, Y.; Georgopoulou, E.; Lalas, D.; Moschovits, M.;
Karagiannis, F.; Papakonstantinou, D.
Models for mid-term electricity demand forecasting incorporating
weather influences.
Energy 2006, 31, 208–227.
[Falkner et al.(2018)Falkner, Klein, and
Hutter]20pmlr-v80-falkner18a
Falkner, S.; Klein, A.; Hutter, F.
BOHB: Robust and Efficient Hyperparameter Optimization at Scale.
In Proceedings of the 35th International
Conference on Machine Learning, Stockholm, Sweden, 10–15 July 2018; pp. 1437–1446.
[Wang et al.(2019)Wang, Fang, Zhang, Liu, Wei, and Shi]218781349
Wang, X.; Fang, F.; Zhang, X.; Liu, Y.; Wei, L.; Shi, Y.
LSTM-based Short-term Load Forecasting for Building Electricity
Consumption.
In Proceedings of the IEEE 28th International Symposium on Industrial
Electronics, Vancouver, BC, Canada, 12–14 June 2019; pp. 1418–1423.
[Grolinger et al.(2016)Grolinger, L’Heureux, Capretz, and
Seewald]grolinger2016energy
Grolinger, K.; L’Heureux, A.; Capretz, M.A.; Seewald, L.
Energy forecasting for event venues: Big data and prediction
accuracy.
Energy Build. 2016, 112, 222–233.
|
http://arxiv.org/abs/2307.00258v1
|
20230701073937
|
ASASSN-22ak: La Belle au bois dormant in a hydrogen-depleted dwarf nova?
|
[
"Taichi Kato",
"Franz-Josef Hambsch",
"Berto Monard",
"Rod Stubbings"
] |
astro-ph.SR
|
[
"astro-ph.SR",
"astro-ph.HE"
] |
affil:Kyoto
Department of Astronomy, Kyoto University, Sakyo-ku,
Kyoto 606-8502, Japan
[email protected]
affil:GEOS
Groupe Européen d'Observations Stellaires (GEOS),
23 Parc de Levesville, 28300 Bailleau l'Evêque, France
[email protected]
affil:BAV
Bundesdeutsche Arbeitsgemeinschaft für Veränderliche Sterne
(BAV), Munsterdamm 90, 12169 Berlin, Germany
affil:Hambsch
Vereniging Voor Sterrenkunde (VVS), Oostmeers 122 C,
8000 Brugge, Belgium
affil:Monard
Bronberg Observatory, Center for Backyard Astrophysics Pretoria,
PO Box 11426, Tiegerpoort 0056, South Africa
[email protected]
affil:Monard2
Kleinkaroo Observatory, Center for Backyard Astrophysics Kleinkaroo,
Sint Helena 1B, PO Box 281, Calitzdorp 6660, South Africa
affil:Stubbings
Tetoora Observatory, 2643 Warragul-Korumburra Road, Tetoora Road,
Victoria 3821, Australia
[email protected]
§ INTRODUCTION
In the famous fairy tale La belle au bois dormant
(the Beauty in the Sleeping Forest or the Sleeping Beauty),
a princess was cursed by an evil fairy to sleep for a hundred years
before being awakened by a prince <cit.>.
This tale produced one of the world's most famous ballets,
composed by Pyotr Tchaikovsky <cit.>[
The reference refers to the earliest publication of this work
in the form of a score of Aleksandr Ziloti's arrangement for
solo piano according to Tchaikovsky's letter
(<https://en.tchaikovsky-research.net/pages/The_Sleeping_Beauty>).
The premiere at the Mariinsky Theatre was performed in 1890.
]. Similar things appear to have happened in the world
of dwarf novae. The giant outburst and subsequent superoutbursts
in V3101 Cyg = TCP J21040470+4631129
<cit.> could be a signature
of long “dormant” phase before the initial outburst.
MASTER OT J030227.28+191754.5 <cit.>
might be another such example. Here, we report on an instance
of ASASSN-22ak, which may be the first similar case in
a cataclysmic variable (CV) with an evolved core in the secondary.
§ ASASSN-22AK
ASASSN-22ak was discovered as a dwarf nova by
the All-Sky Automated Survey for
Supernovae (ASAS-SN: <cit.>) at g=15.0 on 2022 January 7.[
<https://www.astronomy.ohio-state.edu/asassn/transients.html>.
] The object further brightened and reached the peak of
g=13.2 on 2022 January 8. The object apparently faded
rapidly after this (there was a 6-d gap in observation in
ASAS-SN). When the object was observed again on 2022 January 16
by Gaia (=Gaia22afw)[
<http://gsaweb.ast.cam.ac.uk/alerts/alert/Gaia22afw/>.
], the object faded to G=15.16.
This outburst was announced in VSNET <cit.> by
Denis Denisenko (vsnet-alert 26518)[
<http://ooruri.kusastro.kyoto-u.ac.jp/mailarchive/vsnet-alert/26518>.
]. According to this, this outburst was also detected by
MASTER-OAFA <cit.> at 13.8 mag on 2022 January 9.
The object underwent another outburst
at 15.4 mag on 2022 July 20 detected by one of the authors (RS)
(vsnet-alert 26875)[
<http://ooruri.kusastro.kyoto-u.ac.jp/mailarchive/vsnet-alert/26875>.
] and 16.2 mag on 2022 December 18 (by RS, vsnet-alert 27223)[
<http://ooruri.kusastro.kyoto-u.ac.jp/mailarchive/vsnet-alert/27223>.
]. After these two outbursts, the unusual light curve of
this object received attention (vsnet-alert 27224).[
<http://ooruri.kusastro.kyoto-u.ac.jp/mailarchive/vsnet-alert/27224>.
] The ASAS-SN light curve suggested that all outbursts
were superoutbursts. Although the similarity to V3101 Cyg and
the possibility of an AM CVn star, as judged from the short
recurrence time of long outbursts, were discussed,
the nature of the object remained elusive.
One of the authors (BM) obtained a single-night run during
the 2022 January outburst and a possible period of 0.044 d
was suggested (vsnet-alert 27225).[
<http://ooruri.kusastro.kyoto-u.ac.jp/mailarchive/vsnet-alert/27225>.
] This period, however, did not comfortably fit what is
expected for a WZ Sge star and the reality of the period
remained to be confirmed. During the 2022 December outburst,
one of the authors (FJH) obtained time-resolved photometry,
which also suggested a period of 0.0412 d (vsnet-alert 27243).[
<http://ooruri.kusastro.kyoto-u.ac.jp/mailarchive/vsnet-alert/27243>.
] This suggestion of a period, however, remained unconfirmed
since the object faded soon after these observations
and the amplitudes of the variations were small.
The sudden fading of 1.8 mag (corresponding to more than
2.0 mag d^-1) on 2022 December 29 was sufficient to
convince us that the 0.0412 d, but not its double, is
the true period (vsnet-alert 27258).[
<http://ooruri.kusastro.kyoto-u.ac.jp/mailarchive/vsnet-alert/27258>.
]
These outbursts, however, left us with important lessons,
and we started observations following the detection of
another outburst at 15.2 mag on 2023 April 29 by RS.
FJH obtained time-resolved photometry. The sampling rate,
however, was initially insufficient to detect a period.
After increasing the sampling rate on 2023 May 13,
the detected period was confirmed to be the same as in
the previous outbursts.
The log of observations is summarized in table <ref>.
§ LONG-TERM BEHAVIOR
The long-term light curve of ASASSN-22ak using the survey
data and visual observations by RS is shown in
figure <ref>. During Gaia observations between
2015 and 2021, the object very slowly faded. This trend
was different from V3101 Cyg before the first outburst
<cit.>.
The four outbursts starting from 2022 January are seen
in the right part of the figure. The quiescent brightness
between these outbursts was brighter than in the Gaia observations
before the first outburst. The enlarged light curves of
these outbursts are given in figure <ref>.
Near the termination of the third and fourth outbursts,
there was a short dip (less than 1 d in the third and 2 d
in the fourth) and a rebrightening. The presence of such
a short dip indicates that the long outbursts were indeed
superoutbursts of a system with a short orbital period,
not long outbursts seen in SS Cyg stars.
We should note that the post-outburst observations after
the 2022 December outburst (third panel in figure <ref>)
were biased brighter since aperture photometry could
measure the object only on a limited number of frames.
The true magnitudes should be fainter (see the fourth panel
in figure <ref>, which were observed under more
ideal conditions).
§ SUPERHUMPS
We analyzed the best observed 2023 outburst.
We used locally-weighted polynomial regression (LOWESS: <cit.>)
to remove long-term trends.
The periods were determined using the phase dispersion
minimization (PDM: <cit.>) method, whose errors were
estimated by the methods of <cit.>.
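For readers less familiar with these tools, the following Python sketch (illustrative only; the actual analysis used dedicated period-analysis software) shows LOWESS detrending with statsmodels and a bare-bones PDM period scan.

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

def detrend(t, mag, frac=0.3):
    """Remove the long-term trend with LOWESS and return the residuals."""
    trend = lowess(mag, t, frac=frac, return_sorted=False)
    return mag - trend

def pdm_theta(t, mag, period, n_bins=10):
    """PDM statistic: pooled within-phase-bin variance divided by the total variance."""
    phase = (t / period) % 1.0
    bins = np.floor(phase * n_bins).astype(int)
    total_var = np.var(mag, ddof=1)
    s2, n = 0.0, 0
    for b in range(n_bins):
        m = mag[bins == b]
        if len(m) > 1:
            s2 += np.var(m, ddof=1) * (len(m) - 1)
            n += len(m) - 1
    return (s2 / n) / total_var        # small theta = strong periodic signal

def pdm_scan(t, mag, periods):
    thetas = np.array([pdm_theta(t, mag, p) for p in periods])
    return periods[np.argmin(thetas)], thetas
```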
The result before the dip (2023 June 8, BJD 2460103), after
excluding the scattered data on 2023 May 17 (BJD 2460081–2460082)
is shown in figure <ref>. The period obtained
by this analysis is 0.042876(3) d. The variation of the profiles
in 2023 is shown in figure <ref>.
The amplitude of the variations increased on BJD 2460088
(2023 May 24), which corresponded to a temporary brightening
from the fading trend (see figure <ref>).
Based on the amplitude variation correlated with the overall
trend similar to SU UMa stars <cit.> and the gradual
shift in the phase of peaks, we identified these variations
to be superhumps, not orbital variations.
An analysis of the less well-observed outburst in 2022 December
during the plateau phase is shown in figure <ref>.
Note that these 7-night observations recorded only
the terminal portion of the outburst and the statistics
were not ideal. The phase plot assumes a period of
0.042876 d, which is allowed as one of the aliases as
seen in the PDM analysis.
§ DISCUSSION
§.§ Comparison with hydrogen-rich WZ Sge stars
As we have seen, there was no evidence of an outburst
in ASASSN-22ak before 2022 (at least for seven years based on
ASAS-SN and Gaia observations). The object suddenly became
active and repeated superoutbursts with cycle lengths of
132–188 d. No very similar object has been known. V3101 Cyg
is somewhat analogous in that it repeated four superoutbursts
(up to the time of writing) following the 2019 large
outburst. The case of V3101 Cyg is different in that short
rebrightenings were also observed <cit.>.
The initial (2019) outburst of V3101 Cyg showed a relatively
rapidly fading phase, which is the viscous decay phase
characteristic to WZ Sge stars <cit.>.
The initial (2022 January) outburst of ASASSN-22ak had
a similar feature, reaching ∼2 mag brighter than
the subsequent outbursts and apparently fading rapidly.
The second and third outbursts of ASASSN-22ak had similar,
but less distinct, features. The same feature was almost
lacking in the fourth outburst (figure <ref>).
These features suggest that the first outburst of ASASSN-22ak
was a strong WZ Sge-type one and that the second and third
ones were weaker WZ Sge-type ones,
although early superhumps <cit.> were not
directly observed during any of these outbursts.
The superhump period of 0.042876 d should be close to
the orbital period (see also discussions later).
This period is rather too short for a hydrogen-rich CV.
If ASASSN-22ak is a hydrogen-rich CV,
the orbital period should break the record of 0.0462583 d
in OV Boo <cit.>,
which is considered to be a population II CV.
We consider the possibility of ASASSN-22ak being
a population II CV less likely since the transverse velocity
of ASASSN-22ak is 20% of OV Boo <cit.> (but still with
a 28% 1-σ error in the Gaia parallax) and because
of the difference in the light curve (lack of short rebrightenings,
long durations of superoutbursts compared to supercycles)
from the hydrogen-rich V3101 Cyg.
ASASSN-22ak would then be more likely a hydrogen-depleted CV.
There are two possibilities. It could be either an EI Psc
star (CV with an evolved core in the secondary but still with
considerable surface hydrogen) or an AM CVn star in which
the surface hydrogen of the secondary is almost lost.
We consider these possibilities in more detail.
§.§ Comparison with EI Psc stars in general
EI Psc has an orbital period of 0.0445671(2) d <cit.>
very similar to ASASSN-22ak. EI Psc, however, has a hot, luminous
secondary <cit.>, whose quiescent color
(Gaia GP-RP=+0.88) is much redder than in ASASSN-22ak
(GP-RP=+0.16). Another EI Psc-type object V418 Ser
[superhump period 0.04467(1) d] has GP-RP=+0.52 and this object
shows outbursts similar to hydrogen-rich CVs
<cit.>.
The properties of V418 Ser look different from those of ASASSN-22ak.
CRTS J174033.4+414756 (orbital period 0.045048 d) has
GP-RP=+0.43 and the outburst behavior
<cit.>
appears moderately similar to ASASSN-22ak.
CRTS J174033.4+414756 indeed showed a bright WZ Sge-type outburst
in 2023 February after 5-yr quiescence (vsnet-alert 27373).[
<http://ooruri.kusastro.kyoto-u.ac.jp/mailarchive/vsnet-alert/27373>.
] Not enough time has passed since this outburst, and
it is unknown whether CRTS J174033.4+414756 behaves like
ASASSN-22ak. The known differences between CRTS J174033.4+414756
and ASASSN-22ak are that the former shows superhumps with
much larger amplitudes, which suggests a higher mass ratio
[q=0.077(5) was obtained by <cit.>], and
the redder color in quiescence. Although CRTS J174033.4+414756
would be a good candidate for an already known object having
properties similar to ASASSN-22ak, particularly with a bright
superoutburst after 5-yr quiescence, the secondary in ASASSN-22ak
appears to be fainter and less massive.
§.§ Comparison with CRTS J112253.3-111037
The object most similar to ASASSN-22ak appears to be
CRTS J112253.3-111037 <cit.>. This object
has an orbital period of 0.04530 d and a very small fractional
superhump excess ϵ≡ P_ SH/P_ orb-1,
where P_ SH and P_ orb represent superhump and
orbital periods, respectively. The secondary in
CRTS J112253.3-111037 was undetected in contrast to other
EI Psc stars. The Gaia color GP-RP=+0.10 is also very
similar to that of ASASSN-22ak.
Although P_ SH was reported in <cit.>, this value
is vital to this discussion and we re-analyzed the data
in <cit.>, in which the modern de-trending method was
not yet employed. The resultant period was 0.045409(9) d
(figure <ref>). This value corresponds to
ϵ=0.0024(2). In the treatment by <cit.>,
old ϵ-q calibrations, which did not consider
the pressure effect, were used and they obtained
an exceptionally small q.
Using the modern calibration in table 4 of <cit.>
considering the pressure effect (but calibrated using
hydrogen-rich systems), this ϵ corresponds to
q=0.043(1) assuming stage B superhumps [for superhump stages,
see <cit.>].
There remains a possibility that the observed superhumps
were stage C ones since observations only recorded the final
part of the outburst. The periods of stage B superhumps
are generally longer by 0.5% than those of stage C superhumps
in hydrogen-rich systems <cit.>. If stage B superhumps
were missed and we only observed stage C superhumps, this
q value would be an underestimate. By artificially increasing
the superhump period by 0.5%, the resultant q becomes 0.058(1),
which should be regarded as the upper limit.
In actual WZ Sge stars, stage C tends to be missing
<cit.>, and we consider that the first value
[q=0.043(1)] is expected to be closer to the real one.
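As a quick check of these numbers, the fractional superhump excess and the 0.5% stage-C correction can be reproduced as follows; the ϵ-q calibration itself comes from table 4 of the cited work and is only indicated here by a hypothetical placeholder.

```python
P_orb = 0.04530      # orbital period of CRTS J112253.3-111037 (d)
P_sh  = 0.045409     # re-derived superhump period (d)

eps = P_sh / P_orb - 1.0                       # fractional superhump excess, ~0.0024
eps_if_stage_c = (P_sh * 1.005) / P_orb - 1.0  # assume stage B is 0.5% longer if only stage C was seen

# q would follow from the epsilon-q calibration (Kato 2022, table 4),
# which is not reproduced here:
# q = epsilon_to_q(eps)   # hypothetical placeholder
print(eps, eps_if_stage_c)
```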
CRTS J112253.3-111037 is also similar to ASASSN-22ak
in terms of the low frequency of outbursts <cit.>.
There was no information how the 2010 outburst in
CRTS J112253.3-111037 started due to an ∼50 d
observational gap in the CRTS data <cit.> and
it is unknown whether CRTS J112253.3-111037 showed
a sharp peak or a viscous decay phase.
No repeated superoutbursts like ASASSN-22ak, however, appear to
have been present since then.
It might be interesting to note that ATLAS and ASAS-SN data
show that CRTS J112253.3-111037 showed brightening with
a broad peak reaching g=17.8 around 2022 June 6 (BJD 2459737).
The entire event lasted ∼15 d and this may be similar
to the enhanced quiescent activity in the AM CVn star
NSV 1440 <cit.>, possibly signifying
the similarity to AM CVn stars.
The small amplitude of superhumps (0.05 mag) in
CRTS J112253.3-111037 is also similar to ASASSN-22ak (0.05 mag),
implying a similarly low q in ASASSN-22ak.
<cit.> suggested a possibility that
CRTS J112253.3-111037 had already evolved past its period
minimum based on <cit.> and that its secondary
can be semidegenerate. Although this conclusion was apparently
partly based on q smaller than the one obtained in the present paper,
we agree that both ASASSN-22ak and CRTS J112253.3-111037
are evolving close to AM CVn stars since the properties of
these objects are very different from other EI Psc objects
with similar orbital periods (subsection <ref>).
ASASSN-22ak may have already lost hydrogen and it may even be
an AM CVn star. If this is the case, ASASSN-22ak breaks
the longest record of orbital periods in AM CVn stars showing
a genuine superoutburst [see also the discussion in
<cit.>; superhump period of 0.0404–0.0415 d
in ASASSN-21au = ZTF20acyxwzf
(vsnet-alert 25369;[
<http://ooruri.kusastro.kyoto-u.ac.jp/mailarchive/vsnet-alert/25369>.
] <cit.>)].
Another AM CVn star with a long orbital period
[PNV J06245297+0208207 in 2023 <cit.>:
superhump period 0.035185(8) d
(vsnet-alert 27353[
<http://ooruri.kusastro.kyoto-u.ac.jp/mailarchive/vsnet-alert/27353>.
])]
showed a superoutburst very similar to ASASSN-21au.
This morphology of superoutbursts appears to be common
to AM CVn stars with long orbital periods, and the possibility
that ASASSN-22ak is an AM CVn star might therefore be less likely.
We leave this question open since the outburst properties
were so unusual in ASASSN-22ak.
In any case, spectroscopy of ASASSN-22ak to determine
the hydrogen and helium content and the orbital period
is very much desirable.
The addition of ASASSN-22ak seems to strengthen the idea
that cataclysmic variables could be the dominant
progenitors of AM CVn binaries
<cit.>.
The EI Psc-type objects treated in this paper for comparisons
with ASASSN-22ak are summarized in table <ref>.
The mean superhump amplitude for EI Psc was obtained from
the data in <cit.>.
The superhump amplitudes for V418 Ser and CRTS J174033.4+414756
were from <cit.> and <cit.>, respectively.
Although CRTS J174033.4+414756 showed the initial phase of large
superhump amplitudes <cit.>, no such phase was
recorded in ASASSN-22ak. The q values from ϵ assuming stage B
were obtained by the method in <cit.>.
§.§ Pre-outburst dormancy and repeated superoutbursts
Repeated long superoutbursts with short recurrence times
are the unique feature of ASASSN-22ak. In the case of
(hydrogen-rich) V3101 Cyg, some of post-superoutburst
rebrightenings may have been caused by the matter in the disk
left after the main superoutburst <cit.>.
Repeated superoutbursts appear to be more easily explained
if the mass-transfer rate increased after the initial outburst
<cit.>. This increase in the mass transfer
may either have been caused by irradiation of the secondary
by the initial outburst <cit.>,
or it could have been that the quiescent viscosity of the disk
before the initial outburst was simply so low that a large amount
of mass could accumulate in the disk, and that the mass-transfer
rate and the quiescent viscosity are simply returning to
the normal values of this object after the initial outburst.
In the case of ASASSN-22ak, the initial outburst was
not as strong as in V3101 Cyg, although the peak was bright,
and the mechanism may be different from the case of
V3101 Cyg. In ASASSN-22ak, q would be smaller than in
V3101 Cyg (as inferred from the smaller amplitude of
superhumps and from the analogy with CRTS J112253.3-111037)
and the weaker tidal effect would make it more difficult to
maintain superoutbursts in contrast to V3101 Cyg.
Although there has been a suggestion that a smaller q
can lead to premature quenching of superoutbursts
<cit.>, there is no established
theory of when superoutbursts end. Although this premature
quenching of superoutbursts might explain the repeated
superoutbursts with relatively short intervals, the lack of
post-superoutburst rebrightenings in ASASSN-22ak might be
problematic. It may be that the hydrogen
depletion in the disk of ASASSN-22ak is not as strong as in
AM CVn stars and that long superoutbursts are easier to
maintain than in almost pure helium disks. A combination
of effects of all these circumstances, unusual for
ordinary CVs, should be a challenging target for
theorists working with the disk-instability model.
The pre-outburst dormancy might be easier
to explain in ASASSN-22ak. In contrast to V3101 Cyg,
which is expected to have a fully convective secondary,
the secondary of ASASSN-22ak has an evolved core, so a magnetic dynamo can still operate
<cit.>; such a dynamo is probably necessary
to form the observed AM CVn stars within a reasonable time.
With such a dynamo, the instantaneous mass-transfer rate can
be different from the secular average, as seen in the spread of
absolute magnitudes in CVs above the period gap <cit.>
and the presence of VY Scl stars.
There is also a possibility that the quiescent viscosity
of the disk before the initial outburst was simply very low and
the viscosity increased after the outburst as proposed by
<cit.> for hydrogen-rich WZ Sge stars.
This explanation, however, might face a difficulty
in realizing a very quiet, low-viscosity disk when the secondary
has a seed magnetic field, which may increase
the quiescent viscosity of the disk via the magneto-rotational
instability (cf. <cit.>;
but see also <cit.>).
High and low states in polars (AM Her stars: <cit.>)
may provide additional insight.
EF Eri has a brown-dwarf secondary
<cit.> and a strong
magnetic activity cycle as in CVs above the period gap
is not expected. This object showed (and is still showing)
a long-lasting high state (just like “awakening”)
starting from 2022 December (vsnet-alert 27205).[
<http://ooruri.kusastro.kyoto-u.ac.jp/mailarchive/vsnet-alert/27205>.
] Since polars do not have an accretion disk, storage of mass
in the disk before the active (high) state, as in WZ Sge stars,
is impossible. There could be a reservoir of additional angular
momentum other than the disk, and this might also explain
the dormancy/waking-up phenomena in dwarf novae.
§ ACKNOWLEDGEMENTS
This work was supported by JSPS KAKENHI Grant Number 21K03616.
The authors are grateful to the ASAS-SN, ATLAS and Gaia teams
for making their data available to the public.
We are also grateful to Naoto Kojiguchi for helping downloading
the ZTF and Gaia data and Yusuke Tampo, Junpei Ito and
Katsuki Muraoka for converting the data reported to
the VSNET Collaboration.
This work has made use of data from the Asteroid Terrestrial-impact
Last Alert System (ATLAS) project.
The ATLAS project is primarily funded to search for
near earth asteroids through NASA grants NN12AR55G, 80NSSC18K0284,
and 80NSSC18K1575; byproducts of the NEO search include images and
catalogs from the survey area. This work was partially funded by
Kepler/K2 grant J1944/80NSSC19K0112 and HST GO-15889, and STFC
grants ST/T000198/1 and ST/S006109/1. The ATLAS science products
have been made possible through the contributions of the University
of Hawaii Institute for Astronomy, the Queen's University Belfast,
the Space Telescope Science Institute, the South African Astronomical
Observatory, and The Millennium Institute of Astrophysics (MAS), Chile.
We acknowledge ESA Gaia, DPAC and the Photometric Science Alerts Team
(http://gsaweb.ast.cam.ac.uk/
alerts).
§ LIST OF OBJECTS IN THIS PAPER
objlist.inc
§ REFERENCES
We provide two forms of the references section (for ADS
and as published) so that the references can be easily
incorporated into ADS.
asn22akaph.bbl
asn22ak.bbl.vsolj
|
http://arxiv.org/abs/2306.03171v1
|
20230605182638
|
Note on quantum cellular automata and strong equivalence
|
[
"Carolyn Zhang"
] |
quant-ph
|
[
"quant-ph",
"cond-mat.str-el",
"math-ph",
"math.MP"
] |
Department of Physics, Kadanoff Center for Theoretical Physics, University of Chicago, Chicago, Illinois 60637, USA
In this note, we present some results on the classification of quantum cellular automata (QCA) in 1D under strong equivalence rather than stable equivalence. Under strong equivalence, we only allow adding ancillas carrying the original on-site representation of the symmetry, while under stable equivalence, we allow adding ancillas carrying any representation of the symmetry. The former may be more realistic, because in physical systems especially in AMO/quantum computing contexts, we would not expect additional spins carrying arbitrary representations of the symmetry to be present. Ref. mpu proposed two kinds of symmetry-protected indices (SPIs) for QCA with discrete symmetries under strong equivalence. In this note, we show that the more refined of these SPIs still only has a one-to-one correspondence to equivalence classes of ℤ_N symmetric QCA when N is prime. We show a counter-example for N=4. We show that QCA with ℤ_2 symmetry under strong equivalence, for a given on-site representation, are classified by ℤ^pq where p is the number of prime factors of the on-site Hilbert space dimension and q is the number of prime factors of the trace of the nontrivial on-site ℤ_2 element. Finally, we show that the GNVW index has a formulation in terms of a ℤ_2 SPI in a doubled system, and we provide a direct connection between the SPI formulation of the GNVW index and a second Renyi version of the mutual information formula for the GNVW index.
Note on quantum cellular automata and strong equivalence
Carolyn Zhang
July 31, 2023
========================================================
§ INTRODUCTION
Some unitary operators that preserve locality, mapping local operators to nearby (strictly local) operators, cannot be written as a finite-depth quantum circuit (FDQC). Such strict locality-preserving unitaries, also known as quantum cellular automata (QCA), are peculiar because their actions cannot be spatially restricted. QCA have been studied and classified in low dimensions<cit.>. In 1D, a simple example of a nontrivial QCA is a translation operator. A translation operator preserves locality, mapping an operator O_i on site i to O_i+1 on site i+1. However, it cannot be generated by any local Hamiltonian, and therefore cannot be restricted. In fact, all nontrivial QCA in 1D are generalized translations, and can be classified using a “GNVW index"<cit.> that takes values in the rational numbers<cit.>. In the presence of U(1) symmetry, 1D QCA are completely classified by a ratio of two functions π(z)=a(z)/b(z)<cit.>. For discrete symmetries, the classification depends on the definition of equivalence. As we will discuss in more detail below, stable equivalence leads to a classification given by H^2(G,U(1)) (in addition to translations). On the other hand, strong equivalence leads to a richer classification. Ref. mpu studied this classification and suggested a set of symmetry-protected indices (SPIs), one for each symmetry group element. Roughly speaking, the difference between stable equivalence and strong equivalence is that for the former, we allow tensoring with ancillas carrying any finite-dimensional representation of the symmetry, while for the latter we only allow tensoring with ancillas carrying the original on-site representation of the symmetry. Ref. mpu also provided a way to measure these SPIs in interferometric experiments.
For QCA with U(1) symmetry, the physical meaning of the charge part of π(z), given by π̃(z), is clear: it corresponds to the net U(1) charge transported in the positive direction upon application of the QCA. The index can be therefore detected by measuring charge transport<cit.>. The GNVW index has been interpreted as net transport of quantum information<cit.>, but it is less obvious how to measure this transport. Ref. tracking (see also Ref. ranard2022) showed that this index can be measured using quantum mutual information if we consider a setting where a subset of the spins in the spin chains are initially maximally entangled with a partner ancilla. Specifically, the mutual information quantity measures the change in entanglement across a cut, due to the entanglement between the spins and the ancillas. While this gives some physical intuition for the GNVW index, their mutual information formula was not directly derived from the original GNVW index formula.
In this note, we present the following collection of results related to the classification of QCA with discrete symmetries, under strong equivalence:
* A proof that the refined SPI of Ref. mpu completely classifies ℤ_N symmetric QCA under strong equivalence when N is prime, and a counterexample for N=4.
* A derivation that ℤ_2 symmetric QCA are classified by ℤ^pq where p,q are the number of prime factors of the on-site Hilbert space dimension and trace of the nontrivial on-site ℤ_2 element respectively.
* A connection between the GNVW index and the ℤ_2 SPI of a doubled system.
* A direct connection between the SPI formulation of the GNVW index and a second Renyi version of the mutual information formula in Ref. tracking for the GNVW index, when the ancillas are initially maximally entangled with the edge spins. Note that there is an alternate derivation of this result using tensor network methods in the Supplemental material of Ref. gong2021chaos.
§ SETTING AND DEFINITIONS
We consider a periodic chain of bosonic d-state spins in 1D evolving under a QCA U, that respects a global unitary symmetry G. For simplicity, we will only consider G=ℤ_N in this work. In this context, we will proceed with some definitions that will be useful for specifying the problems we would like to solve.
§.§ QCA and strong equivalence
We define a QCA as a unitary operator that takes any operator fully supported on site i to an operator supported strictly on [i-ξ,i+ξ], where ξ is a bounded operator spreading length. We define two 1D QCA U and U' as equivalent if they differ by a 1D FDQC W:
U'=W· U.
Without loss of generality (for 1D), we can assume that W consists of two layers of local unitaries, where in each layer the unitaries are all supported on disjoint, bounded intervals. We say that U is G symmetric if it commutes with the global symmetry operators U_g for all g∈ G:
[U,U_g]=0 ∀ g∈ G.
Here, U_g=∏_i∈Λμ_g,i is the unitary representation of G for the entire system and μ_g,i is the on-site representation of G. We assume that the on-site representation is identical on each site. In the presence of symmetry, U'∼ U if they differ by a G symmetric FDQC, which is an FDQC built out of gates that each individually commute with U_g for all g∈ G.
Equivalence relations in the presence of symmetries are usually defined using stable equivalence, which allows ancillas carrying arbitrary representations of G to be added to each site. Ancillas are degrees of freedom that can transform nontrivially under the symmetry but evolve trivially under U, so the entire system is described by U_tot=U⊗1_a, where 1_a acts on the ancillas. Specifically, under stable equivalence, U∼ U' if and only if U⊗1_a=W· (U'⊗1_a') where W is a G symmetric FDQC and 1_a and 1_a' act on two different sets of ancillas. Under stable equivalence, 1D QCA with symmetry have been classified by cohomology<cit.>. The classification is given by the GNVW index<cit.> along with an element of H^2(G,U(1)).
We may also study the case where we only allow ancillas with the same on-site representation of G as the original sites. As mentioned earlier, this representation is given by μ_g. This definition of equivalence may be considered more natural because in a real physical system, one would not expect additional spins with arbitrary representations of G to be present. In Ref. mpu, this definition of equivalence was called strong equivalence. This is the definition of equivalence we will use in the rest of this note, unless otherwise specified.
For systems without symmetry and systems with U(1) symmetry, we do not expect there to be a difference between the stable equivalence classification and the strong equivalence classification. However, for systems with discrete symmetries, aspects of the strong equivalence classification are still not understood.
§.§ GNVW index
In the absence of symmetry, QCA in 1D are classified by the GNVW index<cit.>. Here, we will review the GNVW index because it will come up frequently later in this note.
The GNVW index measures net translation of operators in the spin chain. Specifically, to each site in the spin chain, we can associate an operator algebra, which is the algebra of operators fully supported on that site. Using the tensor product structure of the Hilbert space, we can then obtain the operator algebra for an interval A of sites. We denote this algebra by 𝒜. We define an overlap function η(𝒜,ℬ) of two operator algebras 𝒜 and ℬ as
η(𝒜,ℬ)=√(∑_O_a∈𝒜, O_b∈ℬ|tr(O_a^† O_b)|^2),
where the sum runs over an orthonormal basis of operators in 𝒜, ℬ. In (<ref>), the lowercase symbol “tr” denotes a normalized trace defined by tr(1) = 1. η(𝒜,ℬ) simply counts the number of operators contained in both 𝒜 and ℬ. It is easy to see that η(𝒜,𝒜)=d^|A| and η(𝒜,ℬ)=1 if A∩ B=∅. Using this overlap function, the GNVW index is given by<cit.>[The index is sometimes defined as the logarithm of (<ref>).]
ind(U)=η(U^†𝒜U,ℬ)/η(𝒜,U^†ℬU),
where 𝒜, ℬ are operator algebras of two adjacent intervals A and B. The only constraint on A and B is that they must be at least twice the operator spreading length of U. The intuition behind ind(U) is that it gives the change in the overlap of the operator algebras 𝒜 and ℬ due to the net chiral action of U. For example, if U is a translation by a single lattice site, then ind(U)=d.
§.§ Symmetry-protected indices
To classify G symmetric QCA, Ref. mpu proposed using a set of symmetry-protected indices (SPIs), consisting of an index for each group element. The SPI for a group element g can be nontrivial if Tr(μ_g)≠ 0, where μ_g is the on-site representation of the symmetry. To define the SPIs, we use the action of U on the global symmetry operator U_g restricted to an interval I with an even number of sites. Because U_g is a product of on-site operators, this restriction is unambiguous. We denote the restriction of U_g to I by
U_g,I=U_g,L⊗ U_g,R,
where U_g,L and U_g,R are supported on the left and right halves of I respectively. The action of U on U_g,I is given by
U^† U_g,IU=L_g⊗ R_g,
where R_g and L_g are supported to the right and the left of the midpoint of I respectively. Although U_g,I∼ L_g⊗ R_g as representations (meaning their characters match on all group elements), U_g,R and R_g may not be equivalent as representations. Notice that there is an important ambiguity in defining R_g and L_g, because R_g can carry a 1D representation of G that is canceled by L_g. We prove in Appendix <ref> the following important theorem:
Two G symmetric QCA U and U' are equivalent if and only if their corresponding representations R_g and R_g' as defined in (<ref>) differ only by a 1D representation u_g of G and tensoring with on-site representations μ_g.
In particular, R_g=u_gμ_g^⊗ N for some nonnegative integer N if and only if U is a G symmetric FDQC.
Note that this result was derived using a matrix product state formalism in Ref. mpu. However, the problem of finding concrete topological invariants for U remains unsolved. Theorem <ref> indicates that we should seek quantities that are insensitive to R_g→ u_gR_g where u_g is a 1D representation of G. The symmetry-protected indices (SPIs), defined in Ref. mpu were designed to be insensitive to u_g. These SPIs, denoted by {ind_g}, are given by[Ref. mpu uses the logarithm of the indices presented here, which are additive rather than multiplicative under stacking.]
ind_g(U)=ind(U)·√(|Tr(R_g)/Tr(L_g)|),
where ind(U) is the GNVW index of U and the trace in the numerator and denominator are all taken over the same space dimension (so Tr(R_0)=Tr(L_0), where 0 is the trivial group element). Note that ind_0(U)=ind(U).
While {ind_g(U)} are certainly invariant for QCA that are equivalent, they are not complete. In other words, they can match for two QCA that do not differ by a G symmetric FDQC. A simple example given in the supplemental material of Ref. mpu uses G=ℤ_3 and an on-site representation μ_g=diag(1,e^2π i/3,e^2π i/3). In this case, we can have R_g=L_g=diag(1,e^4π i/3,e^4π i/3) because L_g⊗ R_g=μ_g⊗μ_g. Clearly, ind_g=1 for all g∈ G, even though R_g is not equivalent to U_g,R=μ_g.
§.§ Refined symmetry-protected indices
To remedy the fact that {ind_g(U)} are not complete, Ref mpu also proposed a set of refined SPIs (see Eq. 89 in the supplemental material). These refined indices are defined as
rind_g(U)=ind(U)·[Tr(R_g)/Tr(U_g,R)]^d_g,
where d_g is the order of the group element g∈ G. Again, the trace in the numerator and that in the denominator are taken over the same Hilbert space, so rind_1(U)=ind(U). The purpose of taking the d_g power is to remove the ambiguity of the 1D representation u_g. For the ℤ_3 example above, it is easy to see that rind_g=-1≠ 1 for each of the two nontrivial elements of ℤ_3.
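A short numerical check of this ℤ_3 example (our own sketch, built only from the definitions above, with the GNVW index set to its trivial value) confirms that the plain SPIs equal 1 for all group elements while the refined SPIs equal -1 for the two nontrivial elements.

import numpy as np

w = np.exp(2j * np.pi / 3)
mu = np.diag([1, w, w])           # on-site representation for the generator g = 1
R1 = np.diag([1, w**2, w**2])     # R_g for the generator; in this example L_g = R_g
ind_u = 1                         # trivial GNVW index

for g in (1, 2):
    Rg = np.linalg.matrix_power(R1, g)
    Lg = Rg                       # L_g = R_g here
    mug = np.linalg.matrix_power(mu, g)
    d_g = 3                       # order of either nontrivial element of Z_3
    ind_g = ind_u * np.sqrt(abs(np.trace(Rg) / np.trace(Lg)))
    rind_g = ind_u * (np.trace(Rg) / np.trace(mug))**d_g
    print(g, round(float(ind_g), 6), np.round(rind_g, 6))
# prints ind_g = 1.0 but rind_g = -1 (up to rounding) for g = 1, 2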
§ LIMITATIONS OF REFINED SYMMETRY-PROTECTED INDICES
With the setting and previous results established, we will now introduce some new results. First, we will show that the refined SPIs {rind_g(U)} only completely classify ℤ_N symmetric QCA when N is prime, and give a counterexample for N=4.
In order for the refined SPIs to completely classify ℤ_N symmetric QCA, it must be true that
Tr(R_g)^d_g=Tr(R_g')^d_g iff R_g=u_gR_g',
where R_g and R_g' are representations of the symmetry and u_g is a 1D representation of the symmetry. The “if" direction follows immediately. For the “only if" direction, let us write
R_g=⊕_ra_ru_r,g R_g'=⊕_rb_ru_r,g.
Here, a_r is the number of 1D irreducible representations e^2π ir/N of ℤ_N in R_g, and u_r,g=(u_r)^g. Let us assume that Tr(R_g)^d_g=Tr(R_g')^d_g. For (<ref>) to be true, we must have a_r=b_r-r' where r' depends on the 1D representation u_g.
We will now evaluate Tr(R_0)^1 and Tr(R_1)^N and compare them to Tr(R_0')^1 and Tr(R_1')^N. Here, R_1 is the generator of ℤ_N. Setting Tr(R_0)^1=Tr(R_0')^1, we get
∑_ra_r=∑_rb_r.
Next, setting Tr(R_1)^N=Tr(R_1')^N, we get
∑_ra_ru_r,1=v_1∑_rb_ru_r,1,
where v_1^N=1. Suppose that v_1=u_r',1. Then v_1 shifts u_r,1→ u_r+r',1, so
∑_ru_r,1(a_r-b_r-r')=0.
For prime N, only the sum of all the different 1D reps gives zero (i.e. ∑_ru_r,1=0), so a_r-b_r-r' must be the same nonnegative integer n∈ℕ for all r. But if a_r-b_r-r'=n for all r and n≠ 0, then (<ref>) is not satisfied. So a_r=b_r-r' for all r.
This argument fails for N not prime. For example if we take N=4, there are two ways to get 0, either by having equal numbers of 1 and -1 or equal numbers of i and -i. One can then construct representations for which Tr(R_g)^d_g=Tr(R_g')^d_g for all group elements g, but R_1≠ u_1R_1'.
One example for N=4 is R_1=diag(1,1,-1,i,-i) and R_1'=diag(1,i,i,-i,-i) (here we added an extra trivial rep so that the character is not zero for any group element). One can check that the characters for g=(0,1,2,3) are (5,1,3,1) for R_g and (5,1,-3,1) for R_g', so they are different representations, differing by more than just a 1D rep u_g. However, their characters to the d_gth power are equal for all g.
In summary, we find that even the refined indices are not fine enough to completely classify QCA with ℤ_N symmetry under strong equivalence. They only completely classify ℤ_N symmetric QCA when N is prime.
§ CLASSIFICATION AND INVARIANTS OF ℤ_2-SYMMETRIC QCA
Despite the known abstract statement of the classification given by Theorem <ref>, concrete topological invariants for G symmetric QCA are difficult to find, as is clear from the discussion above. In this section, we will study the simple case where G=ℤ_2, for which we can remove the ambiguity of the 1D representation u_g by simply taking an absolute value. We then completely classify QCA with ℤ_2 symmetry, by finding all allowed values for the ℤ_2 SPI, given an on-site representation μ_g of ℤ_2. We will then show that (1) the GNVW index can be interpreted as a ℤ_2 SPI in a doubled system and (2) the ℤ_2 SPI links the original GNVW index and a formulation of it in terms of mutual information, given in Ref. tracking,ranard2022.
§.§ Classification of ℤ_2 symmetric QCA
In this section, we work out the complete classification of 1D QCA with ℤ_2 symmetry under strong equivalence, given an on-site representation μ_g of the ℤ_2 symmetry.
The only irreducible representations of ℤ_2 is the trivial representation (1) and the sign representation (-1). Therefore, we can remove the ambiguity of the 1D representation of G attached to R_1, where g=1 is the generator of ℤ_2, by taking the absolute value of Tr(R_1). We define
π_0(U)=ind(U) π_1(U)=π_0(U)|Tr(R_1)/Tr(U_1,R)|,
where, as always, the trace in the numerator and denominator are over the same Hilbert space. We will refer to π_1(U) as the ℤ_2 SPI from this point onward.
Suppose that the spectrum of μ_1 has the eigenvalue +1 with multiplicity a_0 and the eigenvalue -1 with multiplicity a_1. Then the representation of G given by μ_g is completely defined by
Tr(μ_0) =d=a_0+a_1
Tr(μ_1) =χ=a_0-a_1.
We now claim that the complete classification of QCA with ℤ_2 symmetry is given by all the vectors (π_0(U),π_1(U)) with components π_0(U) and π_1(U) satisfying
d^N_1π_0(U) =α_0+α_1
d^N_2/π_0(U) =β_0+β_1
χ^N_1π_1(U) =α_0-α_1
χ^N_2/π_1(U) =β_0-β_1,
where N_1,N_2,α_0,α_1,β_0, and β_1 are all nonnegative integers. We will first provide the classification obtained from (<ref>), and then we will justify (<ref>).
§.§.§ Classification result
If χ=0, then (<ref>) says that
π_1(U)=(α_0-α_1)/χ^N_1=χ^N_2/(β_0-β_1).
Therefore, for π_1(U) to be defined, it must be 1. In this case, all that remains is the no-symmetry GNVW classification, in agreement with Ref. mpu. This classification is simply given by ℤ^p where p is the number of prime factors of d. Notice that π_1(U)=1 means that α_0=α_1 and β_0=β_1, so α_0+α_1 and β_0+β_1 are always even. However, since d is also even for G=ℤ_2, d^N_1π_0(U) and d^N_2/π_0(U) in (<ref>) are also even for sufficiently large N_1 and N_2, even if π_0(U) is odd.
Suppose that χ≠ 0, giving extra constraints on the allowed π_1(U). For simplicity first consider the case where π_0(U)=1 so these phases are only nontrivial in the presence of ℤ_2 symmetry. Since H^2(ℤ_2,U(1))=ℤ_1, the phases studied here are in fact only nontrivial under strong equivalence. Eq. <ref> simplifies to
α_0 =[χ^N_1π_1(U)+d^N_1]/2 α_1=d^N_1-α_0
β_0 =[χ^N_2/π_1(U)+d^N_2]/2 β_1=d^N_2-β_0.
It is not hard to see that for any π_1(U) with the same prime factors as χ, one can always find sufficiently large N_1 and N_2 that give nonnegative integer values for α_0,α_1,β_0, and β_1. In particular, notice that if d is odd, then χ and π_1(U) must also be odd, so χ^N_1π_1(U)+d^N_1 and χ^N_2/π_1(U)+d^N_2 are both divisible by two. On the other hand, if d is even, then χ must also be even, so χ^N_1π_1(U)+d^N_1 and χ^N_2/π_1(U)+d^N_2 are also divisible by two for sufficiently large N_1,N_2. The enrichment of the classification due to ℤ_2 symmetry is therefore ℤ^q where q is the number of prime factors of χ. Combining this with the classification in the absence of ℤ_2 symmetry, we obtain a total classification of ℤ^pq.
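The constraints above are simple enough to scan by computer. The following sketch (our own; the search bound n_max and the example values of d, χ, π_0, π_1 are arbitrary choices for illustration) looks for nonnegative integer solutions and reproduces the pattern just described: a value of π_1(U) is admitted exactly when its prime factors are among those of χ.

from fractions import Fraction

def realizable(d, chi, pi0, pi1, n_max=12):
    # Scan for nonnegative integers N1, N2, alpha_0, alpha_1, beta_0, beta_1 with
    # d^N1*pi0 = a0+a1, chi^N1*pi1 = a0-a1, d^N2/pi0 = b0+b1, chi^N2/pi1 = b0-b1.
    pi0, pi1 = Fraction(pi0), Fraction(pi1)
    for n1 in range(n_max):
        a0 = (d**n1 * pi0 + chi**n1 * pi1) / 2
        a1 = (d**n1 * pi0 - chi**n1 * pi1) / 2
        if a0.denominator != 1 or a1.denominator != 1 or a0 < 0 or a1 < 0:
            continue
        for n2 in range(n_max):
            b0 = (d**n2 / pi0 + chi**n2 / pi1) / 2
            b1 = (d**n2 / pi0 - chi**n2 / pi1) / 2
            if b0.denominator == 1 and b1.denominator == 1 and b0 >= 0 and b1 >= 0:
                return dict(N1=n1, N2=n2, a0=int(a0), a1=int(a1), b0=int(b0), b1=int(b1))
    return None

# Example on-site representation mu_1 = diag(1, 1, -1), i.e. d = 3 and chi = 1:
print(realizable(3, 1, 1, 1))     # trivial class: a solution exists
print(realizable(3, 1, 3, 1))     # the class containing a one-site translation of the mu-spins: a solution exists
print(realizable(3, 1, 1, 2))     # pi_1 = 2 has a prime factor not in chi = 1: None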
§.§.§ Justification of (<ref>)
We now show that Eq. <ref> corresponds to all the distinct equivalence classes, labeled by (π_0(U),π_1(U)), realizable in the system. In other words, we will prove that (1) if U is a ℤ_2 symmetric QCA, then (<ref>) are satisfied and (2) if (<ref>) are satisfied, then we can construct a corresponding ℤ_2 symmetric QCA.
For (1), note that if U is a ℤ_2 symmetric QCA, then
U^† U_g U=U_g
This means that
Spec(U_g,LU_g,R)=Spec(L_gR_g).
Let |I|=N_1+N_2. Then (<ref>) means that
d^(N_1+N_2) =(α_0+α_1)(β_0+β_1)
χ^(N_1+N_2) =(α_0-α_1)(β_0-β_1).
Identifying α_0 and β_0 with the number of +1 eigenvalues of L_g and R_g respectively, and α_1 and β_1 with the number of -1 eigenvalues of L_g and R_g, we see that (<ref>) follows from (<ref>), with nonnegative α_0,α_1,β_0, and β_1. Therefore, (<ref>) implies (<ref>).
For (2), we will construct a ℤ_2 symmetric QCA given α_0,α_1,β_0, and β_1. To begin, we cluster groups of 2N_1+N_2 sites into supersites, with Hilbert space
ℋ^2N_1+N_2=ℋ^N_1⊗ℋ_α⊗ℋ_β
where the representation of the ℤ_2 symmetry is given by μ_g^N_1 on ℋ^N_1, and L_g and R_g on ℋ_α and ℋ_β respectively. Now we consider a QCA that performs a unit supersite translation on the ℋ_α sites, and a unit supersite translation on the ℋ^N_1 sites in the opposite direction, and does nothing to the ℋ_β sites. This would give
π_0(U)=(α_0+α_1)/d^N_1 π_1(U)=(α_0-α_1)/χ^N_1,
as desired.
§.§ The GNVW index in terms of a ℤ_2 SPI
In this section, we show that the GNVW index ind(U) can be interpreted in terms of a ℤ_2 SPI for the ℤ_2 SWAP symmetry of U⊗ U acting in a doubled system. We begin with a useful formulation of the overlap function η(𝒜,ℬ) derived in Ref. flows. There, it was shown that in a doubled system with 2|Λ| total sites, we can write the overlap function in terms of a SWAP operator between the two spin chains. By SWAP operator, we simply mean an operator with the following action:
SWAP_i,j^†(O_i⊗ O_j')SWAP_i,j=(O_i'⊗ O_j),
where O_i is an operator supported on site i and O_j is the same operator translated to site j. Similarly, O_j' is supported on site j and O_i' is the same operator, but translated to site i. Any SWAP operator is unitary and order two, and hence is also Hermitian. Denoting the two spin chains by Λ_1 and Λ_2, with A_1,B_1⊂Λ_1 and A_2,B_2⊂Λ_2, we have<cit.>
η (𝒜,ℬ)=d^(N_A+N_B)/2/d^N
×√(Tr(SWAP_A_1,A_2SWAP_B_1,B_2)).
To obtain η(U^†𝒜U,ℬ), we define an operator U_1,2=U⊗ U that acts as the QCA U on both Λ_1 and Λ_2:
η (U^†𝒜U,ℬ)=d^(N_A+N_B)/2/d^N
×√(Tr(U_1,2SWAP_A_1,A_2U_1,2^†SWAP_B_1,B_2)).
Since SWAP_A,B^2=1 for any A and B, it can be considered the generator of a ℤ_2 symmetry. SWAP_Λ_1,Λ_2, which exchanges the two spin chains Λ_1 and Λ_2, is a ℤ_2 symmetry of U_1,2 because it commutes with U_1,2:
U_1,2^†SWAP_Λ_1,Λ_2U_1,2=SWAP_Λ_1,Λ_2.
(<ref>) and the fact that U_1,2 is locality preserving means that it must act on SWAP_A_1,A_2 over some interval A=A_1∪ A_2 as
U_1,2^† (SWAP_A_1L,A_2L⊗SWAP_A_1R,A_2R)U_1,2
=Y_AL(SWAP_A_1L,A_2L⊗SWAP_A_1R,A_2R)Y_AR,
where Y_AL and Y_AR are local operators on the left and right ends of A, as shown in Fig. <ref>. The L and R subscripts indicate the left and right halves of the interval A. Writing the GNVW index given in (<ref>) using (<ref>), and then using (<ref>), we obtain
ind(U) =√(Tr[Y_AL(SWAP_A_1L,A_2L⊗SWAP_A_1R,A_2R)Y_AR(SWAP_B_1L,B_2L⊗SWAP_B_1R,B_2R)]/Tr[(SWAP_A_1L,A_2L⊗SWAP_A_1R,A_2R)Y_BL(SWAP_B_1L,B_2L⊗SWAP_B_1R,B_2R)Y_BR])
=√(Tr(Y_ALSWAP_A_1L,A_2L)Tr(SWAP_A_1R,A_2RY_ARSWAP_B_1L,B_2L)Tr(SWAP_B_1R,B_2R)/Tr(SWAP_A_1L,A_2L)Tr(SWAP_A_1R,A_2RY_BLSWAP_B_1L,B_2L)Tr(SWAP_B_1R,B_2RY_BR)),
where the trace is taken over the same space in the numerator and the denominator. In the second line, we split the trace over tensor products into products of traces.
Let us first use some identities to simplify the middle terms of the numerator and denominator in (<ref>). Notice that, since U_1,2^† (SWAP_A_1,A_2⊗SWAP_B_1,B_2)U_1,2=Y_AL(SWAP_A_1,A_2⊗SWAP_B_1,B_2)Y_BR, the following identity holds:
Y_AR=Y_BL^†.
This means that
Tr(SWAP_A_1R,A_2RY_ARSWAP_B_1L,B_2L)
=Tr(SWAP_A_1R,A_2RY_BL^†SWAP_B_1L,B_2L).
Next, using the fact that SWAP_A_1R,A_2R and SWAP_B_1L,B_2L are Hermitian and mutually commute, we obtain
Tr(SWAP_A_1R,A_2RY_ARSWAP_B_1L,B_2L)
=Tr(SWAP_B_1L,B_2L^† Y_BL^†SWAP_A_1R,A_2R^†)
=Tr(SWAP_A_1R,A_2RY_BLSWAP_B_1L,B_2L)^*
It is convenient to write
Tr (SWAP_A_1R,A_2RY_ARSWAP_B_1L,B_2L)
=|Tr(SWAP_A_1R,A_2RY_ARSWAP_B_1L,B_2L)|e^iθ
Tr (SWAP_A_1R,A_2RY_BLSWAP_B_1L,B_2L)
=|Tr(SWAP_A_1R,A_2RY_ARSWAP_B_1L,B_2L)|e^-iθ,
so that
Tr(SWAP_A_1R,A_2RY_ARSWAP_B_1L,B_2L)/Tr(SWAP_A_1R,A_2RY_BLSWAP_B_1L,B_2L)=e^2iθ.
Substituting this into (<ref>), we obtain
ind(U)
=√(Tr(Y_ALSWAP_A_1L,A_2L)Tr(SWAP_B_1R,B_2R)/Tr(SWAP_A_1L,A_2L)Tr(SWAP_B_1R,B_2RY_BR)e^2iθ).
To further simplify the above expression, consider U_1,2 acting on SWAP_A_1,A_2⊗SWAP_B_1,B_2:
U_1,2^† (SWAP_A_1,A_2⊗SWAP_B_1,B_2)U_1,2
=(Y_ALSWAP_A_1,A_2)⊗(SWAP_B_1,B_2Y_BR).
Taking the trace of both sides, we obtain
Tr(SWAP_A_1L,A_2L)Tr(SWAP_B_1R,B_2R)
=Tr(Y_ALSWAP_A_1L,A_2L)Tr(SWAP_B_1R,B_2RY_BR),
where we dropped the common factor of Tr(SWAP_A_1R,A_2R)Tr(SWAP_B_1L,B_2L) on both sides. It follows that
Tr(Y_ALSWAP_A_1L,A_2L)/Tr(SWAP_A_1L,A_2L)=Tr(SWAP_B_1R,B_2R)/Tr(SWAP_B_1R,B_2RY_BR),
so we can rewrite the index as
ind(U)=√((Tr(Y_ALSWAP_A_1L,A_2L)/Tr(SWAP_A_1L,A_2L))^2e^2iθ).
In order for this quantity to be real, we must have
Tr(Y_ALSWAP_A_1L,A_2L)/Tr(SWAP_A_1L,A_2L)=|Tr(Y_ALSWAP_A_1L,A_2L)/Tr(SWAP_A_1L,A_2L)|e^-iθ.
This is consistent with Y_AL and Y_AR carrying opposite phases, since we can add a phase e^iθ to Y_AR as long as it cancels with the phase we add to Y_AL, so that
Tr (Y_ALSWAP_A_1L,A_2L)Tr(SWAP_A_1R,A_2RY_AR)
=Tr(SWAP_A_1L,A_2L)Tr(SWAP_A_1R,A_2R).
Putting this together, we obtain
ind(U) =|Tr(Y_ALSWAP_A_1L,A_2L)/Tr(SWAP_A_1L,A_2L)|
=|Tr(SWAP_B_1R,B_2R)/Tr(SWAP_B_1R,B_2RY_BR)|,
which is precisely π_0(U⊗ U)/π_1(U⊗ U) for the ℤ_2 symmetry generated by SWAP_Λ_1,Λ_2. Since π_0(U⊗ U)=ind(U)^2, we have
ind(U)=π_1(U⊗ U).
In addition, using (<ref>), we have
1/ind(U)=|Tr(R_1)/Tr(U_1,R)|,
where R_1 is defined for U_1,2 acting on the doubled system (not U).
As a sanity check, suppose that U acts as a translation by a single site to the right on a chain of d-state spins. Then Y_AL is a SWAP operator on the leftmost site in A and
Tr(Y_ALSWAP_A_1L,A_2L)/Tr(SWAP_A_1L,A_2L)=d^2d^(N_AL-1)/d^N_AL=d,
where N_AL=1/2|A_1|=1/2|A_2|. Alternatively, Y_BR is a SWAP operator on one additional site to the right of B_R, so
Tr(SWAP_B_1R,B_2R)/Tr(SWAP_B_1R,B_2RY_BR)=d^N_BRd^2/d^(N_BR+1)=d,
as expected.
Note that the above derivation suggests a way to measure π_1(U) for a general ℤ_2 symmetric QCA:
π_0(U)/π_1(U)=√(Tr(U^† U_1,AU U_1,B)/Tr(U_1,AU^† U_1,BU)),
where A and B are adjacent intervals and U_1,A and U_1,B are the symmetry actions for the nontrivial group element on these intervals.
§.§ Mutual information formulation of the GNVW index
We will now show that (<ref>) and a second Renyi version of the mutual information formula for the GNVW index given in Ref. tracking (see also Ref. ranard2022) are equal.
The setup in Ref. tracking, as shown in Fig. <ref>, consists of U acting on a periodic spin chain whose sites we label [-2N+1,2N], with site -2N identified with site 2N. Note that Ref. tracking considered a 2D disk evolving under a Floquet circuit, but in this work we will only consider a 1D spin chain consisting of the spins near the edge of the disk, evolving under the effective edge unitary of the 2D Floquet circuit. We label the left half of the spin chain as A=[-2N+1,0] and the right half of the spin chain as B=[1,2N]. In addition, we include a chain of ancillas with the same local Hilbert space as the original sites, each maximally entangled with a state on spin chain, on the interval [-N+1,N]. The density matrix for the entire system including the ancillas has the form
ρ=ρ_[-2N+1,-N]⊗ρ_[-N+1,N]^a⊗ρ_[N+1,2N],
where ρ_[-2N+1,-N] and ρ_[N+1,2N] are density matrices of the spins in the spin chain in a product state and ρ_[-N+1,N]^a is the density matrix describing each spin on the chain maximally entangled with its partner ancilla. The action of U on ρ gives
U^†ρ U=ρ̃.
Note that we evolve the density matrix using U^†ρ U rather than with U and U^† switched, because we use the convention that a translation in the positive direction moves operators in the positive direction and states in the opposite direction. This is also the convention used in Refs. u1floquet,flows.
In this setup, Ref. tracking proposed the following equation for logind(U):
ν =logind(U)
=ℐ(a_A,ρ_B)-ℐ(a_B,ρ_A).
Here, a_A(a_B) are the reduced density matrices of the ancillas associated with sites [-N+1,0]([1,N]) and ρ_A(ρ_B) are the reduced density matrices of the spins on sites [-2N+1,0]([1,2N]). Specifically, a_A is the reduced density matrix obtained by starting with ρ̃ and tracing out all the spins and ancillas other than the ancillas paired with spins on sites [-N+1,0]. The mutual information ℐ(A,B) between any two sets of sites A and B is defined as
ℐ(A,B)=1/2[S(ρ_A)+S(ρ_B)-S(ρ_A∪ B)].
Using the complementarity property of entanglement, (<ref>) simplifies to<cit.>
ν=1/2[S(ρ_B)-S(ρ_A)].
We will now relate a second Renyi version of this quantity to (<ref>). We will use the second Renyi entropy because it can be conveniently written using a SWAP operator on a doubled system<cit.>:
S_2(ρ_A) =-logTr(ρ_A^2)
=-logTr(SWAP_A_1,A_2(ρ⊗ρ)),
where SWAP_A_1,A_2 acts on two identical copies of the original system. In terms of Renyi entropies, (<ref>) reads
ν=1/2logTr(SWAP_A_1,A_2(ρ̃⊗ρ̃))/Tr(SWAP_B_1,B_2(ρ̃⊗ρ̃)).
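Before continuing the derivation, the SWAP-operator expression for the second Renyi entropy above is easy to verify numerically. The following minimal sketch (our own; the subsystem dimensions and the random state are arbitrary) compares the direct evaluation of -log Tr(ρ_A^2) with the doubled-system trace.

import numpy as np

rng = np.random.default_rng(0)
dA, dB = 2, 3                                   # subsystem A and its complement B
dim = dA * dB

# Random density matrix rho on A x B.
M = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
rho = M @ M.conj().T
rho /= np.trace(rho)

# Direct evaluation: S_2(rho_A) = -log Tr(rho_A^2).
rho_A = np.trace(rho.reshape(dA, dB, dA, dB), axis1=1, axis2=3)
S2_direct = -np.log(np.real(np.trace(rho_A @ rho_A)))

# SWAP-trick evaluation on two copies: SWAP_A exchanges the A factors of copy 1 and copy 2.
SWAP_A = np.zeros((dim * dim, dim * dim))
for a1 in range(dA):
    for b1 in range(dB):
        for a2 in range(dA):
            for b2 in range(dB):
                row = (a2 * dB + b1) * dim + (a1 * dB + b2)   # A labels exchanged
                col = (a1 * dB + b1) * dim + (a2 * dB + b2)
                SWAP_A[row, col] = 1.0
S2_swap = -np.log(np.real(np.trace(SWAP_A @ np.kron(rho, rho))))

print(S2_direct, S2_swap)     # the two values agree to numerical precision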
Using cyclicity of the trace, we obtain
Tr (SWAP_A_1,A_2(ρ̃⊗ρ̃))
=Tr(U_1,2SWAP_A_1,A_2U_1,2^† (ρ⊗ρ)).
ν is additive under composition of QCA, so ν→-ν for U→ U^†. Therefore, we can write ind(U) from (<ref>) as
ind(U)=√(Tr(U_1,2^†SWAP_B_1,B_2U_1,2(ρ⊗ρ))/Tr(U_1,2^†SWAP_A_1,A_2U_1,2(ρ⊗ρ))).
Now we can use (<ref>) to obtain
Tr (U_1,2^†SWAP_B_1,B_2U_1,2(ρ⊗ρ))
=Tr(Y_BLSWAP_B_1,B_2Y_BR(ρ⊗ρ))
=Tr(Y_BLSWAP_B_1L,B_2Lρ_[-N+1,N])
×Tr(SWAP_B_1R,B_2RY_BRρ_[N+1,-N]).
In the last line, we factorized the trace and used ρ_[N+1,-N] as shorthand for ρ_[N+1,2N]ρ_[-2N+1,-N]. The SWAP operators, as well as Y_BL and Y_BR, do not act on the ancillas. Therefore, we can easily trace over the ancillas to get
Tr (U_1,2^†SWAP_B_1,B_2U_1,2(ρ⊗ρ))
=1/d^4NTr(Y_BLSWAP_B_1L,B_2L)
×Tr(SWAP_B_1R,B_2RY_BRρ_[N+1,-N]).
We can apply similar manipulations to the denominator of (<ref>). Then, using the fact that ρ_[N+1,-N] is a product state density matrix and Tr(Y_BRρ_[N+1,-N])=Tr(Y_ALρ_[N+1,-N])^*, we get
ind(U)=√(|Tr(Y_BLSWAP_B_1LB_2L)/Tr(SWAP_A_1R,A_2RY_AR)|).
This is equal to the GNVW index as written in (<ref>). Specifically, (<ref>) is obtained by taking the square root of the product of the first and second lines of (<ref>), then using |A|=|B| and a relabeling of A and B.
§ DISCUSSION
In this note, we have presented a few results on the classification of QCA with symmetry, using strong equivalence. We rederived the classification given in Ref. mpu without the extra framework of tensor networks. Furthermore, we showed that the refined SPIs of Ref. mpu are only complete for certain symmetry groups. For the particular case of G=ℤ_2, we were able to obtain a complete classification, and furthermore showed that the GNVW index can be interpreted in terms of a ℤ_2 SPI in a doubled system. Using this SPI formulation of the GNVW index, we derived a second Renyi version of the mutual information formulation of the index studied in Ref. tracking.
A number of interesting questions remain. First, we still do not have a concrete formula for invariants that completely classify QCA with discrete symmetry, when the group is not ℤ_N for N prime. The main difficulty is finding easily computable quantities that remove the ambiguity of the 1D representation attached to R_g.
Second, we showed that a second Renyi version of the mutual information formula of Ref. tracking matches with the GNVW index if we start in a state where the ancillas are maximally entangled with their respective spin in the spin chain. However, Ref. tracking showed numerically that the mutual information formula still well approximates the GNVW index even when the ancillas are not maximally entangled with their respective spin. A deeper study of generalizations of the derivation presented in this work may help understand this intriguing robustness of the mutual information calculation.
We thank Michael Levin and Zongping Gong for helpful discussions and for comments on the draft. We especially thank Benjamin Krakoff for collaboration on the results presented in Sec. <ref>.
This work was supported by the National Science Foundation Graduate Research Fellowship under Grant No. 1746045 and the University of Chicago Bloomenthal Fellowship.
§ DERIVATION OF THE CLASSIFICATION
In this appendix, we will show that 1D QCA with G symmetry are equivalent if and only if their corresponding representations R_g as defined in (<ref>) differ only by a 1D representation u_g and tensoring with copies of the on-site representation μ_g. Due to the stacking structure of the QCA, we only need to prove that W is a G symmetric FDQC if and only if R_g=u_gμ_g^⊗ N for some nonnegative integer N. The methods used here closely follow those used in Ref. u1floquet.
First, we will show that if R_g=u_gU_g,R=u_gμ_g^⊗ N, then W is a G symmetric FDQC. Since dim(R_g)=dim(u_gU_g,R), the GNVW index must be trivial. Therefore, without loss of generality, we assume that W is a depth two circuit where each layer consists of disjoint local unitaries acting over a distance ξ. We now cluster together neighboring sites into supersites such that each gate of the FDQC acts on two neighboring supersites, and set ξ=1 in units of the supersites. The unitary representation of G for a supersite is simply μ_g^⊗ N=U_g,R where N is the number of original sites per supersite. We will denote U_g,R by ρ_g for brevity, and we denote the symmetry action on site i by ρ_g^(i). W is given by
W = W_2W_1= ∏_i W_2^(2i-1,2i)∏_i W_1^(2i,2i+1) .
This decomposition of W is illustrated in Fig <ref>.
Suppose we act W on ρ_g^(0)⊗ρ_g^(1). Because W is an LPU and W commutes with the symmetry operators for the entire system U_g, we have
W(ρ_g^(0)⊗ρ_g^(1))W^†=(Y_g,Lρ_g^(0))⊗(ρ_g^(1)Y_g,R),
where Y_g,L and Y_g,R are operators supported on supersites [-1,0] and [1,2] respectively. According to the notation of (<ref>), Y_g,Lρ_g^(0)=L_g and ρ_g^(1)Y_g,R=R_g.
Notice that there is a phase ambiguity in the definition of Y_g,L and Y_g,R. While Y_g,L⊗ Y_g,R forms a linear representation of G, Y_g,L and Y_g,R individually form projective representations. We consider only trivial projective representations to focus on strong equivalence phases. Specifically, in the case of trivial projective representations, we can lift Y_g,R to a linear representation by relabeling Y_g,R→ e^-iθ_gY_g,R and similarly relabeling Y_g,L, where e^-iθ_g∈ U(1). The result however still has an ambiguity associated with multiplying Y_g,R with a 1D representation of G, denoted u_g. We will revisit this ambiguity shortly.
Using the form of W in Eq. <ref>, we obtain
W_1(ρ_g^(0)⊗ρ_g^(1))W_1^†=W_2^†(Y_g,Lρ_g^(0)⊗ρ_g^(1)Y_g,R)W_2.
Writing W_1 and W_2 in terms of two-site gates gives:
W_1^(0,1)(ρ_g^(0)⊗ρ_g^(1))W_1^(0,1)†
=W_2^(-1,0)†W_2^(1,2)†(Y_g,Lρ_g^(0)⊗ρ_g^(1)Y_g,R)W_2^(1,2)W_2^(-1,0)
=W_2^(-1,0)†(Y_g,Lρ_g^(0))W_2^(-1,0)⊗ W_2^(1,2)†(ρ_g^(1)Y_g,R)W_2^(1,2).
Substituting
ρ̃_g^(0) =W_2^(-1,0)†(Y_g,Lρ_g^(0))W_2^(-1,0)
ρ̃_g^(1) =W_2^(1,2)†(ρ_g^(1)Y_g,R)W_2^(1,2)
gives
W_1^(0,1)(ρ_g^(0)⊗ρ_g^(1))W_1^(0,1)†=ρ̃_g^(0)⊗ρ̃_g^(1).
The left side is supported only on sites [0,1] and the right side has two factors, one supported on [-1,0] and one supported on [1,2]. W being trivial under strong equivalence means that it can be made G symmetric. This means that there exist W̃_1 and W̃_2 that are individually G symmetric. This is the case if and only if there are on-site unitaries V_i such that
V_0ρ̃_g^(0)V_0^† =ρ_g^(0) V_1 ρ̃_g^(1)V_1^†=ρ_g^(1).
Then we can define
W̃_2^(i,j)=W_2^(i,j)V_j^† V_i^† W̃_1^(i,j)= V_iV_jW_1^(i,j).
We can see that W̃_1 is G-symmetric:
V_1 V_0 W_1^(0,1)(ρ_g^(0)⊗ρ_g^(1))W_1^(0,1)†V_0^† V_1^†
=V_1 V_0(ρ̃_g^(0)⊗ρ̃_g^(1))V_0^† V_1^†
=ρ_g^(0)⊗ρ_g^(1),
and since W is G-symmetric and W=W_2W_1=W̃_2W̃_1, W̃_2 is also G-symmetric.
Notice that as long as ρ_g^(0) and Y_g,Lρ_g^(0) have the same eigenvalues up to multiplication by a 1D representation of G (as do ρ_g^(1) and ρ_g^(1)Y_g,R), there exist such on-site unitaries V_i with action given by (<ref>). In this case,
Tr(ρ_gY_g,R)=u_gTr(ρ_g) ∀ g∈ G,
where u_g is a 1D representation of G. This also means that the representation ρ_g and the representation ρ_gY_g,R=R_g must be equivalent modulo multiplication by 1D representations of G. Since ρ_g=μ_g^⊗ N, we obtain the desired result.
We now show that if W is a G symmetric FDQC (with the addition of ancillas carrying the μ_g representation of G), then R_g=u_gρ_g^⊗ N for some nonnegative integer N. Since W is a FDQC, for an interval A larger than 2ξ, we can write
W=W_AW_A^cW_LW_R,
where W_A is supported fully in A, W_A^c is supported fully outside of A, and W_L and W_R are supported near the left and right endpoints of A respectively. The fact that W is a G symmetric FDQC means that the above four unitary operators are all G symmetric. It follows that
[W_A,U_g,LU_g,R]=0.
Furthermore, because |A|>2ξ, we see that due to disjoint operator support, we have
[W_L,U_g,R]=[W_R,U_g,L]=[W_A^c,U_g,LU_g,R]=0.
Using (<ref>) and (<ref>), we have
W^†(U_g,LU_g,R) W =W_R^† W_L^†(U_g,LU_g,R)W_LW_R
=(W_L^† U_g,LW_L)(W_R^† U_g,RW_R).
Adding ancillas carrying the same representation of G corresponds to tensoring copies of μ_g to U_g,L and U_g,R and adding symmetric gates to W to couple the ancillas with the original spins. Denote the representation of the symmetry with the ancillas by U_g,R' and W with the ancillas by W'=W_A'W_A^c'W_L'W_R'. Since (W_R^† U_g,RW_R) and R_g are both supported to the right of the midpoint of A, we have
R_g=u_gW_R^†' U_g,R'W_R'
Similarly, we have
L_g=u_g^† W_L^†' U_g,L'W_L'
where again, u_g is a 1D representation of G. Since W_R' and W_L' are unitary operators, they cannot change the spectrum of U_g,R' and U_g,L' respectively. It follows that R_g=u_gμ_g^⊗ N and L_g=u_g^†μ_g^⊗ N for some nonnegative integer N, as desired. Notice that if we allowed the addition of ancillas carrying any representation of the symmetry, then we can tensor any representation to U_g,L and U_g,R, so R_g can be any linear representation of G.
|
http://arxiv.org/abs/2306.11124v1
|
20230619190057
|
KiDS-1000: Cosmology with improved cosmic shear measurements
|
[
"Shun-Sheng Li",
"Henk Hoekstra",
"Konrad Kuijken",
"Marika Asgari",
"Maciej Bilicki",
"Benjamin Giblin",
"Catherine Heymans",
"Hendrik Hildebrandt",
"Benjamin Joachimi",
"Lance Miller",
"Jan Luca van den Busch",
"Angus H. Wright",
"Arun Kannawadi",
"Robert Reischke",
"HuanYuan Shan"
] |
astro-ph.CO
|
[
"astro-ph.CO"
] |
Improved KiDS-1000 cosmic shear
S.-S. Li et al.
Leiden Observatory, Leiden University, P.O. Box 9513, 2300 RA Leiden, the Netherlands
[email protected]
E.A. Milne Centre, University of Hull, Cottingham Road, Hull, HU6 7RX, United Kingdom and Centre of Excellence for Data Science, AI, and Modelling (DAIM), University of Hull, Cottingham Road, Kingston-upon-Hull, HU6 7RX
Center for Theoretical Physics, Polish Academy of Sciences, al. Lotników 32/46, 02-668 Warsaw, Poland
Instituto de Ciencias del Cosmos (ICC), Universidad de Barcelona, Martí i Franquès, 1, 08028 Barcelona, Spain
Institute for Astronomy, University of Edinburgh, Royal Observatory, Blackford Hill, Edinburgh, EH9 3HJ, UK
Ruhr University Bochum, Faculty of Physics and Astronomy, Astronomical Institute (AIRUB), German Centre for Cosmological Lensing, 44780 Bochum, Germany
Department of Physics and Astronomy, University College London, Gower Street, London WC1E 6BT, UK
Department of Physics, University of Oxford, Denys Wilkinson Building, Keble Road, Oxford OX1 3RH, UK
Department of Astrophysical Sciences, Princeton University, 4 Ivy Lane, Princeton, NJ 08544, USA
Shanghai Astronomical Observatory (SHAO), Nandan Road 80, Shanghai 200030, China
Key Laboratory of Radio Astronomy and Technology, Chinese Academy of Sciences, A20 Datun Road, Chaoyang District, Beijing 100101, China
University of Chinese Academy of Sciences, Beijing 100049, China
We present refined cosmological parameter constraints derived from a cosmic shear analysis of the fourth data release from the Kilo-Degree Survey (KiDS-1000). Our refinements are driven by enhanced galaxy shape measurements using an updated version of the lensfit code, and improved shear calibration achieved with a newly developed suite of multi-band image simulations. Additionally, we incorporate recent advancements in cosmological inference from the joint Dark Energy Survey Year 3 and KiDS-1000 cosmic shear analysis. Assuming a spatially flat standard cosmological model, we constrain S_8≡σ_8(Ω_ m/0.3)^0.5 = 0.776_-0.027-0.003^+0.029+0.002, where the second set of uncertainties accounts for the systematic uncertainties within the shear calibration. These systematic uncertainties stem from minor deviations from realism in the image simulations and the sensitivity of the shear measurement algorithm to the morphology of the galaxy sample. Despite these changes, our results align with previous KiDS studies and other weak lensing surveys, and find a ∼2.3σ level of tension with the Planck cosmic microwave background constraints on S_8.
KiDS-1000: Cosmology with improved cosmic shear measurements
Shun-Sheng Li1
Henk Hoekstra1
Konrad Kuijken1
Marika Asgari2
Maciej Bilicki3
Benjamin Giblin4,5
Catherine Heymans5,6
Hendrik Hildebrandt6
Benjamin Joachimi7
Lance Miller8
Jan Luca van den Busch6
Angus H. Wright6
Arun Kannawadi9
Robert Reischke6
HuanYuan Shan10,11,12
Received XXX, XXXX; accepted YYY, YYYY
========================================================================================================================================================================================================================================================================================================================================================================================================
§ INTRODUCTION
Weak gravitational lensing by large-scale structure, also known as cosmic shear, is a powerful technique for studying the matter distribution in the Universe without assuming a specific correlation between dark and baryonic matter (e.g. )[However, with increasing precision in weak lensing observations, the impact of baryonic processes, such as radiative cooling and feedback from star formation and active galactic nuclei, on the observed matter distribution can no longer be ignored for small-scale structures (e.g. ).]. Owing to its remarkable potential in exploring the cosmic matter distribution, cosmic shear analysis gained popularity since its first detection over twenty years ago <cit.>. When distance information for source galaxies is also known, we can differentiate them along the line of sight and perform a tomographic analysis, which entails reconstructing the three-dimensional matter distribution from multiple two-dimensional projections. This tomographic cosmic shear analysis is especially effective for constraining dark energy properties, as it sheds light on the evolution of cosmic structures (e.g. ).
Recent surveys, such as the Kilo-Degree Survey (KiDS, ), the Dark Energy Survey (DES, ), and the Hyper Suprime-Cam (HSC) survey <cit.>, primarily focus on constraining the amplitude of matter density fluctuations. Conventionally, this quantity is characterised by the parameter S_8≡σ_8(Ω_ m/0.3)^0.5, where Ω_ m is the matter density parameter and σ_8 is the standard deviation of matter density fluctuations in spheres of radius 8h^-1 Mpc, computed using linear theory, where the Hubble constant H_0=100h km s^-1 Mpc^-1. Interestingly, the S_8 values derived from these weak lensing surveys are consistently lower than those predicted by cosmic microwave background (CMB) observations from the Planck satellite.
Specifically, the latest cosmic shear analyses from KiDS (0.759_-0.021^+0.024, , A21 hereafter), DES (0.759_-0.023^+0.025, ), and HSC (0.769_-0.034^+0.031, ; 0.776_-0.033^+0.032, ) provide S_8 values that are roughly 2σ lower than the Planck predictions (0.832± 0.013, ) based on the standard spatially flat Λ cold dark matter (ΛCDM) cosmological model. Most recently, a joint cosmic shear analysis of the DES Y3 and KiDS-1000 by the two survey teams (, DK23 hereafter) yields an S_8 constraint of 0.790^+0.018_-0.014, which is closer to the Planck results, but still shows a level of 1.7σ difference. This mild difference in the S_8 constraints between the weak lensing surveys and CMB observations triggered extensive discussions from various perspectives, encompassing potential systematic errors in the data (e.g. ), the influence of the baryonic physics (e.g. ), and a potential deviation from the standard ΛCDM model (see for a recent review).
Here, we focus on the control of systematics in the cosmic shear analysis, particularly those arising during the KiDS shear measurement process. Measuring lensing-induced shear from noisy pixelised galaxy images is a challenging task, complicated further by distortions caused by the point spread function (PSF) resulting from instrumental and observational conditions, as well as blending effects that arise when two or more objects are close on the sky (see for a review). These factors can introduce significant measurement biases (e.g. ) and alter the selection function of the source sample, leading to selection bias (e.g. ). Therefore, obtaining unbiased shear measurements relies on careful calibration, which can be performed using either pixel-level image simulations (e.g. ; , FC17 hereafter; ) or the data themselves (e.g. ).
Additionally, in the case of large-area imaging surveys, determining the distance information for individual source galaxies depends on redshifts derived from broad-band photometric observations. These photometric redshift estimates, which are subject to significant uncertainty, require careful calibration using spectroscopic reference samples (e.g. ). Furthermore, recent studies showed that the blending of source images results in the coupling of shear and redshift biases (e.g. ; , L23 hereafter). Consequently, a joint calibration of these two estimates becomes essential, necessitating the use of multi-band image simulations for future cosmic shear analyses.
In light of all these concerns, we implemented several improvements to the cosmic shear measurements in KiDS, as detailed in <cit.>. We enhanced the accuracy of the galaxy shape measurements by using an upgraded version of the lensfit code <cit.>, complemented by an empirical correction scheme that reduces PSF contamination. More notably, in <cit.> we introduced SKiLLS (SURFS-based KiDS-Legacy-Like Simulations), a suite of multi-band image simulations that enables joint calibration of shear and redshift estimates. This is an important element for the forthcoming weak lensing analysis of the complete KiDS survey, known as the KiDS-Legacy analysis (Wright et al. in prep.).
In this paper, we take an intermediate step towards the forthcoming KiDS-Legacy analysis by applying the improvements from <cit.> to a cosmic shear analysis based on the fourth data release of KiDS (KiDS-1000, <cit.>). In contrast to previous KiDS cosmic shear analyses, which used shear calibration methods developed in <cit.> and <cit.> based on single-band image simulations, the current analysis adopted SKiLLS, marking the first instance of multi-band image simulations being used for KiDS cosmic shear analysis[<cit.> did attempt to assign photo-z estimates from data to simulations, but the actual photo-z measurements were not simulated.]. We also incorporated recent advancements in cosmological inference and updated the current cosmological parameter constraints from KiDS. In particular, we updated the code for the non-linear evolution of the matter power spectrum calculation from hmcode to the latest hmcode-2020 version <cit.>. We also investigated the impact of the intrinsic alignment model by incorporating amplitude priors inspired by <cit.>.
The remainder of this paper is structured as follows. In Sect. <ref>, we introduce and validate the updated KiDS shear catalogue, followed by the shear and redshift calibration in Sect. <ref>. We describe our cosmological inference method in Sect. <ref> and present the results in Sect. <ref>. Finally, we summarise the results in Sect. <ref>.
§ UPDATED WEAK LENSING SHEAR CATALOGUE
Our shear catalogue is based on the KiDS-ESO-DR4 data release <cit.>, which combines optical observations in the ugri bands from KiDS using the ESO VLT Survey Telescope <cit.> and near-infrared observations in the ZYJHK_s bands from the ESO VISTA Kilo-degree INfrared Galaxy (VIKING) survey using the VISTA telescope <cit.>. The data set covers 1006 deg^2 survey tiles and includes nine-band photometry measured using the GAaP technique <cit.>. The photometric redshifts (photo-zs) for individual source galaxies were estimated using the bpz code <cit.>. After masking, the effective area of the data set in the CCD pixel frame is 777.4 deg^2 <cit.>. To perform the cosmic shear analysis, we divided the source sample into five tomographic bins based on the bpz estimates (z_ B). The first four bins have a spacing of Δ z_ B=0.2 in the range 0.1<z_ B≤ 0.9, while the fifth bin covers the range 0.9<z_ B≤ 1.2, following the previous KiDS cosmic shear analyses.
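For illustration, the tomographic binning described above amounts to the following simple selection (a sketch of our own, not KiDS pipeline code; the function name and the handling of sources outside the z_B range are our own choices).

import numpy as np

edges = np.array([0.1, 0.3, 0.5, 0.7, 0.9, 1.2])    # bin edges in the bpz point estimate z_B

def tomographic_bin(z_B):
    # Return bin indices 1..5 for sources inside the edges and 0 for sources outside;
    # bins are open on the left and closed on the right, matching 0.1 < z_B <= 0.9 etc.
    z_B = np.asarray(z_B)
    idx = np.searchsorted(edges, z_B, side="left")
    idx[(z_B <= edges[0]) | (z_B > edges[-1])] = 0
    return idx

print(tomographic_bin([0.05, 0.25, 0.90, 0.95, 1.3]))   # -> [0 1 4 5 0]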
§.§ Galaxy shapes measured with the updated lensfit
When preparing the shear measurements for the upcoming data release of KiDS, we upgraded the lensfit code <cit.> from version 309c to version 321 (see for details). The latest version includes a correction to an anisotropic error in the original likelihood sampler, which previously caused a small yet noticeable residual bias that was not related to the PSF or underlying shear <cit.>. We used the new code to re-measure the galaxy shapes in the KiDS-ESO-DR4 data set, resulting in a new shear catalogue. Throughout the paper, we refer to the new shear catalogue as KiDS-1000-v2 to distinguish it from the previous KiDS-1000(-v1) shear catalogue.
The raw measurements from the lensfit code suffer from biases primarily due to the PSF anisotropy, but also because of the object selection and weighting scheme. To address these biases, introduced an empirical correction scheme to isotropise the original measurement weights, which was used in previous KiDS studies (see also ). However, this approach is insufficient for the current version of the lensfit code. Furthermore, found that the method was susceptible to variations in the sample size, posing challenges for consistent application to both data and simulations.
Therefore, a new correction scheme was introduced by that modifies both the measured ellipticities and weights to ensure the average PSF leakage, defined as the fraction of the PSF ellipticity leaking into the shear estimator, is negligible in each tomographic bin. For further details, we direct readers to . In summary, the new correction scheme first isotropises the measurement weights, then adjusts the measured ellipticities to eliminate any remaining noise bias and selection effects. We note that this correction scheme is not designed to refine the shape measurements of individual galaxies; rather, it aims to ensure that the collectively weighted shear signal is robust against PSF leakage. In this paper, we applied this newly developed empirical correction to the KiDS-1000-v2 shear catalogue.
§.§ Validation of the shear estimates
In order to use the weak lensing shear catalogue for cosmological inference, it is crucial to first verify the accuracy of the shear estimation and ensure that the residual contamination from systematic effects is within the acceptable level for scientific analysis. To achieve this, <cit.> proposed a series of null-tests to assess the robustness of the KiDS-1000-v1 shear catalogue. With the updated galaxy shape measurements in the KiDS-1000-v2 catalogue, it is necessary to repeat some of these tests to confirm the reliability of the new catalogue.
As the KiDS-1000-v2 catalogue updates only the galaxy shape measurements while maintaining the established photometry and PSF models, we did not repeat tests related to photometry and PSF modelling. We started by examining the PSF leakage in the weighted lensfit shear estimator, using the first-order systematics model proposed by <cit.>. This model takes the form <cit.>
ϵ_k^ obs=(1+m_k)(ϵ_k^ int+γ_k) + α_kϵ_k^ PSF + c_k , [k=1,2] ,
where ϵ^ obs denotes the measured galaxy ellipticity, m is the multiplicative shear bias[Throughout this paper, we interchangeably use `multiplicative bias' and `shear bias', as our simulation-based shear calibration only addresses this parameter. Conversely, PSF leakage and the additive term are empirically corrected.], ϵ^ int refers to the intrinsic galaxy ellipticity, γ stands for the cosmic shear signal (which is the parameter of interest), α is the PSF leakage factor, and c is an additive term comprising residual biases unrelated to the PSF or underlying shear. The subscript k=1,2 denotes the two ellipticity components. We note that we did not include PSF modelling errors in Eq. (<ref>), as we used the same PSF model as <cit.>, who had already confirmed its accuracy. Assuming that (ϵ_k^ int+γ_k) averages to zero for a large galaxy sample (a property validated with the KiDS data; see, for example, Sect. 3 in ), we can determine the α and c parameters from the data using a simple linear regression method.
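To make the procedure concrete, a minimal sketch of this regression is given below. The arrays e_obs, e_psf, and w (one observed ellipticity component, the corresponding PSF ellipticity, and the lensfit weights) are hypothetical placeholders for the catalogue columns.

```python
import numpy as np

def psf_leakage_fit(e_obs, e_psf, w):
    """Weighted least-squares fit of e_obs = alpha * e_psf + c for one
    ellipticity component, assuming <e_int + gamma> averages to zero."""
    w_sum = np.sum(w)
    x_bar = np.sum(w * e_psf) / w_sum
    y_bar = np.sum(w * e_obs) / w_sum
    alpha = (np.sum(w * (e_psf - x_bar) * (e_obs - y_bar))
             / np.sum(w * (e_psf - x_bar) ** 2))
    c = y_bar - alpha * x_bar
    return alpha, c

# Per tomographic bin and per component, e.g. k = 1:
# alpha_1, c_1 = psf_leakage_fit(cat['e1'], cat['psf_e1'], cat['weight'])
```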
Figure <ref> presents the measured PSF leakage α and the additive term c for the KiDS-1000-v2 catalogue, alongside the measurements from the KiDS-1000-v1 catalogues for comparison. As expected, the KiDS-1000-v2 catalogue exhibits a mean α-term consistent with zero for all redshift bins, owing to the empirical correction scheme outlined in Sect. <ref> (see also Sect. 4 in ). The upgraded lensfit code has reduced the overall c_2-term by half, reaching a level of c_2∼ (3± 1)× 10^-4 for the entire sample. However, despite this improvement, the c term has not been eliminated, particularly in distant tomographic bins where a small but noticeable c term still persists, which was not seen in the simulations.
To correct for these residual small additive c-terms, we used the same empirical correction method as in previous KiDS analyses. Specifically, we subtracted the weighted average ellipticity from the observed ellipticity for each redshift bin as ϵ_ corr^ obs=ϵ^ obs-ϵ^ obs. Nevertheless, we caution that subtracting the mean c-term does not guarantee the removal of all additive biases, especially when detector-level effects, such as `charge transfer inefficiency' (e.g. ) and `pixel bounce' (e.g. ), can introduce position-dependent bias patterns. Although we have detected such effects in KiDS data <cit.>, their level does not affect the current cosmic shear analysis. More specifically, <cit.> demonstrated that even if current detector-level effects were increased by a factor of ten, they would not cause significant bias for KiDS-like analyses.
The cosmic shear signal is conventionally measured using the two-point shear correlation function, defined as[In this study, all measurements of the two-point shear correlation function are conducted using the TreeCorr code <cit.>.]
ξ̂^ij_±(θ)=∑_abw_aw_b[ϵ_t^i(x_a)ϵ_t^j(y_b)±ϵ_×^i(x_a)ϵ_×^j(y_b)]/∑_abw_aw_b ,
where θ represents the separation angle between a pair of galaxies (a, b), the tangential and cross ellipticities ϵ_t, × are computed with respect to the vector x_a - y_b that connects the galaxy pair, and the associated measurement weight is denoted by w. Therefore, it is crucial to examine the systematics in the two-point statistics. Following the method of <cit.>, we estimate the PSF leakage into the two-point correlation function measurement using
ξ^ sys_± = ⟨ϵ^ obsϵ^ PSF⟩^2/⟨ϵ^ PSFϵ^ PSF⟩ ,
where ⟨·⟩ denotes the correlation function.
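As an illustration, the sketch below measures ξ_± with the TreeCorr code and forms the ξ^sys_+ estimator from the galaxy–PSF cross-correlation and the PSF auto-correlation. The catalogue columns and the angular binning are placeholder assumptions, not the exact configuration used in the analysis.

```python
import treecorr

def shear_two_point(ra, dec, e1, e2, psf_e1, psf_e2, w):
    """xi_+/- of the galaxy ellipticities and the xi^sys_+ estimator."""
    pos = dict(ra=ra, dec=dec, ra_units='deg', dec_units='deg')
    cat_gal = treecorr.Catalog(g1=e1, g2=e2, w=w, **pos)
    cat_psf = treecorr.Catalog(g1=psf_e1, g2=psf_e2, w=w, **pos)

    binning = dict(min_sep=0.5, max_sep=300.0, nbins=9, sep_units='arcmin')
    gg = treecorr.GGCorrelation(**binning)  # <e_obs e_obs>
    gp = treecorr.GGCorrelation(**binning)  # <e_obs e_psf>
    pp = treecorr.GGCorrelation(**binning)  # <e_psf e_psf>

    gg.process(cat_gal)
    gp.process(cat_gal, cat_psf)
    pp.process(cat_psf)

    xi_sys_plus = gp.xip ** 2 / pp.xip      # xi^sys estimator, '+' component
    return gg.xip, gg.xim, xi_sys_plus
```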
In Fig. <ref>, we present the ratio of the measured ξ^ sys_+ to the theoretical predictions of the cosmic shear signal. The blue shaded region denotes ± 10% of the standard deviation of the cosmic shear signal, extracted from the analytical covariance. This covariance is calculated using an independent implementation of the methodology of <cit.>, and it incorporates the sample statistics of the updated catalogue. We compare the results from the KiDS-1000-v2 catalogue with those from the KiDS-1000-v1 catalogue. We observe general improvements, particularly in the high-redshift bins, where the PSF contamination is now negligible. The only exceptions are found in some large-scale bins (θ>60 arcmin), where the expected fiducial cosmic shear signal is relatively small and overwhelmed by high statistical noise.
To leading order, the weak lensing effect introduces only curl-free gradient distortions (E-mode signal), which makes the curl distortions (B-mode signal) a useful null-test for residual systematics in the shear measurement[Some higher-order effects from lensing, such as source redshift clustering (e.g. ), and intrinsic alignment of nearby galaxies (e.g. ) can also introduce B-mode signals. However, these contributions are expected to be negligible for current weak lensing surveys (e.g. )]. Following the convention of KiDS <cit.>, we use the complete orthogonal sets of E/B-integrals (COSEBIs, ) to measure the B-mode signal. The COSEBIs provide an optimal E/B separation by combining different angular scales from the ξ̂_± measurements.
Figure <ref> presents the measured B-mode signals for all combinations of tomographic bins in our analysis, alongside the B-mode measurements from the KiDS-1000-v1 catalogue for comparison. To enable direct comparison, we used the same scale range of (0.5, 300) arcmin as in <cit.> for calculating the COSEBIs B-mode[We also evaluated an alternate scale range of (2, 300) arcmin, consistent with our fiducial cosmic shear analysis. As anticipated, the B-mode signal was even smaller in this scenario due to reduced small-scale contamination.]. Assuming a null signal, we computed the p-value for each B-mode measurement, setting the degrees of freedom equal to the number of modes in each measurement (n=20). The covariance matrix, accounting only for shot noise, was estimated using an analytical model from <cit.> applied to the updated catalogue. It is noteworthy that our covariance matrix differs from the one used in <cit.>. This is due to the changes in sample statistics resulting from the updated shape measurement code and redshift calibration relative to the KiDS-1000-v1 catalogue used in <cit.>. Most diagonal entries in our matrix show reduced uncertainties, ranging from the per cent level to about ten per cent. Therefore, if the absolute systematic levels are comparable between the two catalogues, our test would likely show a slight increase in the final p-values compared to those in <cit.>. As indicated in the top right corner of each panel, the estimated p-values suggest that the measured B-mode signals align with a null signal across all bin combinations. The lowest p-value, p=0.02, was found in the cross-correlation between the first and third tomographic bins.
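For reference, the null-test statistic can be sketched as follows, given a measured COSEBIs B-mode vector and its shot-noise covariance (both placeholders here); the p-value follows from a χ² test with the number of modes as the degrees of freedom.

```python
import numpy as np
from scipy import stats

def bmode_pvalue(b_modes, cov_shot_noise, dof=20):
    """p-value of the COSEBIs B-modes under the null hypothesis of zero signal."""
    chi2_val = b_modes @ np.linalg.solve(cov_shot_noise, b_modes)  # B^T C^-1 B
    return stats.chi2.sf(chi2_val, df=dof)
```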
After conducting all these tests, we can conclude that the KiDS-1000-v2 catalogue has reduced systematics when compared to the results from the KiDS-1000-v1 catalogue. These improvements are largely attributed to the updated version of the lensfit code, as well as the implementation of a new empirical correction scheme for PSF contamination. These results give us the confidence to use the updated catalogue for cosmological inference.
§ SHEAR AND REDSHIFT CALIBRATION
The main improvement in our calibration comes from the use of SKiLLS multi-band image simulations, as developed in . These simulations fuse cosmological simulations with high-quality observational data to create mock galaxies with photometric and morphological properties closely resembling real-world galaxies. The observational data used by SKiLLS, drawn from the catalogue of <cit.>, is identical to that used in . In , we developed a vine-copula-based algorithm that learns the measured morphological parameters from this catalogue and assigns them to the SURFS-Shark mock galaxies <cit.>. We verified that the learning procedure maintains the observed multi-dimensional correlations between morphological parameters, magnitude, and redshifts. Nevertheless, both the observed catalogue from <cit.> and the learning algorithm possess inherent limitations, resulting in unavoidable uncertainties in our simulation input catalogue. These uncertainties are addressed in our shear calibration in Sect. <ref>.
To create KiDS+VIKING-like nine-band images, SKiLLS replicated the instrumental and observational conditions of 108 representative tiles selected from six sky pointings evenly distributed across the KiDS-DR4 footprint. The star catalogue was generated for each sky pointing using the Trilegal population synthesis code <cit.> to account for the variation in stellar densities across the footprint. For the primary r-band images, on which the galaxy shapes were measured, SKiLLS included the correlated pixel noise introduced by the stacking process and the PSF variation between CCD images.
On the data processing side, SKiLLS followed the entire KiDS procedure, including object detection, PSF homogenisation, forced multi-band photometry, photo-z estimation, and shape measurements. The end result is a self-consistent joint shear-redshift mock catalogue that matches KiDS observations in both shear and redshift estimates. By taking this end-to-end approach, we accounted for photo-z-related selection effects in our shear bias estimation and enabled redshift calibration using the same mock catalogue. While our current analysis focuses on the improvement in shear calibration, it represents an intermediate step towards the KiDS-Legacy analysis, which will implement joint shear and redshift calibrations facilitated by the SKiLLS mock catalogue.
§.§ Calibration
To correct for shear bias in our measurements, we followed the method used in previous KiDS studies (, ). We applied an average shear bias correction factor, denoted as m^i, to each tomographic bin i. This factor was calculated by averaging the individual m values of all sources within the bin, with each individual m value obtained using Eq. (<ref>). In order to better align the simulations with the target data, we adhered to KiDS conventions by re-weighting the simulation estimates using the lensfit reported model signal-to-noise ratio and resolution, which is defined as the ratio of the PSF size to the measured galaxy size. More information on the re-weighting procedure can be found in Sec. 5.1 of .
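The correction itself is straightforward and can be sketched as follows; m_values and weights stand for the (re-weighted) simulation estimates in one tomographic bin, and xi_meas for a measured correlation function of bins i and j (all placeholders).

```python
import numpy as np

def mean_multiplicative_bias(m_values, weights):
    """Average multiplicative bias m^i for one tomographic bin."""
    return np.average(m_values, weights=weights)

def calibrate_xi(xi_meas, m_i, m_j):
    """Remove the average multiplicative bias from a measured xi^{ij}."""
    return xi_meas / ((1.0 + m_i) * (1.0 + m_j))
```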
Although the averaging method addresses the noise in the m estimates of individual sources, it does not account for correlations involving the shear bias; that is, it assumes ⟨[1+m^i(θ')][1+m^j(θ'+θ)]⟩= (1+m^i)(1+m^j), with θ and θ' representing different separation angles between galaxy pairs. To test this assumption, we directly measured ⟨[1+m^i(θ')][1+m^j(θ'+θ)]⟩ from image simulations and compared it to (1+m^i)(1+m^j). Further details on this test can be found in Appendix <ref>. In summary, we found a negligible difference between the two estimators, a result that falls well within the current KiDS requirements. This validates the assumption for the KiDS analysis.
Given that the updated galaxy shape measurements also lead to changes in the sample selection function, it is necessary to repeat the redshift calibration for the KiDS-1000-v2 catalogue, even though our primary focus is to improve shear calibration. To quantify the changes in galaxy samples introduced by the modifications in shape measurements from the KiDS-1000-v1 to KiDS-1000-v2 catalogues, we compared their effective number densities before applying any redshift calibration. The observed percentage differences in each tomographic bin, from low to high redshift bins, are -1.8%, -0.4%, 0.2%, 1.3%, and 3.2%. Here, negative values indicate a decrease in density from the v1 to the v2 catalogue, while positive values signify an increase. These differences are largely attributed to changes in the weighting scheme from lensfit version 309c to version 321, as well as the implementation of the new empirical correction scheme for PSF leakage, as discussed in Sect. <ref> and in . For the redshift calibration itself, we employed a methodology identical to the one used by <cit.>, <cit.> and <cit.> (vdB22 hereafter). It is based on a direct calibration method <cit.> implemented with a self-organising map (SOM, ). More information on our implementation is provided in Appendix <ref>, while <cit.>, <cit.> and offer more comprehensive discussions.
The SOM-based redshift calibration method uses a `gold selection' criterion to filter out sources that are not represented in the spectroscopic reference sample (see Appendix <ref>). However, this process influences shear biases as it alters the selection function of the final sample. To ensure a consistent estimation of shear biases, we created the SKiLLS-gold catalogue by mimicking this quality control on the SKiLLS mock catalogue, using the same SOM trained by the spectroscopic reference sample as the real data. We derived the appropriate shear bias correction factors from this SKiLLS-gold catalogue for individual tomographic bins, and present these values in Table <ref>. It is worth noting that the shear bias estimates presented in this work differ slightly from those in , which did not include the gold selection procedure. Despite this, the differences in the estimated shear biases are relatively minor across all tomographic bins, with the first tomographic bin showing the most noticeable change of 0.008.
Our fiducial results, m_ final, account for the impact of PSF modelling uncertainties and the `shear interplay' effect, which occurs when galaxies from different redshifts are blended together. For more details on these effects, we refer to and <cit.>. Additionally, we provide the idealised m_ raw results, which do not consider these higher-order effects. By comparing the cosmological constraints obtained from these two cases, we aim to evaluate the robustness of previous KiDS results with respect to these higher-order effects, which were not taken into account in the earlier shear calibration (, ).
§.§ Calibration uncertainties
Systematic uncertainties arising from redshift and shear calibrations can propagate into cosmological analyses, potentially leading to biased results. Therefore, it is crucial to adequately address these uncertainties in the analysis. In this section, we outline our approach to managing these calibration uncertainties.
The uncertainties in redshift calibration were addressed by introducing an offset parameter for the estimated mean redshift of galaxies in each tomographic bin. This offset parameter, described as correlated Gaussian priors, serves as a first-order correction to both the statistical and systematic uncertainties associated with redshift calibration. Table <ref> lists the exact values for these parameters, which we obtained from and <cit.>. They determined these prior values using spectroscopic and KiDS-like mock data generated by <cit.>. We consider the current priors to be conservative enough to account for any potential changes in the redshift biases from KiDS-1000-v1 to KiDS-1000-v2, given that both catalogues use the same photometric estimates. However, for the forthcoming KiDS-Legacy analysis, we plan to re-estimate these values based on the new SKiLLS mock data.
We improved our approach to handling uncertainties related to the shear calibration. In , nominal uncertainties were proposed for each tomographic bin based on sensitivity analyses. This aimed to ensure the robustness of the shear calibration within the specified uncertainties, but at the cost of reducing statistical power. In this work, we aim to improve this approach by separately accounting for the statistical and systematic uncertainties within the shear calibration.
The statistical uncertainties, as presented in Table <ref>, are computed directly from simulations and are limited only by the volume of the simulations, which can be increased with more computing resources[However, the finite volume of the input galaxy sample prevents an indefinite increase.]. These uncertainties are also easily propagated into the covariance matrix for cosmological inference. Although increasing the simulation volume could, in principle, reduce these uncertainties, we found that the current values already comfortably meet the KiDS requirements, so further efforts in this direction were deemed unnecessary.
If the SKiLLS simulations perfectly match KiDS data, these statistical uncertainties would be the only contribution to the final uncertainty from the shear calibration. However, since our simulations are not a perfect replica of the real observations, residual shear biases may still be present in the data even after calibration. These biases, referred to as systematic uncertainties, are typically the primary source of error in shear calibration. Increasing the simulation volume cannot improve these uncertainties as they are determined by the realism of the image simulations. The level of these uncertainties can only be roughly estimated through sensitivity analyses.
Since the systematic residual shear biases directly scale the data vector, accurately quantifying their impact using the covariance matrix is challenging. Therefore, we use a forward modelling approach to capture the impact of these systematic uncertainties. Instead of incorporating these uncertainties into the covariance matrix, we examine how the final estimates of the cosmological parameters change due to the shift in signals caused by the systematic residual shear biases. This forward modelling approach can be easily implemented using simple optimisation algorithms since the shift is small, and the covariance remains unchanged. More details on how to determine residual shear biases and implement the forward modelling approach are provided in Appendix <ref>.
§ COSMOLOGICAL INFERENCE
The cosmological inference in this study largely aligns with the approach used in the KiDS-1000-v1 analyses (; ), with minor modifications primarily influenced by the recent joint DES Y3+KiDS-1000 cosmic shear analysis (). In this section, we outline the configurations and reasoning behind these choices in our fiducial analysis. For certain notable changes, we also conduct extended analysis runs with different configurations to evaluate the impact of these modifications. Our analysis code is publicly accessible[<https://github.com/lshuns/CSK1000LF321>].
Our code builds upon the [<https://github.com/KiDS-WL/Cat_to_Obs_K1000_P1>] and the KiDS Cosmology Analysis Pipeline (KCAP)[<https://github.com/KiDS-WL/kcap>] infrastructure, as developed in <cit.>, <cit.>, , <cit.> and <cit.>. The pipeline converts KiDS shear and redshift measurements into various second-order statistics, with the assistance of the TreeCorr code <cit.>. Meanwhile, KCAP estimates cosmological parameters using the CosmoSIS framework, which bridges the likelihood function calculation pipelines with MCMC samplers <cit.>.
We measure the shear field using Complete Orthogonal Sets of E/B-Integrals (COSEBIs, ). As reported by <cit.>, COSEBIs offer enhanced robustness against small-scale effects on the shear power spectrum, which primarily stem from complex baryon feedback. Furthermore, we account for baryon feedback when modelling the matter-matter power spectrum, using hmcode-2020 <cit.> within the camb framework (version 1.4.0) <cit.>.
hmcode-2020, an updated version of hmcode <cit.>, models the non-linear matter-matter power spectrum, incorporating the influence of baryon feedback through an enhanced halo-model formalism. This updated model is empirically calibrated using hydrodynamical simulations, following a more physically informed approach. Unlike its predecessor calibrated with OWLS hydrodynamical simulations <cit.>, this newer version uses the updated BAHAMAS hydrodynamical simulations for calibration <cit.>. These simulations, in turn, are calibrated to reproduce the observed galaxy stellar mass function and the hot gas mass fractions of groups and clusters. This calibration ensures that the simulation accurately reflects the impact of feedback on the overall distribution of matter (refer to for further details). Furthermore, hmcode-2020 improves the modelling of baryon-acoustic oscillation damping and massive neutrino treatment, achieving an improved accuracy of 2.5% (compared to the previous version's 5%) for scales k<10h Mpc^-1 and redshifts z<2 <cit.>.
The model incorporates a single-parameter variant, T_ AGN, representing the heating temperature of active galactic nuclei (AGN). Higher T_ AGN values correspond to more intense AGN feedback, leading to a lower observed matter power spectrum. Following , we use a uniform prior on log_10(T_ AGN) ranging from 7.3 to 8.0. This choice is motivated by the findings from the BAHAMAS hydrodynamical simulations <cit.>.
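As a rough illustration of this modelling choice, the sketch below evaluates the non-linear matter power spectrum with hmcode-2020 including baryon feedback via camb. The cosmological parameter values are arbitrary placeholders, and option names may differ slightly between camb versions.

```python
import camb

pars = camb.CAMBparams()
pars.set_cosmology(H0=67.4, ombh2=0.0224, omch2=0.120)
pars.InitPower.set_params(As=2.1e-9, ns=0.965)
pars.set_matter_power(redshifts=[0.0, 0.5, 1.0], kmax=10.0)

# hmcode-2020 with baryon feedback, controlled by log10(T_AGN)
pars.NonLinear = camb.model.NonLinear_both
pars.NonLinearModel.set_params(halofit_version='mead2020_feedback',
                               HMCode_logT_AGN=7.8)

results = camb.get_results(pars)
kh, z, pk_nl = results.get_matter_power_spectrum(minkh=1e-3, maxkh=10.0,
                                                 npoints=200)
```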
Given the characteristics of COSEBIs and the implementation of the hmcode, the KiDS-1000-v1 analyses included small-scale measurements down to θ_ min=0.5 arcmin. This strategy was, however, re-evaluated in , which suggested more stringent scale cuts for the KiDS COSEBIs data vector, determined by the baryon feedback mitigation strategy proposed by <cit.>.
Following this recommendation, we apply a scale cut of θ_ min=2 arcmin in our fiducial analysis.
We use the non-linear linear alignment (NLA) model to describe the intrinsic alignment (IA) of galaxies. This model combines the linear alignment model with a non-linear power spectrum and contains a single free parameter A_ IA to describe the amplitude of IA signals <cit.>. It is also common to include a power law, with an index denoted as η_ IA, to capture potential redshift evolution of the IA strength. To distinguish it from the redshift-independent NLA model, we refer to this variant as the NLA-z model.
In line with previous KiDS analyses, we take the redshift-independent NLA model as our fiducial choice since introducing η_ IA has a minimal effect on the primary S_8 constraint (), and current direct observations of IA signals show little evidence of substantial redshift evolution (e.g., ). However, <cit.> suggests that the selection of galaxy samples resulting from the redshift binning may introduce a detectable redshift variation in the IA signal, although its impact remains negligible for current weak lensing analyses. To assess the impact of η_ IA on our results, we perform an extended run using the NLA-z model, following the same prior selection as in .
The KiDS-1000-v1 analyses adopted a broad and uninformative prior for A_ IA, ranging from [-6, 6], considering that the data can constrain it and that an incorrect informative prior could bias the final cosmological results. Although uncertainties regarding IA signals remain large, recent developments in the field have improved our knowledge of the expected IA signal strength. For instance, <cit.> used a halo model formalism, incorporating results from the latest direct IA measurements, and predicted A_ IA=0.44± 0.13 for the redshift-independent NLA model targeted for KiDS-like mixed-colour lensing samples[<cit.> also examined the NLA-z model under similar conditions, but found the fits were predominantly driven by the low-redshift bins, resulting in less accurate recovery of large-scale alignments at high redshifts.]. This prediction aligns well with the constraints from recent cosmic shear analyses (; ). Moreover, recent studies revealed that other nuisance parameters in such analyses, especially those related to redshift calibration uncertainties, can result in misleading A_ IA values <cit.>.
Given these considerations, we consider it necessary to revisit the prior for the A_ IA parameter. As an initial step towards a fully informed A_ IA approach, we begin by simply narrowing the previously broad prior, leaving a more comprehensive exploration of the IA model setups for the forthcoming KiDS-Legacy analysis. In our fiducial analysis, we choose a flat yet narrower prior of [-0.2, 1.1], which corresponds to the 5σ credible region of predictions by <cit.>. We note that our new prior will not significantly impact the sampling results, provided that the final posterior distributions fall within the set prior range. For comparison purposes, we also conduct a test run using the wider [-6, 6] prior.
Sampling the high-dimensional posterior distribution is a challenging task. In the KiDS-1000-v1 analyses, an ellipsoidal nested sampling algorithm, MultiNest <cit.>, was used. However, recent studies demonstrated that MultiNest systematically underestimates the 68% credible intervals for S_8 by about 10% in current weak lensing analyses (; ; ). A promising alternative is the sliced nested sampling algorithm, PolyChord <cit.>, which provides more accurate estimates of parameter uncertainties, albeit at an almost five times higher computational cost than MultiNest. We therefore use PolyChord for our main analysis, while retaining the faster MultiNest for test runs.
When presenting point estimates and associated uncertainties for parameter constraints, we adhere to the recommendations of <cit.>. We derive our best-fit point estimates from the parameter values at the maximum of the joint posterior (MAP). Given that the MAP reported by the sampling code can be affected by noise due to the finite number of samples, we enhance the precision of the MAP by conducting an additional local optimisation step. This process initiates from the MAP reported by the sampling code and utilises the Nelder-Mead minimisation method <cit.>, a method also employed by . To represent uncertainties linked to these estimates, we compute the 68% credible interval based on the joint, multi-dimensional highest posterior density region, projected onto the marginal posterior of the parameter of interest (PJ-HPD). This hybrid approach is more robust against projection effects stemming from high-dimensional asymmetric posterior distributions than traditional 1D marginal summary statistics (refer to Sect. 6 in for a comprehensive discussion). To facilitate comparison with results from other surveys, we also provide constraints based on the traditional mean and maximum of the 1D marginal posterior, along with their respective 68% credible intervals.
It is worth noting that, as systematic uncertainties from shear calibration are excluded in the construction of our covariance matrix (see Sect. <ref>), the uncertainties derived from the main sampling chains do not fully account for the true uncertainties. To compensate for the additional uncertainties arising from residual shear biases, we employ a forward modelling approach. This method involves shifting the data vector and subsequently the likelihood, based on the estimated residual shear biases, followed by recalculating the MAP. As the adjustment is minor and the covariance matrix remains static, it is not necessary to re-sample the posterior distribution. Instead, we simply need to repeat the previously mentioned local optimisation step. Starting with the original MAP and using the updated likelihood, we can determine the new MAP corresponding to each shift in the data vector. The variation in these MAP estimates represents additional uncertainties introduced by the systematic uncertainties arising from shear calibration. Further details on this process can be found in Appendix <ref>.
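A schematic of this procedure, assuming a callable negative log-posterior from the likelihood pipeline (a placeholder here), could look as follows: starting from the sampler's MAP, the posterior is re-maximised with the Nelder-Mead method, once for the fiducial data vector and once for each data vector shifted by the residual shear biases.

```python
import numpy as np
from scipy.optimize import minimize

def refine_map(neg_log_posterior, theta_start, data_vector):
    """Polish a MAP estimate with a local Nelder-Mead optimisation."""
    res = minimize(neg_log_posterior, theta_start, args=(data_vector,),
                   method='Nelder-Mead',
                   options={'xatol': 1e-4, 'fatol': 1e-4})
    return res.x

# theta_map   = refine_map(neg_log_post, theta_sampler_map, d_fiducial)
# d_shifted   = d_fiducial * (1.0 + dm_i) * (1.0 + dm_j)  # residual-bias shift
# theta_shift = refine_map(neg_log_post, theta_map, d_shifted)
# delta_S8    = theta_shift[idx_S8] - theta_map[idx_S8]   # extra systematic shift
```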
Table <ref> summarises the model parameters and their priors as used in our fiducial analysis. These parameters can be broadly classified into two categories: the first category includes five cosmological parameters, which describe the spatially flat ΛCDM model we employ. We fix the sum of the neutrino masses to a value of 0.06 eV c^-2, where c is the speed of light. This choice is based on <cit.>'s finding of the negligible influence of neutrinos on cosmic shear analyses. The second category encompasses three types of nuisance parameters, accounting for astrophysical and measurement uncertainties as previously discussed: the tomographic redshift offsets δ z, the IA amplitude A_ IA, and the baryon feedback parameter log_10(T_ AGN). We note that all parameters, with the exception of T_ AGN and A_ IA, retain the same priors as those used in the KiDS-1000-v1 cosmic shear analyses. The T_ AGN parameter replaces the previous baryon feedback amplitude parameter associated with the preceding version of hmcode, while the A_ IA parameter adopts a narrower prior for reasons previously discussed.
§ RESULTS
In this section, we present our cosmological parameter constraints and evaluate the robustness of our findings against a variety of systematic uncertainties. We begin by presenting the outcomes from our fiducial analysis in Sect. <ref>. We then assess the impact of shear biases in Sect. <ref>, by quantifying the shifts in final constraints resulting from different shear bias scenarios. This highlights the main development of our work. Additionally, since we implemented several changes to the cosmological inference pipeline, we evaluate the effects of these adjustments by comparing results from multiple setup variations in Sect. <ref>.
§.§ Fiducial analysis results
Our fiducial model has a total of twelve free parameters: five are cosmological parameters specifying the spatially flat ΛCDM model with a fixed total neutrino mass, and the remaining seven are nuisance parameters addressing astrophysical and redshift calibration uncertainties, as detailed in Sect. <ref>. However, not all of these parameters are constrained by the cosmic shear analysis. In this section, we focus on the primary parameters that our analysis constrains. Meanwhile, the posterior distributions for all free parameters are displayed as contour plots in Appendix <ref> for reference.
Table <ref> provides the point estimates along with their corresponding 68% credible intervals for the primary parameter as constrained by our fiducial analysis using the PolyChord sampling code. We display results using three summary statistics: MAP and PJ-HPD, the mean of the 1D marginal posterior, and the maximum of the 1D marginal. As discussed in , each of these approaches has its own advantages and limitations. Specifically, the accurate determination of MAP and PJ-HPD can be challenging, while marginal constraints for multi-dimensional posteriors are prone to projection effects. Aligning with the KiDS convention, we choose the MAP and PJ-HPD constraints as our headline results, but caution against direct comparisons with results from other surveys that might use different summary statistics. The uncertainties we report include additional contributions from the systematic uncertainties associated with our shear calibration, as detailed in Sect. <ref>. These additional uncertainties are overall small compared to the main sampling uncertainties, so when plotting the posterior distributions or conducting extended runs for test purposes, we do not incorporate these uncertainties.
Figure <ref> shows the projected 2D posterior distributions for the parameters Ω_ m and S_8, as derived from our fiducial setups employing PolyChord and MultiNest. We see that MultiNest results yield a roughly 10% narrower width of the posterior distribution compared to PolyChord, aligning with previous findings (; ; ). However, as expected, the results from the two sampling codes show consistency in terms of best-fit values. In addition, we compare these results with those from the cosmic microwave background (CMB) analysis by the Planck satellite, using their baseline ΛCDM chains with the likelihood from their most recent Planck-2018 results <cit.>. An offset is evident between our cosmic shear results and those from Planck-2018. Adopting the Hellinger distance tension metric (; ; ), we detected a 2.35σ tension in the constrained S_8 values. For the constrained parameter set (S_8, Ω_ m), a similar level of tension, 2.30σ, was found using the Monte Carlo exact parameter shift method (; ).
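The tension metric can be sketched as follows for two 1D marginal posteriors evaluated on a common parameter grid (placeholder arrays). The conversion to a Gaussian-equivalent significance shown here uses the relation valid for two equal-width Gaussians and is only indicative of the calibration adopted in the cited works.

```python
import numpy as np

def hellinger_tension(p, q):
    """Hellinger distance between two posteriors sampled on a common grid,
    plus an indicative Gaussian-equivalent tension in units of sigma."""
    p = p / p.sum()
    q = q / q.sum()
    bc = np.sum(np.sqrt(p * q))          # Bhattacharyya coefficient
    h = np.sqrt(1.0 - bc)                # Hellinger distance
    # For two Gaussians of equal width: bc = exp(-t**2 / 4),
    # with t = |mu1 - mu2| / sqrt(sigma1**2 + sigma2**2)
    t_sigma = 2.0 * np.sqrt(-np.log(bc))
    return h, t_sigma
```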
Figure <ref> presents our primary S_8 constraints and compares them with those from other contemporary cosmic shear surveys and the Planck CMB analysis. For ease of comparison, we show all three summary statistics for our fiducial results, while for other surveys, we display their headline values, as per their preferred summary statistics. Overall, our results align well with those from all major contemporary cosmic shear surveys.
We note that our fiducial analysis pipeline is similar to the Hybrid pipeline with one notable difference: while that analysis included a free neutrino parameter, we kept the total neutrino mass fixed. It was shown there that this additional degree of freedom in the cosmological parameter space can slightly increase the projected marginal S_8 values relative to an analysis with a fixed neutrino mass. However, since we refer to their MAP & PJ-HPD results in Fig. <ref>, the comparison should not be influenced by these projection effects (for more details, refer to the discussion in ).
It is interesting to note that our fiducial results align almost identically with the KiDS-1000-v1 re-analysis conducted by , which used the redshift calibration. This alignment arises from a balance of several effects in our analysis. Our improved shear calibration tends to increase S_8, while the enhanced redshift calibration tends to lower it. Moreover, our analysis does not show a significant increase in S_8 when introducing scale cuts, as seen in the KiDS-1000-v1 Hybrid analysis. This helps reconcile the minor difference between our results and those of . We explore these changes in more detail in the following sections.
§.§ Impact of shear biases
The primary aims of this study are to assess the impact of higher-order shear biases on the final parameter constraints and to develop a methodology for effectively addressing shear calibration uncertainties. Both of these aims can be achieved by examining the shifts in the constrained cosmological parameters resulting from different shear bias scenarios. As discussed in Sect. <ref> and Appendix <ref>, the residual shear biases have only a minor effect on the measured data vector. This allows us to determine the shifts in the best-fit values of the constrained parameters using a local minimisation algorithm, such as the Nelder-Mead method <cit.>. These shifts in the best-fit values indicate the additional uncertainties stemming from systematic uncertainties in shear calibration.
Figure <ref> shows shifts in our primary S_8 constraints for different residual shear bias scenarios. For comparison, we also include a shaded region denoting different levels of PJ-HPD credible intervals, as derived from our fiducial PolyChord chain. Apart from the extreme case where no shear calibration is applied, all other residual shear bias scenarios result in shifts less than 10 per cent of the initial sampling uncertainties. Notably, neglecting the higher-order correction for the shear-interplay effect and uncertainties in PSF modelling results in a negligible shift of only -0.03σ. This finding reinforces the reliability of previous KiDS cosmic shear analyses, which did not consider these higher-order effects.
The S_8 shifts, resulting from the input morphology test simulations, indicate additional systematic uncertainties within our shear bias calibration. The generation of these test simulations is detailed in Appendix <ref>. Briefly, we generated six sets of test simulations, where the input values of three morphological parameters of the adopted Sérsic profile (the half-light radius, labelled `size' in the figure; the axis ratio, labelled `q'; and the Sérsic index, labelled `n') were shifted up and down. We observe that shifts in the input galaxy axis ratio lead to the most significant changes in S_8: a -0.10σ shift for increased input axis ratio and a +0.06σ shift for decreased input axis ratio. This behaviour aligns with our expectations for the lensfit code employed in our analysis. As it incorporates prior information on measured galaxy ellipticities during its Bayesian fitting process, it is more sensitive to changes in the distributions of sample ellipticities.
These S_8 shifts, obtained from the test simulations, provide a quantitative measure of the potential impact of inaccuracies in the input morphology and the sensitivity of the lensfit code to the underlying sample morphology distributions. When presenting the S_8 constraints, we account for these systematic uncertainties by including the maximum shifts into the reported uncertainties. In other words, we consider the shifts corresponding to the changes in input axis ratio (represented as dashed lines in Fig. <ref>), from the six sets of test simulations, as additional systematic uncertainties. These are reported alongside the original statistical uncertainties from the main sampling chain. It should be noted that these additional systematic uncertainties are specific to the SKiLLS image simulations and the lensfit shape measurement code used in our analysis. To reduce these uncertainties, future advancements in shear measurements should focus on improving the realism of image simulations and enhancing the robustness of the shear measurement algorithm.
§.§ Impact of altering inference setups
Although our main updates revolve around the shear measurement and calibration, we have also implemented several modifications to the cosmological inference pipeline, drawing upon recent developments from . As such, it is beneficial to conduct some extended runs with various setup configurations.
For these test runs, we employ MultiNest as our sampling code, as it operates approximately five times faster than PolyChord, but at the cost of underestimating the width of the posterior distributions and thus the reported uncertainties by about 10%. However, the best-fit values from MultiNest are not biased (as evident in Fig. <ref>). Thus, comparisons made using MultiNest will yield conservative but unbiased results.
§.§.§ Priors for the NLA model
We begin by testing the prior for the NLA model. As discussed in Sect. <ref>, our fiducial analysis implemented a redshift-independent NLA model with a narrow flat prior for the amplitude parameter A_ IA. This model, motivated by the work of <cit.>, serves as an alternative to the uninformative broad prior previously used. To investigate the impact of this change on our final results, we performed two additional runs: one employing a redshift-independent NLA model with a broad A_ IA prior ranging from [-6, 6], in line with KiDS-1000-v1 analyses, and another allowing for a redshift-dependent IA amplitude, i.e., the NLA-z variant. The redshift evolution is modelled using a power-law of the form [(1+z)/(1+0.62)]^η_ IA, with priors of [-5, 5] for both A_ IA and η_ IA, in line with .
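For completeness, the redshift scaling of the IA amplitude in the NLA-z variant amounts to the following (a minimal sketch; the pivot redshift of 0.62 is taken from the power law above):

```python
def nla_z_amplitude(z, a_ia, eta_ia, z_pivot=0.62):
    """IA amplitude in the NLA-z model; eta_ia = 0 recovers the
    redshift-independent NLA model used in the fiducial analysis."""
    return a_ia * ((1.0 + z) / (1.0 + z_pivot)) ** eta_ia
```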
Figure <ref> presents a comparison of the posterior distributions obtained from the different NLA prior setups, and Table <ref> lists the point estimates for the critical S_8 parameter. We see consistent constraints on S_8 across all setups. The constrained A_ IA under our narrower prior setup also aligns with those from the broad priors, albeit spanning a narrower range due to the constrained prior range, validating the prior range used in our fiducial analysis. Additionally, we observe that the η_ IA parameter is not constrained by the data, suggesting that the use of the NLA-z model may not be necessary for current weak lensing analyses.
§.§.§ Different scale cuts
In our fiducial analysis, we adopted a scale cut for the measured data vectors, ranging from 2 arcmin to 300 arcmin, as suggested by . This is a change from the KiDS-1000-v1 analyses, which used a range of 0.5 arcmin <θ< 300 arcmin. A re-analysis of KiDS-1000-v1 with this new scale cut by led to a 0.7-0.8σ increase in the S_8 constraint. Using mock
analyses, they found that this offset could arise from noise fluctuations 23% of the time.
In light of the updates to our shear measurement, we revisited this test. Interestingly, we observed a smaller difference between the two scale cuts than what was reported by . Specifically, we observed shifts of -0.17σ, -0.40σ, and -0.31σ, corresponding to the MAP & PJ-HPD, mean marginal, and maximum marginal summary statistics, respectively (refer to Table <ref> for exact values).
We attribute this increased robustness against small scale fluctuations to our improved empirical corrections of the PSF leakages into shear measurement. This is supported by Figs. <ref> and <ref>, where we see that the shear signals measured from the KiDS-1000-v2 catalogues exhibit overall smaller systematic errors. We note that <cit.> performed a mock test using the two-point correlation function and identified a change of less than 0.1σ in the S_8 constraints when the detected PSF residuals were incorporated into the KiDS-1000-v1 mock data. Nevertheless, it is plausible that these systematic effects have a more significant influence on COSEBIs, given their use of more sophisticated weighting functions <cit.>. To quantify the improvements brought about by the updated shear measurements regarding the robustness of the COSEBIs, a similar mock analysis based on the COSEBIs statistic is warranted. We consider this an important topic for future study. For the current analysis, the test results simply affirm the robustness of our primary S_8 constraints.
§.§.§ KiDS-1000-v1 setups
To draw a direct comparison with the KiDS-1000-v1 results and evaluate the impact of our improved shear measurements and calibration, we performed a test run using the same inference pipeline and parameter priors as in the KiDS-1000-v1 analyses conducted by and . The differences compared to our fiducial analysis setup include: measurements from scales of 0.5 arcmin to 300 arcmin, use of the older version of hmcode, sampling with the MultiNest code, and a broad A_ IA prior ranging from [-6, 6]. As shown in Fig. <ref>, our test results are well-aligned with the outcomes of the analyses by and . Notably, our new results show an increase in the S_8 value relative to , bringing it closer to the result obtained by .
We re-emphasise that our redshift calibration aligns with that of , who expanded the redshift calibration sample to more than double the size used by (see Appendix <ref> for details). This means that our redshift-related selection function closely mirrors that used in the sample. However, due to changes in the weighting and selection scheme between the KiDS-1000-v2 catalogue and the KiDS-1000-v1 catalogue, our sample cannot be considered as directly comparable to theirs.
To provide a more quantitative understanding of the sample differences among the three analyses, we compared the effective number density of the source sample in our analysis to those used in and . The differences for each tomographic bin are 9.6%, 9.8%, 6.1%, 10.6%, and 2.8% when compared to ; and -1.8%, -1.3%, -0.7%, 0.7%, and 3% when compared to . Here, positive values signify an increase, while negative values denote a decrease. The differences between our catalogue and that of stem from both shear measurement and redshift calibration, whereas the difference between ours and that of arises mainly from the shear measurement, as we used the same SOM for the `gold' selection (see Appendix <ref>). As such, comparing our results directly with those of can provide clearer insights into the impact of our improvements in shear measurements. It is also worth noting that the increased effective number density in high redshift bins compared to is largely due to the increased weighting of faint objects in the updated version of lensfit code. However, this comes at the cost of increased sample ellipticity dispersion, with a maximum increase of 6% found in the fifth bin. These subtle differences in the source catalogues change the noise properties of the samples. Consequently, even with perfect calibration in each study, we would not expect to derive identical cosmological constraints from each analysis.
§ SUMMARY
We conducted a cosmic shear analysis using the KiDS-1000-v2 catalogue, which is an updated version of the public KiDS-1000(-v1) catalogue with respect to shear measurements and calibration. Under the assumption of a spatially flat ΛCDM cosmological model, we derived the constraint S_8=0.776_-0.027-0.003^+0.029+0.002 based on the MAP & PJ-HPD summary statistics. The second set of uncertainties accounts for the systematic uncertainties within our shear calibration. The mean-marginal and maximum-marginal values obtained from the same sampling chain are 0.765_-0.023^+0.029 and 0.769_-0.029^+0.027, respectively. Our results are consistent with earlier results from KiDS-1000-v1 and other contemporary weak lensing surveys, but show a ∼2.3σ level of tension with the Planck cosmic microwave background constraints.
The main improvements in our analysis, relative to the KiDS-1000-v1 cosmic shear analyses, are attributed to the enhanced cosmic shear measurement and calibration. These enhancements were achieved through the updated version of the lensfit shape measurement code, a new empirical correction scheme for PSF contamination, and the newly developed SKiLLS multi-band image simulations, as detailed in . We verified the reliability of the new measurement via a series of catalogue-level null tests proposed by <cit.>. The results indicate that the KiDS-1000-v2 catalogue shows overall better control over measurement systematics compared to the KiDS-1000-v1 catalogues. This improvement in reducing measurement systematics assists in reducing noise in small scale measurements, thereby enhancing the robustness of our cosmological parameter constraints against varying scale cut choices.
Our methodology for shear calibration largely aligns with the one detailed in , where we account for higher-order blending effects that arise when galaxies from different redshifts are blended, as well as the uncertainties in PSF modelling. However, when comparing the outcomes from the shear calibration with and without these higher-order adjustments, we found that these effects have a negligible impact on the present weak lensing analysis, a conclusion that is in line with the findings of <cit.>.
We recommend treating the statistical and systematic uncertainties from the shear calibration separately, given their distinct origins. The statistical uncertainties, which are determined by the simulation volume, can be reduced and are readily incorporated into the covariance matrix used for cosmological inference. On the other hand, systematic uncertainties, associated with the realism of image simulations and sensitivity of the shape measurement algorithm, can be more effectively addressed when considered as residual shear biases post-calibration. Assuming these residual shear biases are small, a forward modelling approach, combined with a local minimisation method, can be used to estimate their impact on the final parameter constraints. In our analysis, these additional systematic uncertainties contribute roughly 8% of the final uncertainty on S_8. However, ongoing efforts to enhance shear measurement and calibration, such as increasing the realism of image simulations through Monte-Carlo Control Loops <cit.> and leveraging new techniques like Metacalibration/Metadetection <cit.> to improve measurement robustness against underlying sample properties, may well lead to a reduction in these additional systematic uncertainties.
In our fiducial analysis, we opted for a redshift-independent NLA model with a narrow flat prior for the IA amplitude parameter, A_ IA, motivated by the work of <cit.>. However, we also investigated two alternative scenarios: one with a broad A_ IA prior for the redshift-independent NLA model, echoing the KiDS-1000-v1 analysis by , and the other, the NLA-z variant, allowing for redshift evolution of the IA amplitude, as per the recent joint DES Y3+KiDS-1000 cosmic shear analysis (). In all three scenarios, we found fully consistent constraints for S_8 and A_ IA, indicating that the impact of the variations among these scenarios is negligible. To better understand the IA signals and their impact on cosmic shear analyses, future tests need to implement more substantial variations in IA models, for instance, the halo model formalism introduced by <cit.>. Such exploration would not only enhance our understanding of the measured IA signals, but also help mitigate correlations between nuisance parameters, thereby improving the precision of future cosmic shear analyses.
We acknowledge support from: the Netherlands Research School for Astronomy (SSL); the Netherlands Organisation for Scientific Research (NWO) under Vici grant 639.043.512 (HHo); the Royal Society and Imperial College (KK); the Polish National Science Center through grants no. 2020/38/E/ST9/00395, 2018/30/E/ST9/00698, 2018/31/G/ST9/03388 and 2020/39/B/ST9/03494 (MB); the Polish Ministry of Science and Higher Education through grant DIR/WK/2018/12 (MB); the Royal Society through an Enhancement Award (RGF/EA/181006) and the Royal Society of Edinburgh for support through the Saltire Early Career Fellowship with ref. number 1914 (BG); the European Research Council (ERC) under Grant number 647112 (CH) and Consolidator Grant number 770935 (HHi, JLvdB, AHW, RR); the Max Planck Society and the Alexander von Humboldt Foundation in the framework of the Max Planck-Humboldt Research Award endowed by the Federal Ministry of Education and Research (CH); the UK Science and Technology Facilities Council (STFC) under grant ST/V000594/1 (CH), ST/V000780/1 (BJ) and ST/N000919/1 (LM); Heisenberg grant of the Deutsche Forschungsgemeinschaft grant Hi 1495/5-1 (HHi); CMS-CSST-2021-A01 and CMS-CSST-2021-A04, NSFC of China under grant 11973070 (HYS); Key Research Program of Frontier Sciences, CAS, Grant No. ZDBS-LY-7013 (HYS); Program of Shanghai Academic/Technology Research Leader (HYS). The results in this paper are based on observations made with ESO Telescopes at the La Silla Paranal Observatory under programme IDs: 088.D-4013, 092.A-0176, 092.D-0370, 094.D-0417, 177.A-3016, 177.A3017, 177.A-3018 and 179.A-2004, and on data products produced by the KiDS consortium. The KiDS production team acknowledges support from: Deutsche Forschungsgemeinschaft, ERC, NOVA and NWO-M grants; Target; the University of Padova, and the University Federico II (Naples). Contributions to the data processing for VIKING were made by the VISTA Data Flow System at CASU, Cambridge and WFAU, Edinburgh. Author contributions: All authors contributed to the development and writing of this paper. The authorship list is given in three groups: the lead authors (SSL, HHo, KK) followed by two alphabetical groups. The first alphabetical group includes those who are key contributors to both the scientific analysis and the data products. The second group covers those who have either made a significant contribution to the data products, or to the scientific analysis.
§ SHEAR BIAS IN TWO-POINT STATISTICS
When calibrating the shear measurements in the two-point correlation function, it is usually assumed that the correlations involving the shear bias can be ignored, which includes correlations between different tomographic and spatial angular bins. This simplification leads to the following relationship between the true correlation function of cosmic shear in tomographic bins i,j, denoted as ξ^ij, and the measured signal ξ̂^ij:
ξ̂^ij(θ) = ⟨γ̂^i(θ') γ̂^j(θ'+θ)⟩
= ⟨[1+m^i(θ')] [1+m^j(θ'+θ)] γ^i(θ') γ^j(θ'+θ)⟩
= ⟨[1+m^i(θ')] [1+m^j(θ'+θ)]⟩ ξ^ij(θ)
≃ (1+m^i) (1+m^j) ξ^ij(θ) ,
where m^i is estimated by averaging over all sources in a given tomographic bin i, and we use ⟨·⟩ to denote the correlation function. We also assumed that the shear bias is independent of the underlying shear to simplify the equation. The result of Eq. (<ref>) allows us to average the multiplicative biases over all the galaxies in a given tomographic bin to mitigate the individual noisy bias estimation.
However, in principle, the shear bias can be scale dependent due to spatial fluctuations in source density (e.g. ). With SKiLLS, we can directly examine these correlations by measuring the shear bias in the two-point estimators. We measured the shear correlation function in the SKiLLS mock catalogue using Eq. (<ref>). Since we know the true ξ^ij_+(θ)=γ_ input^2 in simulations, where γ_ input is the amplitude of the constant input shear, we can estimate the shear bias in the two-point correlation function directly by comparing the measured ξ̂^ij_+ to the input ξ^ij_+ following Eq. (<ref>).
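A minimal sketch of this comparison, assuming a constant input shear of amplitude gamma_input and a measured xi_+^{ij}(θ) from the simulations (placeholder arrays):

```python
import numpy as np

def delta_m_xi(xi_plus_meas, gamma_input, m_i, m_j):
    """Difference between the shear bias measured in the two-point estimator
    and the product of the per-bin average biases, Delta m_xi(theta)."""
    two_point_bias = xi_plus_meas / gamma_input**2   # = <(1+m^i)(1+m^j)>(theta)
    return two_point_bias - (1.0 + m_i) * (1.0 + m_j)
```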
Figure <ref> shows the difference between the shear biases with and without considering its correlations, defined as Δ m_ξ≡⟨[1+m^i(θ')] [1+m^j(θ'+θ)]⟩ - (1+m^i) (1+m^j). It shows that the difference is negligible across all scales and tomographic bins, in agreement with the statistical uncertainties of our shear calibration, which are represented by the shaded regions. These findings confirm that we can neglect the correlations between shear biases in the current KiDS weak lensing analysis.
§ SOM REDSHIFT CALIBRATION
This appendix provides information on the redshift calibration reference sample and SOM configurations used in our analysis. For a more comprehensive overview and validation of the SOM redshift calibration method in the KiDS analysis, we refer to <cit.>, <cit.> and .
We employed the fiducial spectroscopic sample described in as our calibration reference sample. This sample comprises spectroscopic redshift estimates (spec-zs) from various spectroscopic surveys that overlap with KiDS fields, enabling us to assign KiDS photometric measurements to objects in the reference sample. In cases where an object had multiple spectroscopic measurements, defined a specific hierarchy to select the most reliable redshift estimates based on the quality of the measurements. For further details on the adopted spectroscopic samples and the compilation procedure, readers are referred to Appendix A of .
For our calibration, we used a 101×101 hexagonal SOM trained on the r-band magnitude and 36 colours derived from the PSF-matched, list-driven nine-band ugriZYJHK_s photometry from the KiDS+VIKING surveys. This SOM is identical to the fiducial SOM constructed in . We segregated the reference and target samples into the trained SOM cells separately for each tomographic bin, allowing us to create comparable groupings between the spectroscopic and photometric sources in each bin. During this process, we further categorised the original SOM cells using a hierarchical cluster analysis implemented by the `hclust' function within the R Stats Package[<https://www.rdocumentation.org/packages/stats/versions/3.6.2/topics/hclust>] to increase the number of galaxies per grouping. We adopted the same number of clusters per bin (4000, 2200, 2800, 4200, and 2000) as , who determined these numbers using simulations produced by <cit.>.
To mitigate the effects of photometric noise and the incompleteness of the reference sample, we applied an additional selection step to the SOM groupings. We excluded any grouping where the mean spectroscopic redshift of the reference sample z_ spec and the mean photometric redshift of the target sample z_ B exhibited a significant discrepancy, defined as |z_ spec - z_ B| > 5σ_ mad. Here, σ_ mad represents the normalised median absolute deviation of all SOM groupings, which we calculated to be 0.122 in our case. This step allowed us to define the KiDS `gold' sample, which we used to compute the redshift distributions and perform the cosmic shear analysis.
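As a concrete illustration of the gold-sample selection, the sketch below flags SOM groupings using the |z_spec - z_B| > 5σ_mad criterion; it assumes σ_mad is the normalised median absolute deviation of the per-grouping differences, which is one plausible reading of the definition above.

```python
import numpy as np

def select_gold_groupings(z_spec_mean, z_B_mean, n_sigma=5.0):
    """Flag the SOM groupings that enter the 'gold' sample.

    z_spec_mean : mean spectroscopic redshift of the reference sample per grouping
    z_B_mean    : mean photometric redshift of the target sample per grouping
    """
    diff = np.asarray(z_spec_mean) - np.asarray(z_B_mean)
    # Normalised median absolute deviation over all groupings (assumed definition of sigma_mad).
    sigma_mad = 1.4826 * np.median(np.abs(diff - np.median(diff)))
    keep = np.abs(diff) <= n_sigma * sigma_mad
    return keep, sigma_mad
```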
§ SYSTEMATIC UNCERTAINTIES FROM THE SHEAR CALIBRATION
In this appendix, we outline our approach to address the systematic uncertainties arising from shear calibration. Our methodology involves two primary steps: In Sect. <ref>, we quantify the potential residual biases after implementing our simulation-based shear calibration. In Sect. <ref>, we propagate these systematic uncertainties into the final uncertainties of the estimated cosmological parameters.
We propose a separate accounting of the shear calibration uncertainties, as it is considered more accurate and informative than the traditional approach, which uses nominal shear calibration uncertainties that are deliberately overestimated to encompass potential systematic uncertainties arising from shear calibration. Our approach clearly illustrates the extent to which the final cosmological parameters of interest are influenced by these systematic uncertainties from shear calibration.
Furthermore, as mentioned in Sect. <ref>, these systematic uncertainties have fundamentally different origins from the statistical uncertainties incorporated in the covariance matrix. They represent the fundamental limitations of current simulation-based shear calibration methods. The limitations inherent in these systematic uncertainties cannot be eliminated by merely increasing the scale of image simulations. However, they can be mitigated by empirically enhancing the realism of the image simulations, for example, using the Monte-Carlo Control Loop method <cit.>, or by improving the robustness of the shear measurement algorithm, such as the Metacalibration/Metadetection method <cit.>.
§.§ Quantifying residual shear biases with sensitivity analysis
Residual biases may persist after simulation-based shear calibration due to imperfect alignment between simulations and data, as elucidated in previous studies. These discrepancies pose challenges for shear calibration methods dependent on image simulations and underscore the need for re-weighting simulations to more closely align with the data. However, given that intrinsic galaxy properties in real data are unknown, this re-weighting process relies on noisy measured properties, rendering it vulnerable to calibration selection biases, as discussed in earlier work. The uncertainties linked with the measured properties cause galaxies to be intermixed among defined bins, leading to the up-weighting or down-weighting of certain galaxies. As a result, even if the re-weighted sample aligns with the data in terms of the distribution of measured properties, this does not guarantee that the intrinsic properties are identical. In other words, shear biases can still vary between two samples with identical distributions of apparent measured properties. Our aim is to quantify these residual biases and incorporate them into the final uncertainties of cosmological parameters.
The SKiLLS multi-band image simulations used in this analysis incorporate several enhancements, informed by insights gathered from previous KiDS simulation studies (, ). These improvements include: reproducing variations in star density, PSF, and noise background across the KiDS footprint; incorporating faint galaxies down to an r-band magnitude of 27 to account for correlated noise from undetected objects (e.g. ); including realistic clustering from N-body simulations to address blending effects (e.g. ); and adopting an end-to-end approach for photo-z estimation to account for photo-z measurement uncertainties. These improvements augment the robustness of the shear biases estimated from SKiLLS against various observational conditions.
In an investigation on the propagation of observational biases in shear surveys, <cit.> demonstrated that the measured shear power spectrum is, to first order, predominantly influenced by the mean of the multiplicative bias field across a survey. This suggests that if the shear bias estimated from simulations accurately reflects the mean value of the targeted sample, the shear calibration will be robust enough for KiDS-like cosmic shear analyses. Therefore, we conclude that potential residual biases related to observational conditions have negligible influence on our shear calibration, and we focus on systematic uncertainties arising from galaxy morphology uncertainties, specifically the assumed Sérsic profile and its parameters derived from Hubble Space Telescope observations <cit.>. For a model-fitting shape measurement code like lensfit, these galaxy morphology uncertainties are the main sources of residual shear biases after implementing the simulation-based shear calibration.
The deviation from the Sérsic profile is challenging to address for the current SKiLLS simulations, as our copula-based learning algorithm requires a parameterised model for its application. However, the Sérsic model has been validated as sufficient for KiDS-like analyses in previous work that used the same morphology catalogue as ours. Thus, we focus on the measurement uncertainties of the Sérsic parameters: half-light radius, axis ratio, and Sérsic index. We first examined the fitting uncertainties reported by <cit.> to assess the accuracy of these parameters in our input catalogue. We found that the median relative uncertainties for these parameters are a smooth function of galaxy magnitude, as shown in the top panels of Fig. <ref>. This allows us to capture these correlations through simple linear interpolation.
We interpreted these relative uncertainties as indicators of the systematic uncertainties in our input morphology. We assumed the most extreme scenarios, in which these measured statistical uncertainties are all caused by a coherent bias in the same direction. Consequently, we adjusted all galaxies in our input sample in the same direction, with the amplitude of the adjustment determined based on their r-band magnitude using a simple linear interpolation of the measured median correlations. We examined shifts towards both larger and smaller values and considered the three Sérsic parameters separately. This resulted in six test simulations corresponding to the six different sets of variations in input morphology parameter values. The input parameter distributions for these test simulations, as shown in the middle panels of Fig. <ref>, are compared to the distributions of the fiducial simulations. A clear shift of the entire distribution is evident, suggesting that our test simulations represent the most extreme scenarios in which the measured statistical uncertainties are coherently biased in the same direction, a situation that is unlikely in reality. Therefore, the residual biases we identified from these test simulations provide a conservative estimate.
We applied the same data analysis procedures to the test simulations as we did to the fiducial simulations, including shear and redshift estimates. We also followed the same re-weighting procedure for the test simulations as for the fiducial simulations, ensuring that the calibration selection biases are also captured. The differences in shear biases between these test simulations and our fiducial simulation are illustrated in the bottom panels of Fig. <ref>. The small differences indicate that the residual shear biases, after implementing our fiducial shear bias calibration, are insignificant.
§.§ Propagating residual shear biases with forward modelling
Accurately incorporating the systematic uncertainties from shear calibration into the covariance matrix presents a challenge, as residual shear biases directly scale the data vector, as shown in Eq. (<ref>). A more direct approach is to assess the shift in the measured shear signal caused by the residual shear biases and evaluate how these data vector shifts influence the constrained cosmological parameters. Given the minor residual shear biases illustrated in Fig. <ref> and the unchanged covariance, it is not necessary to reiterate the sampling of the posterior distributions for each shift. Instead, we can implement a local minimisation algorithm to find nearby best-fit values for each shift, using starting points from the fiducial sampling chain. The range of these new best-fit values, each associated with a shift, indicates the additional systematic uncertainties introduced by the residual shear biases.
This approach naturally integrates with our existing cosmological inference method, as outlined in Sect. <ref>, which already requires an additional local optimisation step to refine the best-fit values identified by the sampling code. We simply replicated this optimisation step, using the original best-fit value as the starting point and the shifted likelihood to determine the best-fit values associated with various alterations in measured signals. The variability in these test best-fit values provides an expanded credible region for the inferred parameters, thereby representing the systematic uncertainties from shear calibration. We included these additional uncertainties when presenting the point estimates of our primary parameters (see Sect. <ref> for details).
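The following sketch shows the kind of local re-optimisation step described above; `neg_log_like` is a hypothetical stand-in for the full likelihood pipeline, and Nelder-Mead is only one possible choice of local optimiser.

```python
import numpy as np
from scipy.optimize import minimize

def refit_for_shifts(neg_log_like, theta_bestfit, data_fid, shifts):
    """Re-optimise the likelihood for shifted data vectors (a sketch).

    neg_log_like  : callable(theta, data) -> -ln L, standing in for the full pipeline
    theta_bestfit : best-fit parameters taken from the fiducial sampling chain
    data_fid      : fiducial measured data vector
    shifts        : data-vector shifts induced by the residual shear biases
    """
    refits = []
    for d_shift in shifts:
        res = minimize(neg_log_like, x0=np.asarray(theta_bestfit),
                       args=(np.asarray(data_fid) + d_shift,), method='Nelder-Mead')
        refits.append(res.x)
    refits = np.array(refits)
    # The spread of the refitted values around the fiducial best fit gives the extra
    # systematic uncertainty attributed to shear calibration.
    return refits, refits.max(axis=0) - refits.min(axis=0)
```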
§ CONTOUR PLOTS FOR ALL FREE PARAMETERS
In this appendix, we provide two supplementary contour plots that display the posterior distributions of all twelve free parameters from our fiducial analyses, as produced by both the PolyChord and MultiNest sampling codes. The overall concordance between the results generated by PolyChord and MultiNest is evident.
Divide and Repair: Using Options to Improve Performance of Imitation Learning Against Adversarial Demonstrations
Prithviraj Dasgupta
Distributed Intelligent Systems Section
Information Technology Division
Naval Research Laboratory, Washington, D. C., USA
E-mail: [email protected]
June 2023
Abstract
We consider the problem of learning to perform a task from demonstrations given by teachers or experts, when some of the experts' demonstrations might be adversarial and demonstrate an incorrect way to perform the task. We propose a novel technique that can identify parts of demonstrated trajectories that have not been significantly modified by the adversary and utilize them for learning, using temporally extended policies or options. We first define a trajectory divergence measure based on the spatial and temporal features of demonstrated trajectories to detect and discard parts of the trajectories that have been significantly modified by an adversarial expert and could degrade the learner's performance if used for learning. We then use an options-based algorithm that partitions trajectories and learns only from the parts of trajectories that have been determined as admissible. We provide theoretical results of our technique to show that repairing partial trajectories improves the sample efficiency of the demonstrations without degrading the learner's performance. We then evaluate the proposed algorithm for learning to play an Atari-like computer-based game called LunarLander in the presence of different types and degrees of adversarial attacks on demonstrated trajectories. Our experimental results show that our technique can identify adversarially modified parts of the demonstrated trajectories and successfully prevent the learning performance from degrading due to adversarial demonstrations.
§ INTRODUCTION
Learning from demonstrations is a widely-used form of machine learning where a teacher or expert provides demonstrations of how to perform the learning task to speed up the learning process <cit.> in the context of reinforcement learning <cit.>. It has been used in many successful applications of machine learning algorithms including autonomous driving <cit.>, robotic manipulation <cit.>, and human-robot interaction <cit.>. Conventionally, the experts demonstrating the task are assumed to be benign and show the correct way of performing the task. However, as machine learning-based autonomous systems become more pervasive, they are exposed to demonstrations from a variety of sources. Some of these demonstrations might be from adversarial experts that give incorrect demonstrations with the intention of making the autonomous system behave in incorrect and unintended ways. To address this problem, researchers have developed techniques for learning reliably in the presence of adversarial expert demonstrations <cit.>. The main idea in most of these techniques is to use an eligibility metric, such as a confidence measure, on trajectories or temporal sequences of state-action pairs representing expert demonstrations, followed by accepting or rejecting the trajectories based on that metric. These techniques work with full or end-to-end (initial state to final state) trajectories, that is, the eligibility metric is calculated for the full trajectory, and, if found ineligible, the full trajectory is discarded. In this paper, we posit that even though the full trajectory might cause the learning task to fail, there could be parts of the trajectory that were benign, possibly show a new way of performing a part of the task, and could benefit the learning process. This insight is based on the observation that many adversarial attacks on machine learning algorithms are composed by modifying the input (e.g., training data examples for supervised learning <cit.> or demonstrated trajectories for reinforcement learning <cit.>) only at certain, strategic features or locations, instead of all across the input. To address the problem of adversarial learning from demonstrated trajectories while retaining usable parts of the trajectories, we propose a novel technique using temporally extended policies or options <cit.>. Our technique consists of two steps: first, we develop a divergence measure that can indicate the degree of deviation in expert demonstrations with respect to a small set of demonstrations that are guaranteed to be benign. We then use options to partition demonstrated trajectories and use the divergence measure to selectively accept or discard parts of demonstrated trajectories. We have provided theoretical analyses to show that our proposed technique of accepting only non-adversarial portions of trajectories for learning can prevent degrading of the learner's performance. We have also validated the technique using different types and degrees of attacks made by an adversary while learning to play an Atari-like game called LunarLander using a form of learning from demonstrations called imitation learning. Our results show that our proposed technique can be used to identify and learn only from acceptable parts of demonstrated trajectories to improve the rewards from imitation learning in the presence of adversarial demonstrations. 
To the best of our knowledge, our work is one of the first attempts at integrating divergence measure with options to address the problem of adversarial learning from demonstrations.
The rest of this paper is structured as follows: in the next section, we provide an overview of relevant literature in adversarial reinforcement learning focusing on imitation learning. We then introduce the mathematical framework for the problem, measures for characterizing demonstrations given in the form of trajectories, and our option-based algorithm for partitioning and using acceptable parts of demonstrated trajectories for imitation learning. Sections <ref> and <ref> provide the theoretical and experimental evaluation results of our proposed techniques, respectively, and, finally, we conclude. A preliminary version of this research is in <cit.>. Compared to that version, we have thoroughly rewritten the paper, formalized the mathematical framework, proposed new algorithms, and added new theoretical and experimental results.
§ RELATED WORK
Adversarial learning has gained prominence over the past decade as an essential means to guarantee desired behavior of machine learning-based systems deployed in the real world. Here we discuss relevant literature on adversarial reinforcement learning (RL); a comprehensive survey of adversarial supervised learning is in <cit.>.
Early researchers considered adversarial RL in the context of an RL agent learning suitable actions to play a competitive game like keep-away soccer against a player called an adversary <cit.>, where the adversary's intent was to defeat the RL agent, albeit via fair play instead of using malicious tactics such as incorrect actions to misguide the RL agent. Subsequently, researchers proposed techniques where the expert demonstrator modifies the trajectory it demonstrates either indirectly or directly. In the former direction, researchers have considered including a risk term representing the demonstrator's possible deviations from optimal trajectories inside the Q-value function used by an RL agent to determine its policy <cit.>. In the latter direction, Mandlekar et al. <cit.> proposed a technique where the demonstrator directly modifies a valid trajectory using a perturbation technique like fast gradient sign method (FGSM) <cit.> to create adversarial trajectories that are then demonstrated to the RL agent. The RL agent trains with both clean and adversarial demonstrations so that the learned policy can perform effectively even in the presence of adversarial demonstrations. Our work in this paper is complementary to this research and investigates options as a means to improve the rewards received by an RL agent in the presence of adversarial trajectory demonstrations.
Recently, authors <cit.>, <cit.> have also investigated adversarial RL as a competitive zero-sum game where an adversarial demonstrator and an RL agent interact with each other but the learning objectives of the demonstrator and the RL agent are known to be directly contradictory to each other. Experimental results with simulated demonstrations of body movements on robotic figures showed that the demonstrator could successfully use its calculated policies to determine actions that misguided the RL agent to learn incorrect actions and lose stability instead of learning its intended task like walking or kicking a ball. In contrast to these scenarios where the demonstrator explicitly reveals its motive to make the RL agent fail by selecting incorrect states and actions, our research considers a more practical scenario where the demonstrator tries to stealthily modify some demonstrations that could make the RL agent fail, without revealing the demonstrator's adversarial motives to the RL agent.
Recently, similar to some of the findings in our paper, authors <cit.> have demonstrated that adversarial attacks on reinforcement learning result in reduced rewards to the learning agent in simulated and physical robotic mobility tasks.
Another direction on adversarial RL integrated the techniques of inverse reinforcement learning <cit.> and generative adversarial networks <cit.> in the generative adversarial imitation learning (GAIL) framework <cit.> where trajectories are generated both by an expert using the expert policy, and, by a generator using the policy being learned. A discriminator evaluates the source of these trajectories and the learned policy is deemed as converged when the discriminator is unable to distinguish whether the trajectory was generated by the expert versus the generator. GAIL has also been extended to state-only observations with minimum demonstrations using sparse action guided regularization <cit.>, and, to generative adversarial imitation from observation (GAIfO) that uses a cost function that depends on state observations only <cit.>. The main difference between our work and GAIL is that, whereas in GAIL the adversary or generator's objective is to update the policy being learned for faster convergence to an optimal policy, our work considers that the adversary's objective is to demonstrate incorrect trajectories to misguide the learner that is learning the policy.
Our research is closely related to techniques for imitation learning with imperfect expert demonstrations. In these techniques, a rank or confidence score for each trajectory is provided either as input or via learning. This score is then used to update the trajectory's rewards and selectively include the trajectory in the training during learning. In <cit.> trajectories are associated with a score or rank that is provided as input or self-generated and used to revise the rewards of sub-optimal trajectories using inverse reinforcement learning. In <cit.>, authors proposed techniques called 2IWIL and ICGAIL that use semi-supervised inverse reinforcement learning techniques to calculate a confidence score for unlabeled trajectories while using a small set of confidence score-labeled trajectories. An inverse dynamics function is learned in <cit.> to calculate a transformed trajectory from each expert trajectory followed by using a distance measure between the expert and transformed trajectory to determine a feasibility score for the expert trajectory. In <cit.>, anomaly detection between trajectories to boost or penalize the rewards associated with a trajectory is proposed. Independent of imitation learning, trajectory classification and trajectory anomaly detection techniques <cit.> have been proposed in literature to determine if the path followed by a vehicle to travel between two locations conforms to the usual set of travel routes between those locations. In this paper, instead of determining confidences or recalculating rewards for demonstrated trajectories, we first partition demonstrated trajectories and then repair or discard trajectory parts based on a metric calculated from spatial and temporal features of trajectories. Some of the aforementioned techniques such as confidence measures, feasibility scores and trajectory anomaly metrics could also be used in conjunction with our technique to make the decision to repair or discard demonstrated trajectories.
Options or hierarchically abstract policies have been proposed as a framework to improve the planning quality and computation time of policies <cit.>. Recently, the option-critic architecture <cit.> has generalized the problem of determining options for a task using two components called the option and critic that work in tandem with each other. The option component evaluates the options based on current parameters while the critic component updates the parameters of the policy underlying options by calculating value and objective functions. In <cit.> authors have proposed methods to automatically calculate options from data without using human-specified parameters and option-related information. Our work in this paper applies the framework of options to address issues in adversarial reinforcement learning.
§ IMITATION LEARNING WITH ADVERSARIAL EXPERTS
Preliminaries. We formalize the reinforcement learning framework using a Markov Decision Process (MDP) given by (S, A, T, R, γ) where S denotes the set of states and A denotes the set of actions for the learning agent, T denotes a state transition function specifying the forward dynamics model of the environment, where T(s, a, s') is the probability of the agent reaching state s' when it takes action a at state s, R: S × A → ℝ denotes a reward function that gives a reward received by the agent by taking action a at state s, and γ is a discount factor. A policy π: S → [0,1]^|A| is a state to action mapping that prescribes a probability distribution P(A) over the action set. The objective of the RL algorithm is to determine an optimal policy that maximizes the expected rewards, that is, π^* = max_π𝔼(∑_t=0^∞γ^t R(s_t, a_t)). Let P^* = P(s|π^*) = P(s, a^*=π^*) denote the probability distribution of state-action pairs while following the optimal policy π^*. In imitation learning, human experts provide demonstrations in the form of state-action sequences called trajectories that represent the policy. The i-th trajectory is denoted by τ_i^π = (s_i,k,a_i,k)_k=0^H, where π is the policy used to generate the actions in τ_i and H denotes an episode's horizon or the average length of a trajectory. For the sake of legibility, in the rest of the paper, we use a_i,k = π(s_i,k) as a shorthand for a_i,k = max_a π(s_i,k).[Usually the expert demonstrates actions, a_i,k, only and the states are given by the agent's forward dynamics model T(s_i,k, a_i,k, s'_i,k)] Let π_θ denote the policy learned using imitation learning where θ is a policy parameter (e.g., a set of weights in a policy network). The objective of imitation learning is to determine the optimal policy by finding an optimal policy parameter θ^* that minimizes the expected loss between the actions from the optimal policy provided via the expert demonstrations and the actions as per the learned policy π_θ, that is, θ^* = min_θ𝔼_(s_i,k, a^*_i,k) ∼ P^* L(a^*_i,k, π_θ(s_i,k)). It is assumed that the expert performs its actions following the optimal policy, so a^*_i,k = π^*(s_i,k), and, consequently, the state-action pairs in the expert trajectories conform to P^*, that is, ∀ i, k, (s_i,k, a^*_i,k) ∼ P^*. The value of policy π is given by V^π = 𝔼[∑_i=0^H R(s_i, a_i): a_i = π(s_i)].
For our problem setting, we consider a mix of benign and adversarial experts. Benign experts provide clean trajectories to the learner that follow the optimal policy and demonstrate the correct way to perform the task. We denote 𝕋_clean as the clean trajectory set, π_clean as the policy learned via imitation learning from clean trajectories and τ_clean as a trajectory generated while using policy π_clean. An adversarial expert, on the other hand, demonstrates adversarial trajectories that are constructed by modifying clean trajectories, and, consequently, do not conform to the optimal or clean policy. The adversarial trajectory set, adversarial policy and an adversarial trajectory are denoted by 𝕋_adv, π_adv and, τ_adv respectively. By definition of π_adv not being an optimal policy, it yields lower value than π_clean, that is, V^π_adv/V^π_clean < 1.
For our problem, we denote a trajectory set as: 𝕋 = (𝕋_clean∪𝕋_adv, η, {γ_i}), where η∈ [0, 1] denotes the fraction of trajectories that have been modified and γ_i∈[0, 1] denotes the fraction within the i-th trajectory that has been modified. Mathematically, η = |𝕋_adv|/|𝕋_clean∪𝕋_adv| and γ_i = |{k : (s_i,k, a_i,k) ∈τ_i ∧ a_i,k≠π_clean(s_i,k)}| / |τ_i|. The values of 𝕋_clean∪𝕋_adv, η and γ_i are known to the adversarial expert while the learning agent only knows 𝕋_clean∪𝕋_adv.
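For illustration, the short Python sketch below computes η and the γ_i values from a trajectory set, assuming trajectories are lists of (state, action) pairs and that a callable clean policy is available; as noted above, only the adversary is in a position to evaluate it.

```python
def perturbation_stats(trajectories, pi_clean):
    """Compute eta (fraction of modified trajectories) and gamma_i per trajectory.

    trajectories : list of trajectories, each a list of (state, action) pairs
    pi_clean     : callable state -> action under the clean policy
    """
    gammas, n_modified = [], 0
    for traj in trajectories:
        # Count the actions that disagree with the clean policy.
        changed = sum(1 for s, a in traj if a != pi_clean(s))
        gamma = changed / len(traj)
        gammas.append(gamma)
        n_modified += int(gamma > 0)
    eta = n_modified / len(trajectories)
    return eta, gammas
```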
We have divided our proposed technique into two parts. First, we describe the options framework to learn policies for sub-tasks from partial trajectories. Then, we develop a trajectory divergence measure between a demonstrated trajectory and known or clean trajectories that can be used to decide whether to accept or reject the demonstrated trajectory parts.
§.§ Policy Repair Using Options
We propose an options-based framework for policy repair where, instead of learning a policy over the entire state-action space, the state-action space is partitioned into subsets and a policy is learned for each part. Without loss of generality, we assume that the partition is done temporally - a trajectory τ is partitioned into M equal parts, and the i-th part (i=0, ..., M-1) is denoted by τ_i. Intuitively, this partition corresponds to dividing the end-to-end or full-horizon task into subtasks. The main idea in options is to learn a policy for each sub-task. Formally, an option for the i-th part is defined as ω_i = (I_i, π_i, β_i), where I_i is the set of initiation or start states for sub-task i, π_i is the optimal policy for solving sub-task i, and β_i is the set of termination or end states for sub-task i. As before, π_θ_i^* is learned via imitation learning and given by θ_i^* = min_θ_i𝔼_s, a^* ∼ P_i^* [L(a^*, π_θ_i(s))], where P_i^*(s) = P(s|π_i^*).
§.§.§ Using Trajectory Divergence to Accept/Reject Trajectories
A core aspect of our options-based policy repair technique is to be able to determine the divergence between an unknown (whether it is benign or adversarial) demonstrated trajectory and a clean trajectory, that is, one that is guaranteed to be non-adversarial. This divergence measure can then be used to decide whether to accept or reject the demonstrated trajectory. However, a straightforward approach of making the trajectory accept/reject decision based on a single metric-based divergence measure might not work. For instance, an adversarial expert might demonstrate trajectories that have low divergence with clean trajectories, but inject a few incorrect moves or actions at key states in the trajectories that result in the agent either failing to do the task or doing it sub-optimally.
Again, a demonstrated trajectory might represent a previously unseen but correct and possibly improved way of doing the task. This trajectory would have a higher divergence measure with known, clean trajectories and if the accept/reject decision is based on the divergence measure only, it would end up getting an incorrect, reject decision. To address these challenges, we propose a divergence measure that combines two commonly used trajectory divergence measures with a supervised learning-based classification technique, as described below.
Occupancy Measure (OC). The first trajectory divergence measure we use is the occupancy measure <cit.>. It represents the number of times state-action pairs along a given expert trajectory are visited while using the (clean) policy. The occupancy measure of a demonstrated trajectory τ = ((s_0, a_0), (s_1, a_1), …, (s_|τ|, a_|τ|)) with respect to a clean trajectory τ_clean generated while following the clean policy π_clean, is given by:
OC_τ = ∑_(s_i, a_i) ∈τ_cleanπ^*(a_i|s_i) ∑_t=0^|τ|γ^t p(s_t=s_i | π_clean),
where γ∈ [0, 1] is a discount factor. Clearly, OC_τ has higher values when the demonstrated trajectory, τ, is closer or similar to the clean trajectory, τ_clean. The minimum value of OC_τ = 0 happens when there is no overlap between the state-action pairs of the two trajectories. The occupancy measure is a suitable metric for making the accept/reject decision of a demonstrated trajectory if it overlaps with many state-action pairs of clean trajectories. However, a limitation of using it as the only decision variable is that if the demonstrated trajectory is non-adversarial and similar to a clean trajectory, but overlaps with very few or no state-action pairs in it, the occupancy measure would be close to or equal to zero and give an incorrect decision of rejecting the trajectory.
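A simplified, discretised sketch of this measure is given below; it follows the verbal description above (how often the demonstrated trajectory's state-action pairs are visited under the clean policy) rather than reproducing the formula exactly, and the binning precision and helper names are illustrative assumptions.

```python
import numpy as np
from collections import defaultdict

def occupancy_measure(traj, clean_rollouts, pi_clean_prob, gamma=0.99, decimals=1):
    """Empirical occupancy-measure overlap between a demonstrated trajectory and
    clean-policy roll-outs (a discretised sketch, not the authors' implementation).

    traj           : list of (state, action) pairs from the demonstrated trajectory
    clean_rollouts : list of roll-outs generated with the clean policy
    pi_clean_prob  : callable (state, action) -> probability under the clean policy
    decimals       : states are rounded to this precision to form discrete bins
    """
    key = lambda s: tuple(np.round(np.asarray(s), decimals))

    # Discounted visitation frequency of (binned) states under the clean policy.
    visits = defaultdict(float)
    for rollout in clean_rollouts:
        for t, (s, _) in enumerate(rollout):
            visits[key(s)] += gamma ** t
    visits = {k: v / len(clean_rollouts) for k, v in visits.items()}

    # Weight the clean policy's action probability by that visitation along traj.
    return sum(pi_clean_prob(s, a) * visits.get(key(s), 0.0) for s, a in traj)
```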
Fréchet Distance (FD). Our second trajectory divergence measure is the Fréchet distance <cit.>. It gives the distance between two polylines while considering the spatial and temporal ordering of the points on them. Mathematically, the Fréchet distance between an expert trajectory τ and a clean trajectory τ_clean is given by:
FD_τ = min_α, βmax_t ∈ [0,1] d(τ(α(t)), τ_clean(β(t))),
where, d(·) gives the Euclidean distance or L2 norm between two trajectory points on τ and τ_clean respectively. α, β are functions that take an argument t ∈ [0,1] and return an index into τ and τ_clean respectively, with α(0) = β(0) = 0, and α(1) = |τ|, β(1) = |τ_clean|. The Fréchet distance calculation iterates over different functions for α and β, determines the maximum distance between ordered pairs of points on τ and τ_clean for each α and β combination iterated over, and, finally, returns the minimum of these maximum distances. When both expert and clean trajectories are identical, the Fréchet distance has its smallest value, 0. As the two trajectories get further apart, the Fréchet distance increases. For the last example from the previous paragraph, using the Fréchet distance rectifies the incorrect decision given by occupancy measure as the Fréchet distance for a demonstrated trajectory with high similarity but little or no overlap in state-action pairs with a clean trajectory would have a low value and yield a correct decision to accept the trajectory.
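The sketch below computes the standard discrete Fréchet distance via dynamic programming, which approximates the continuous definition above for trajectories given as ordered lists of state vectors.

```python
import numpy as np

def frechet_distance(traj_a, traj_b):
    """Discrete Frechet distance between two trajectories (lists of state vectors).

    A dynamic-programming approximation that respects the temporal ordering
    of the points on both trajectories.
    """
    P = [np.asarray(s) for s in traj_a]
    Q = [np.asarray(s) for s in traj_b]
    n, m = len(P), len(Q)
    ca = np.zeros((n, m))

    for i in range(n):
        for j in range(m):
            d = np.linalg.norm(P[i] - Q[j])
            if i == 0 and j == 0:
                ca[i, j] = d
            elif i == 0:
                ca[i, j] = max(ca[i, j - 1], d)
            elif j == 0:
                ca[i, j] = max(ca[i - 1, j], d)
            else:
                ca[i, j] = max(min(ca[i - 1, j], ca[i - 1, j - 1], ca[i, j - 1]), d)
    return ca[n - 1, m - 1]
```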
To make an accept/reject decision of a trajectory based on its occupancy measure and Fréchet distance values, we train a classifier, χ: OC × FD →{ Accept, Reject} via supervised learning. The classifier's training set contains the OC and FD values sampled from different clean and adversarial trajectories, along with a label, λ_τ for each trajectory sample, given by:
λ_τ = Accept if R(τ) ≥ (1-ϵ_p)R_max, and λ_τ = Reject otherwise.
Handling Benign Divergent Trajectories. The classifier χ suffices to admit trajectories based on the similarity of their spatio-temporal features to known, benign trajectories. However, a demonstrated trajectory that shows a novel way to perform the task and is suitable for learning from, might have a high divergence measure and, consequently, get rejected by the classifier. To address these false positives, we augment the classifier's prediction with a special condition that reverses only the reject decisions on a trajectory if the ratio of the returns (sum of rewards) between the demonstrated and clean trajectories is above a fraction 1-ϵ_p. The advantage of using the return ratio only is that it can be calculated quickly using the agent's reward function and demonstrated trajectory data, without requiring access to the agent's policy or value functions that require complex, time-consuming calculations.
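A sketch of the classifier and the return-ratio override is shown below, using the scikit-learn estimators listed in the experiments section; the value of ϵ_p and the helper signatures are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np
from sklearn.ensemble import VotingClassifier, AdaBoostClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

def train_accept_classifier(oc_values, fd_values, labels):
    """Voting ensemble over (occupancy measure, Frechet distance) features.

    labels : 1 (Accept) if R(tau) >= (1 - eps_p) * R_max, else 0 (Reject).
    """
    X = np.column_stack([oc_values, fd_values])
    clf = VotingClassifier(
        estimators=[('knn', KNeighborsClassifier(n_neighbors=2)),
                    ('svm', SVC(kernel='poly')),
                    ('tree', DecisionTreeClassifier(max_depth=9)),
                    ('ada', AdaBoostClassifier(n_estimators=50))],
        voting='hard')
    return clf.fit(X, labels)

def accept_decision(clf, oc, fd, ret_demo, ret_clean, eps_p=0.05):
    """Classifier decision plus the return-ratio override for benign divergent trajectories."""
    predicted_accept = clf.predict([[oc, fd]])[0] == 1
    return predicted_accept or (ret_demo >= (1.0 - eps_p) * ret_clean)
```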
Algorithm <ref> gives the pseudo-code algorithm for repairing trajectories with our options-based framework using the above divergence measures and trajectory accept/reject decision classifier. Given a set of guaranteed clean trajectories, T_clean, and a set of demonstrated trajectories, we first split trajectories from each set into M parts (line 3). For each part, we determine if it can be accepted into the training set for the imitation learning algorithm using the classifier's 'Accept' prediction or return ratio criteria (lines 5-7). If acceptable, the demonstrated trajectories are included with the clean trajectories for training the policy π^*_i for sub-task i via imitation learning (lines 8-9). The initiation and termination states for option i are also recorded along with policy π^*_i within option ω_i. An important requirement for using options is option chaining, which determines when to terminate option ω_i and how to select the next option ω_i+1, so that an end-to-end policy can be formed in the state-action space of the problem. While chaining is done at policy execution time <cit.>, we create a dictionary D_chain: S → S while creating the set of options to speed up execution. D_chain is constructed in Lines 15-18 of Algorithm <ref> by recording, for each state s_i ∈β_i, the state s_j ∈ I_i+1 that is closest to it in terms of L2 norm distance.
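The following Python sketch mirrors the main training loop described above (it is not the authors' implementation); the `decide` and `imitation_learn` callables are assumed stand-ins for the divergence-based classifier with the return-ratio override and for the imitation learning step, respectively.

```python
import numpy as np

def repair_and_learn(clean_trajs, demo_trajs, M, decide, imitation_learn):
    """Options-based trajectory repair (a sketch of the main training loop).

    clean_trajs, demo_trajs : lists of trajectories, each a list of (state, action) pairs
    M               : number of temporal parts (sub-tasks)
    decide          : callable(demo_part, clean_parts) -> bool (accept/reject)
    imitation_learn : callable(list of part trajectories) -> policy for one sub-task
    Returns the options (I_i, pi_i, beta_i) and the chaining dictionary D_chain.
    """
    def parts(traj):
        idx = np.array_split(np.arange(len(traj)), M)
        return [[traj[k] for k in chunk] for chunk in idx]

    options = []
    for i in range(M):
        clean_parts = [parts(t)[i] for t in clean_trajs]
        train_set = list(clean_parts)
        train_set += [parts(t)[i] for t in demo_trajs if decide(parts(t)[i], clean_parts)]
        pi_i = imitation_learn(train_set)
        init_states = [p[0][0] for p in train_set]    # I_i
        term_states = [p[-1][0] for p in train_set]   # beta_i
        options.append((init_states, pi_i, term_states))

    # D_chain: map each terminal state of option i to the nearest initiation state of option i+1.
    d_chain = {}
    for i in range(M - 1):
        next_init = np.array(options[i + 1][0])
        for s_end in options[i][2]:
            j = int(np.argmin(np.linalg.norm(next_init - np.asarray(s_end), axis=1)))
            d_chain[tuple(np.asarray(s_end))] = next_init[j]
    return options, d_chain
```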
§.§.§ Option Chaining
Algorithm <ref> shows the option chaining at run-time to enable executing successive policies for sub-tasks using options. As shown in Lines 8-12 of Algorithm <ref>, to determine if policy π_i in option ω_i is about to terminate, a state s_cur that is reached by the agent while executing π_i is checked for proximity within an L2 norm distance of ϵ_chain from any state in the termination set β_i. If any such states exist in β_i, the closest such state to s_cur, s_end, is selected (line 10) and s_cur is updated to a state in the initiation set of the next option ω_i+1 given by D_chain(s_end) (line 11). The current option is also updated to the option for the next sub-task (line 12).
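A minimal sketch of the run-time chaining loop is given below, assuming a Gym-style environment and options stored as (I_i, π_i, β_i) tuples; following the description above, the tracked state is mapped through D_chain when an option terminates, which is an idealisation of the chaining step rather than a physical reset of the environment.

```python
import numpy as np

def run_with_options(env, options, d_chain, eps_chain=0.1, max_steps=1000):
    """Execute the chained sub-task policies at run time (a sketch).

    options : list of (I_i, pi_i, beta_i) tuples, with pi_i a callable state -> action
    d_chain : terminal-state -> next-initiation-state mapping built during training
    """
    s_cur, i, total = env.reset(), 0, 0.0
    for _ in range(max_steps):
        _, pi_i, beta_i = options[i]
        s_cur, r, done, _ = env.step(pi_i(s_cur))
        total += r
        if done:
            break
        if i < len(options) - 1:
            # Terminate the current option when close to one of its termination states.
            dists = [np.linalg.norm(np.asarray(s_cur) - np.asarray(b)) for b in beta_i]
            if min(dists) < eps_chain:
                s_end = beta_i[int(np.argmin(dists))]
                # Track the matched initiation state of the next option, as in the chaining step.
                s_cur = np.asarray(d_chain[tuple(np.asarray(s_end))])
                i += 1
    return total
```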
§ THEORETICAL ANALYSIS
In this section, we formalize our trajectory repair technique described in Section <ref>. First, we show that using trajectory divergence measure alone gives a weak condition for the accept/reject decision for demonstrated trajectories. We then show that augmenting this decision with a rewards-based rule (following Algorithm <ref>, Line 7) guarantees that accept/reject decisions are consistent with benign and adversarial trajectories. Finally, we show that the above results remain valid for part trajectories so that they can be applied to our options-based, trajectory repair technique.
Definition 1. Dominated Policy. Given two policies π and π', we say π is dominated by π' if V^π/V^π' < 1-ϵ_p, where ϵ_p is a constant. We denote this in shorthand as π≺π'.
Definition 2. Divergent Trajectories. Let τ^π and τ^π' represent two trajectories that are sampled from two policies π and π'. We say τ^π and τ^π' are divergent if D(τ^π, τ^π') > δ, where D is a divergence measure between τ^π and τ^π' and δ is a constant.
Definition 3. Local Policy Repair Function. Given state s and two policies π and π' with π≺π', a local policy repair function is a transformation f_rep: S × [0, 1]^|A|→ [0, 1]^|A|, such that, D̃(π_s || f_rep(s, π'_s)) < ϵ_div, where D̃ is a distance measure between two probability distributions.[Note that f_rep(s, π'_s) transforms π' to a new policy, say π”]
Definition 4. ϵ-repair set: Given an initial policy π and a target policy π^tar, the ϵ-repair set for π, π^tar, ρ_π→π^tar, is a set of states such that the policy π' obtained by applying f_rep(s, π) to every s ∈ρ_π→π^tar satisfies V^π'/V^π^tar≥ 1-ϵ_p.
Theorem 1. If π≺π', then trajectories τ^π and τ^π' sampled from π and π' respectively are divergent.
Proof. (By contradiction.)
Let us suppose π≺π', but trajectories τ^π and τ^π' are not divergent, that is, D(τ^π, τ^π') ≤δ. Without loss of generality, we assume δ = 0. This implies that the divergence measure between τ^π and τ^π' is zero, and, consequently, ∀ i, s_i^τ^π = s_i^τ^π'. Now, from the definition of a dominated policy in Definition 1, it follows that V^π≠ V^π'. Recall, V^π= 𝔼[∑_i=0^H R(s_i, a_i): a_i = π(s_i)], and, so, there must be at least one time-step, i, at which R(s_i^π, a_i^π) ≠ R(s_i^π', a_i^π'). This implies that either s_i^π≠ s_i^π', or s_i^π = s_i^π' but a_i^π≠ a_i^π'. The latter case implies that different actions are taken at state s_i by policies π and π', which leads to different next states s_i+1^π≠ s_i+1^π' reached by policies π and π'. In both cases, there are at least two states on trajectories generated from π and π' that are distinct from each other, that is, s_i^τ^π≠ s_i^τ^π', for at least some i. This contradicts our assumption, ∀ i, s_i^τ^π = s_i^τ^π'. Hence proved. □
However, we note that the converse of Theorem 1 is not valid - when D(τ^π, τ^π') > δ, it is not guaranteed that policy π' will dominate π. We give an informal proof sketch: if D(τ^π, τ^π') > δ, there must be at least one i where s_i^τ^π≠ s_i^τ^π'. We cannot make any guarantees about the relative rewards at these states while using policies π' and π. If R(s_i^π, a_i^π) > R(s_j^π', a_j^π'), s_i ≠ s_j, we could get V^π > V^π', which would imply that π' is dominated by π. On the other hand, if R(s_i^π, a_i^π) < R(s_j^π', a_j^π'), π' dominates π. This means that trajectory divergence is a necessary, but not a sufficient condition for policy dominance. This necessitates an additional condition to select states from S to construct the ϵ-repair set. For this, we propose the following rule:
Rule 1. For a state s_i^π to be added to ϵ-repair set, ρ_π→π', R(s_i^π,π_s_i^π)/R(s_i^π', π_s_i^π') < 1-ϵ_p.
The above rule states that a state can be added to the ϵ-repair set if the reward at that state obtained by selecting an action using policy π is lower than the reward obtained by selecting an action using policy π'. Based on this rule, we have the following theorem about the convergence of trajectories based on their divergence measure.
Let M_max denote the maximum number of states in S where R(s_i^π,π_s_i^π) < R(s_i^π', π_s_i^π').
Theorem 2. If Rule 1 is applied M times to build the ϵ-repair set ρ_π→π', then as M → M_max, V^π/V^π'→ 1 and D(τ^π, τ^π') → 0.
Proof.[For legibility, we give the proof for ϵ_p = 0, it can be extended easily to ϵ_p = 0^+.] Recall that V^π= 𝔼[∑ R(s_i^π, π_s_i^π)] and V^π'= 𝔼[∑ R(s_i^π', π'_s_i^π')]. The difference between these two terms can be written as:
V^π' - V^π = 𝔼[∑ R(s_i^π', π'_s_i^π') - R(s_i^π, π_s_i^π)]
= 𝔼[R(s_1^π', π'_s_1^π') + ... + R(s_k^π', π'_s_k^π') + ... + R(s_M^π', π'_s_M^π')]
- 𝔼[R(s_1^π, π_s_1^π) + ... + R(s_k^π, π_s_k^π) + ... + R(s_M^π, π_s_M^π)]
We use Δ V_0^π' - π as a shorthand to denote the initial value of V^π' - V^π (before applying Rule 1), and, Δ V_1^π' - π as its value after applying Rule 1 once, Δ V_2^π' - π as its value after applying Rule 1 twice, and, so on. If we select states (s_k^π, s_k^π') via Rule 1 and apply f_rep(s_k^π, π_s_k^π), then because R(s_k^π', π_s_k^π') - R(s_k^π,π_s_k^π) > 0, therefore, Δ V_0^π' - π > Δ V_1^π' - π. Similarly, Δ V_1^π' - π > Δ V_2^π' - π. If we continue in this manner, V^π' - V^π becomes successively smaller and smaller. Finally, when f_rep() has been applied at most M_max times, Δ V_M_max^π' - π= 0. At this point, V^π' = V^π, or V^π/V^π' = 1. In the limiting case, when state pairs (s_k^π, s_k^π') have R(s_k^π', π_s_k^π') - R(s_k^π,π_s_k^π) ≈ 0^+, we get, V^π/V^π'→ 1.
In a similar manner, applying f_rep(s_k^π, π_s_k^π) makes π_s_k = π'_s_k, and, consequently, the same state is reached by taking action π'_s_k at s_k^π. This makes, D_0(τ^π, τ^π') > D_1(τ^π, τ^π') > D_2(τ^π, τ^π') > ..., where the subscript denotes the number of times Rule 1 and f_rep() have been applied. When f_rep() has been applied M_max times, we get D_M_max(τ^π, τ^π') = 0, and, in the limiting case, when state pairs (s_k^π, s_k^π') have R(s_k^π', π_s_k^π') - R(s_k^π,π_s_k^π) ≈ 0^+, D(τ^π, τ^π') → 0. □.
Lemma 3. If policies π and π', π≺π', are divided into sub-policies π_1, π_2, ..., π_M and π'_1, π'_2, ..., π'_M, then for at least one interval m ∈{1, ..., M}, π_m ≺π'_m.
Proof. (by contradiction) From Definition 1, if π≺π', then V^π < V^π'.[For simplicity and without loss of generality, we slightly relax Definition 1 by assuming ϵ_p=0, which gives V^π/V^π'<1] Suppose π≺π' and policies π and π' are divided into sub-policies, π_1, π_2 and π'_1, π'_2 respectively, and neither sub-policy of π is dominated. That is, V^π_1 ≥ V^π'_1 and V^π_2 ≥ V^π'_2. Rearranging and adding terms of the last two inequalities, we get, V^π_1 - V^π'_1 + V^π_2 - V^π'_2 ≥ 0, or, (V^π_1 + V^π_2) - (V^π'_1 + V^π'_2) ≥ 0. Substituting, (V^π_1 + V^π_2) = V^π and V^π'_1 + V^π'_2 = V^π', we get V^π - V^π' ≥ 0, or, V^π ≥ V^π', which contradicts the definition of π≺π'. Therefore, our assumption that V^π_1 ≥ V^π'_1 and V^π_2 ≥ V^π'_2 (neither sub-policy is dominated) is incorrect, and at least one of the sub-policies must be dominated. This proof can be easily extended beyond two sub-policies by induction. □
Theorem 4. Repairing a partial policy via options is faster than repairing the full-horizon policy.
Proof.
§ EXPERIMENTAL RESULTS
§.§ Experimental Setup
Environment. For evaluating our option-based adversarial RL algorithm, we used the LunarLander environment available within OpenAI Gym <cit.>. The problem consists of landing an airborne two-legged spacecraft at a specific location called the landing pad within a 2D environment akin to the surface of the Moon. The state space consists of an 8-dimensional vector given by the 2-D coordinates of the center of the spacecraft, 2-D linear velocity, orientation and angular velocity and whether both legs of the spacecraft are on the ground. The initial state of the spacecraft consists of random coordinates towards the top of the environment and random initial velocity. The action space of the spacecraft consists of four actions: to fire its main, left or right engines or do nothing (no-op). The agent receives a reward of 320 points for landing on both legs on the landing pad, a penalty of -100 points for crashing, while maneuvering the spacecraft incurs a penalty of -0.3 for using the main engine and -0.03 for the left or right engine. For our baseline reinforcement learning algorithm we used the deep Q-network (DQN) algorithm available via stable baselines <cit.>. The algorithms were implemented using the following open source libraries: Tensorflow 1.15, OpenAI Gym 0.18 and Stable Baselines 2.10.
Generating clean and adversarial trajectories. To generate clean trajectories we trained a Deep Q-network (DQN) algorithm in the LunarLander environment for 2.5 × 10^5 time-steps; all other algorithm hyper-parameters were set to the values given in RL Baselines Zoo <cit.>. We generated 1000 clean trajectories. These trajectories were then modified using the adversarial trajectory modification algorithms described in Section <ref>. For the adversarial attacks, we used η = {0.3, 0.6, 0.9}, γ_i = {0.3, 0.6, 0.9}, and attack location as {BEG, MID, END, FLP} giving rise to 36 different adversarial trajectory sets, each comprising 1000 trajectories.
We then trained policies via imitation learning with these adversarial trajectory sets.
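A minimal sketch of the clean-trajectory generation step is shown below, using the library versions listed above; unlike the setup described in the text, hyper-parameters other than the time-step budget are left at the Stable Baselines defaults here.

```python
import gym
from stable_baselines import DQN

# Train a clean policy on LunarLander (Gym 0.18, Stable Baselines 2.10, TF 1.x).
env = gym.make('LunarLander-v2')
model = DQN('MlpPolicy', env, verbose=0)
model.learn(total_timesteps=250_000)

def rollout(model, env):
    """One clean trajectory as a list of (state, action) pairs."""
    traj, obs, done = [], env.reset(), False
    while not done:
        action, _ = model.predict(obs, deterministic=True)
        traj.append((obs, action))
        obs, _, done, _ = env.step(action)
    return traj

clean_trajectories = [rollout(model, env) for _ in range(1000)]
```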
§.§.§ Trajectory Modification Attacks by Adversary
We considered two adversarial attack strategies for modifying expert demonstrations: 1) A directed attack strategy that requires access only to clean trajectories demonstrated by a benign expert, 2) A gradient-based attack strategy that requires access to the learner's policy network and rewards.
A directed attack targets sequential locations inside a trajectory starting from an attack start location, that could be either at the beginning (BEG), middle (MID) or end (END) of the trajectory. The actions at γ_i|τ_i| consecutive locations following the attack start location are then modified using an action modification function ϕ: A → A, given by ϕ(a_i,k) = max_a' ∈ AΔ_s(s', s_i,k+1), where s' = max_s” T(s_i,k, a', s”) is the most likely state reached by taking action a' at s_i,k. That is, ϕ replaces a_i,k with the action that takes the agent to a state s' that is farthest, along a state-distance metric Δ_s, from s_i,k+1. The directed attack is a straightforward, fast, yet effective attack as it does not require the adversary to have information about the learner's reward function. While it can be realized by the adversary with a lower attack budget, it can also be detected relatively easily by the learner.
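The sketch below implements the directed attack as described, with `forward_model` as a hypothetical stand-in for the most likely transition under T and `actions` as the discrete action set.

```python
import numpy as np

def directed_attack(traj, gamma_i, start_loc, forward_model, actions):
    """Directed attack on one trajectory (a sketch of the description above).

    traj          : list of (state, action) pairs
    gamma_i       : fraction of actions to modify
    start_loc     : 'BEG', 'MID' or 'END'
    forward_model : callable(state, action) -> predicted next state (stand-in for T)
    actions       : list of all discrete actions
    """
    n = len(traj)
    k = int(gamma_i * n)
    start = {'BEG': 0, 'MID': (n - k) // 2, 'END': n - k}[start_loc]
    attacked = list(traj)
    for t in range(start, min(start + k, n - 1)):
        s_t, _ = traj[t]
        s_next, _ = traj[t + 1]
        # Replace the action with the one whose predicted next state is farthest
        # (in L2 distance) from the clean next state.
        dists = [np.linalg.norm(np.asarray(forward_model(s_t, a)) - np.asarray(s_next))
                 for a in actions]
        attacked[t] = (s_t, actions[int(np.argmax(dists))])
    return attacked
```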
Gradient-based Attack. The gradient-based technique is inspired by the hot-flip (FLP) technique <cit.> for text perturbation. The technique identifies the minimum number of characters and their locations within a text string that need to be modified so that the text string gets mis-classified by a supervised learning-based model. We apply a similar idea for our gradient-based attack where the adversary identifies locations or indices in the trajectory that need to be modified, by considering the gradient of the objective or reward function with respect to the observation, denoted by ∂ r_i,k/∂ o_i,k, k = 1...|τ_i|, and swaps the observations corresponding to maximum and minimum gradients for γ_i|τ_i| iterations. The pseudo-code for the gradient-based attack is shown in Algorithm <ref>.
Note that for the gradient-based attack, the adversary needs to have knowledge about the learner's reward function. We note that researchers have proposed sophisticated but also more computationally complex attacks for modifying actions <cit.> that aim to reduce the reward received by an RL agent, although not within the context of imitation learning. Our attacks are computationally simpler but still achieve the desired effect of reducing the learner's rewards. The policy repair technique proposed in the paper could also be used in conjunction with any of these attacks.
Note that because the adversary modifies clean trajectories to generate adversarial trajectories, it knows the partition of 𝕋 into 𝕋_clean and 𝕋_adv and can calculate γ_i and η. The learner on the other hand does not know the partition and is not aware of these parameters.
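A sketch of the gradient-based attack is given below; it assumes the per-step gradients ∂ r_k/∂ o_k have already been obtained from the learner's model and are passed in, and the median reset is just one simple way to avoid re-selecting the same pair of locations.

```python
import numpy as np

def gradient_flip_attack(traj, reward_grads, gamma_i):
    """Gradient-based (FLP-style) attack: swap the observations at the locations with
    the largest and smallest reward gradients (a sketch, not Algorithm 3 verbatim).

    traj         : list of (state, action) pairs
    reward_grads : per-time-step gradients d r_k / d o_k, assumed to be precomputed
    """
    traj = [list(p) for p in traj]  # each element becomes [state, action]
    grads = np.array([float(np.linalg.norm(g)) for g in reward_grads])
    for _ in range(int(gamma_i * len(traj))):
        hi, lo = int(np.argmax(grads)), int(np.argmin(grads))
        traj[hi][0], traj[lo][0] = traj[lo][0], traj[hi][0]  # swap the observations
        grads[hi] = grads[lo] = np.median(grads)  # avoid re-picking the same pair
    return [tuple(p) for p in traj]
```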
§.§ Experimental Validation
We evaluated the performance of our proposed technique using the following hypotheses:
H1. Adversarial Perturbation Effect:
Increasing the amount of perturbation in the expert demonstration trajectories decreases the performance of conventional imitation learning.
H2. Trajectory Accept/Reject Decision based on Trajectory Divergence: A supervised learning based classifier that combines the occupancy measure and Fréchet distance metrics of demonstrated trajectories can identify parts of the trajectories that have been adversarially modified with acceptable accuracy.
H3. Trajectory Repair: The proposed options-based, trajectory repair technique (Algorithms <ref>) can avoid learning from parts of demonstrated trajectories that have been adversarially modified so that the learning agent's performance does not degrade.
H4. Explainability: Using our options-based, trajectory repair technique (Algorithms <ref>) it is possible to determine which portions of a demonstrated trajectory cause the learning agent's performance to degrade.
H5. Time Overhead: The option chaining technique (Algorithm <ref>) increases the time overhead of calculating the policy over a conventional RL-based technique by a small, acceptable amount.
To validate our Hypothesis 1 that the strength of the adversarial perturbation in the demonstrations reduces the rewards and learning time of the learned model, we evaluated the effect of gradually increasing the number of trajectories modified (η) and the fraction of modified actions within each trajectory γ_i. While it is intuitive that increasing either η or γ_i will reduce the learned model's rewards, we want to understand the degree to which each of these parameters affects its performance, while applying the perturbations at different locations in demonstrated trajectories. Figure <ref> shows the effect of changing the amount of perturbation in the expert demonstrations on the cumulative median rewards for different attack locations, BEG, MID and END, of the directed attack and for the gradient-based attack (FLP). We observe that when a small fraction of the expert demonstration set is changed (η =0.3), the rewards are affected nominally for all types of attacks. However, for higher values of η, the median rewards drop significantly between -200% and -400%. Within a fixed value of η, 0.6 or 0.9, we see that changing γ_i (no. of actions modified inside each perturbed trajectory) also has the effect of reducing the rewards as the expert demonstrations contain more incorrect actions to learn from. We also observe that the decrease in rewards is less for attack locations MID and END, as compared to BEG for the directed attack. This makes sense because misguiding the learned model to make mistakes via demonstrating incorrect actions early on makes the trajectories veer further off from the correct course and makes it difficult for the learned model to recuperate and return on-track. Finally, we show the number and standard deviation of episodes completed (dashed line) for the different perturbation amounts, averaged over the different attack types. The number of episodes increases with increase in perturbation as with more perturbation the agent fails quickly, right after starting the task, and restarts another episode, that is, there are many shorter, failed episodes with η = 0.6, 0.9 than with no or low perturbation (η = 0, 0.3). Overall, these results validate Hypothesis 1 while showing that η (fraction of trajectory set that is modified) has a greater effect than γ_i (fraction of actions modified inside each modified trajectory) on the successful task completion, and, consequently, the rewards of the learned model.
For validating Hypothesis 2, we trained a classifier via supervised learning and evaluated its prediction accuracy and F1-score for trajectory accept/reject decisions. For the training set of the classifier we sampled 400 trajectories, corresponding to nearly 100,000 state-action pairs. The training trajectories were either clean or perturbed with perturbation strengths drawn uniformly from η, γ_i ∈{0.3, 0.6, 0.9} and perturbation locations drawn uniformly from {BEG, MID, END, FLP}. Our training set is not very large[We used 400 trajectories in the training set as the sample diversity did not increase beyond this value for our tested LunarLander environment.] and to improve classification accuracy with such smaller training sets, ensemble learning <cit.>, which combines the predictions from multiple classifiers, has been proposed as a suitable technique. We used an ensemble with the following individual classifiers: K-nearest neighbors with the number of neighbors set to 2, a support vector machine with polynomial kernel function, a decision tree with max-depth of 9, and an AdaBoost classifier with the number of estimators set to 50. For the final prediction, we used ensemble voting with uniform weights given to individual classifier predictions followed by a majority voting between them. The classifier algorithms were implemented using the scikit-learn 1.2 library and the hyper-parameters in the different classifier algorithms were set to their default values given in the library. Figure <ref> shows a profile of the learned classifier for different occupancy measure and Fréchet distance values. It indicates that the general rule learned by the classifier is to reject trajectories with very low (near zero) occupancy measure value or very high (>∼ 1.5) Fréchet distance values, while for intermediate values the classification boundary exhibits a polynomial dependency on occupancy measure and Fréchet distance values. We tested the classifier with a test set of 1000 different trajectories that were either full length (end-to-end), or part trajectories that were either half or a third of the full length, sampled from various portions of trajectories. The classification accuracies and F1 scores for different trajectories are given in Table <ref>. For all of the tested trajectories, the false negatives (accepting an adversarial trajectory) were below 10%. Overall, these results validate Hypothesis 2 by showing that the classifier can be used as a reliable method to identify and make accept/reject decisions for demonstrated trajectories.
For validating Hypothesis 3, on trajectory repair, we experimented with 2 and 3 partitions of the trajectory. For each partition, we considered that the adversary had perturbed the trajectory at different locations BEG, END and FLP. Table <ref> shows the results for trajectory repair using our proposed technique when trajectories are divided into two equal parts, including the decision made by the trajectory accept/reject classifier for full and two-part trajectories (with and without options), and the rewards before and after trajectory repair. Note that the first three columns of Table <ref> (perturb location, η and γ_i) are given for legibility and not known by the learning agent or repair technique. The results show that for all cases adversarially modified trajectories can be identified and repaired at the portions that were modified, while preserving the portions that were not modified, and preventing the performance of the learning agent from degrading as shown by the median rewards similar to the clean trajectory reward values. The main impact of our trajectory repair technique is seen for perturb location END, as part trajectories are repaired to improve the reward to values similar to clean trajectory rewards. Moreover, using trajectory repair, we are able to detect and use the clean part of the trajectory, thereby improving sample efficiency. For perturbation location BEG, trajectories are mostly rejected because, as the decisions are sequential, modifying actions early on in an episode results in incorrect or sub-optimal actions downstream in the episode. For η=0.3, for all perturbation locations, we see that some part trajectories get rejected by the classifier when not using the return ratio condition in Line 7 of Algorithm <ref> (marked with an asterisk in Table <ref>). This happens because for our LunarLander task, different episodes start from different initial locations and their occupancy measure and Fréchet distance values show a larger divergence in the initial part of episodes. However, for all these cases, we note that the reward is not degraded using the return ratio condition. The gradient-based attack, FLP, is more difficult to detect as the perturbation locations chosen by the adversary in the trajectories are selected strategically and are not successive, unlike in the directed attack. The bottom part of Table <ref> shows that our technique works successfully for gradient-based attacks as well and is able to discern and reject modified trajectories and learn only from the clean parts of trajectories, when available. Our experiments with perturb location MID (not reported here) showed similar results as BEG and END - parts of trajectories before the perturb location were accepted by the classifier while those following the perturb location were rejected as they were downstream and affected by the perturbation; rewards in all cases were restored to values similar to clean trajectories following trajectory repair.
Table <ref> shows the trajectory repair results when the trajectories are divided into three parts. We report the results for gradient-based (FLP) attacks only as they are more difficult to detect. Here too, we see that the trajectory repair technique is able to identify parts of trajectories that have lower perturbation and can be used for learning without degrading the task performance, as shown by the restored reward values for these cases.
§.§ Ablation Experiments
We performed two ablation experiments by removing certain features of our algorithms to understand their effect on the results.
Effect of Return Ratio Condition. In the first experiment, we tested the effect of using the return ratio condition to override a reject decision made by the trajectory classifier (Line 7, Algorithm <ref>). The return ratio is provided as a guard rail against false positives from the classifier, so that correct, benign trajectories that show a new way to perform the task, and therefore have a higher divergence measure from known, clean trajectories, do not get discarded. For this experiment, we varied the perturbation strengths, η={0.3, 0.6, 0.9}, the perturbation locations, {BEG, MID, END, FLP}, and the number of trajectory parts, {2, 3}, and recorded the average fraction of trajectories whose decision changed from 'Accept' to 'Reject' when the return ratio condition was not used. For η={0.6, 0.9}, for all perturbation locations, none of the classifier decisions changed after removing the return ratio condition, for both 2- and 3-part trajectories. This indicates that for higher perturbation strengths, false positives are absent or rare and the return ratio condition is not triggered. For η=0.3, our results are shown in Figure <ref>. For 2-part trajectories, 50% of the trajectory is discarded, while for 3-part trajectories between 50% and 90% of the trajectory gets discarded. Although discarding trajectories deteriorates sample efficiency, it does not affect the learning performance, as the difference in rewards with and without the return ratio condition was nominal (within 1-3%). In general, the findings from this experiment indicate that when the perturbation strength is low and the divergence measure struggles to support an accept/reject decision, the return ratio condition is important to prevent valid but divergent trajectories from being discarded.
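A hypothetical sketch of how such a guard could wrap the classifier output is given below; the threshold value and helper names are illustrative assumptions, not the exact condition of Algorithm <ref>:

def accept_part_trajectory(features, part_return, clean_returns,
                           classifier, ratio_threshold=0.9):
    """Accept if the classifier accepts, or if the part trajectory's return is
    close enough to typical clean returns (return-ratio guard against false rejects)."""
    if classifier.predict([features])[0] == 1:  # classifier says accept
        return True
    # Keep benign-but-divergent part trajectories whose return ratio clears the threshold.
    mean_clean_return = sum(clean_returns) / len(clean_returns)
    return part_return / mean_clean_return >= ratio_threshold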
Overhead Introduced by Options. Options are the key component of our technique, as they facilitate partitioning trajectories and retaining and learning from only the usable part trajectories. To determine the feasibility of our technique, two important questions related to using options need to be addressed: does using options affect the learning performance in terms of rewards, and can options be used without degrading the rewards? To answer these questions, we performed our next ablation experiment. We trained the agent to play the LunarLander game using full trajectories versus part trajectories via options, and recorded the difference in median rewards between the two settings for different perturbation strengths, η={0.3, 0.6, 0.9}, perturbation locations, {BEG, MID, END, FLP}, and numbers of trajectory parts, {2, 3}. Our results are shown in Figure <ref>: the black lines at one end of the bars show the reward without options, and the bars show the change in reward when using options. When there is little perturbation (η=0.3), using options changes the rewards negligibly, by less than 1-2%. When the perturbation increases to η={0.6, 0.9}, the rewards with options either increase or decrease relative to the rewards without options. This indicates that perturbed trajectories make it difficult to chain options. Also, because options have to be chained between every pair of trajectory parts, the decrease in rewards from using options becomes more pronounced as the number of trajectory parts increases. However, when options are repaired using Algorithm <ref>, the part trajectories can again be chained efficiently, and the rewards are restored to higher values, similar to those learned from clean trajectories. Overall, these experiments show that our approach of repairing part trajectories with options does not introduce significant overhead in the imitation learning algorithm.
§ CONCLUSIONS AND FUTURE WORK
In this paper, we proposed a novel technique that uses options to selectively include portions of demonstrated trajectories when training the policy of an imitation-learning agent in the presence of demonstrations given by potentially adversarial experts. Our results show that, using our technique, the learned policy can avoid learning from portions of trajectories that would degrade the agent's reward. Our technique provides two main advantages: it improves the robustness of the policy training as well as the sample efficiency of using the demonstrations, without incurring a significant overhead in policy training time. Closely related to our research are the fields of opponent modeling, cross-play, and inter-play, where agents build models of their opponents' behaviors from observations and train their policies by playing against those models. A potential problem in opponent modeling is deception, where opponents demonstrate incorrect behaviors via trajectories to misguide an agent. Our proposed trajectory repair technique could be used in such situations to identify deceptive trajectories by comparing them with trajectories of known or rational opponent behaviors and prevent learned policies from being misled.
One requirement of our technique is that a human must identify a base set of clean policies with which the agent's task is performed successfully. While most real-life domains require human subject matter experts to provide such feedback, techniques like inverse reinforcement learning, which automatically update the reward function to improve the agent's performance, could be used to reduce the technique's reliance on human expertise. Another aspect of our work is that it assumes that clean trajectories have low divergence between them. For tasks that can be solved in different ways, clean trajectories representing different solutions might have high divergence from each other. In such cases, the different ways of performing the task could be grouped into clean-trajectory clusters, and the divergence of demonstrated trajectories could be determined with respect to each cluster to make the accept/reject decision for our technique.
We have used behavior cloning as our imitation learning algorithm. More sophisticated imitation learning algorithms like the data aggregation (DAgger) algorithm <cit.> or the generative adversarial imitation learning from options (GAIfO) <cit.> could be used in place of behavior cloning. These algorithms are likely to improve the performance of the imitation learning portion, and our proposed options-based technique could still be used with them to partition demonstrated trajectories and identify acceptable partial trajectories for learning.
Our proposed technique was aimed at enabling an agent to use imitation learning in the presence of adversarial trajectories. It is likely that a smart adversary will discover that its attacks are not effective against a learning agent that has used our technique to avoid accepting adversarial trajectories. It could then craft new types of adversarial trajectory attacks to evade the trajectory accept/reject classifier. Such situations could be modeled as a higher-level adversarial game between the adversary and the learning agent, and techniques from hierarchical reinforcement learning <cit.> and Bayesian games <cit.> could be used to solve them.
We envisage that further investigation of the options-based technique for adversarial imitation learning described in this paper will lead to new insights into the problem of learning from demonstrations and could be used by a learning agent to quickly and robustly learn effective operations in new, open environments from clean as well as adversarial trajectories.
§ ACKNOWLEDGEMENTS
This work was supported by the U.S. Office of Naval Research as part of the FY21 NRL Base Funding 6.1 project Game Theoretic Machine Learning for Defense Applications.
abbrv
|
http://arxiv.org/abs/2306.06552v1
|
20230611005057
|
School Bullying Results in Poor Psychological Conditions: Evidence from a Survey of 95,545 Subjects
|
[
"Na Zhao",
"Shenglong Yang",
"Qiangjian Zhang",
"Jian Wang",
"Wei Xie",
"Youguo Tan",
"Tao Zhou"
] |
physics.soc-ph
|
[
"physics.soc-ph"
] |
School Bullying Results in Poor Psychological Conditions: Evidence from a Survey of 95,545 Subjects
Na Zhao, Shenglong Yang, Qiangjian Zhang, Jian Wang, Wei Xie, Youguo Tan, Tao Zhou
===================================================================================================
To investigate whether bullying and psychological conditions are correlated, this study analyzed a survey of primary and secondary school students from Zigong City, Sichuan Province. A total of 95,545 students completed a personal information questionnaire, the Multidimensional Peer-Victimization Scale (MPVS), and eight other scales pertaining to various psychological problems. The data showed that 68,315 (71.5%) participants experienced school bullying to varying degrees, indicating the prevalence of bullying among adolescents. The chi-square tests revealed a strong correlation between school bullying and psychological conditions. This correlation was further explored through multivariate logistic regression, showing that students who experienced mild bullying had a 3.10 times higher probability of emotional and behavioral problems, 4.06 times higher probability of experiencing prodromal symptoms of mental illness, 4.72 times higher probability of anxiety, 3.28 times higher probability of developing post-traumatic stress disorder (PTSD), 4.07 times higher probability of poor sleep quality, 3.13 times higher probability of internet addiction, 2.18 times higher probability of poor mental health, and 3.64 times higher probability of depression than students who did not experience bullying. The corresponding probabilities for students who experienced severe bullying were 11.35, 17.35, 18.52, 12.59, 11.67, 12.03, 4.64, and 5.34 times higher, respectively. In conclusion, school bullying and psychological conditions are significantly correlated among primary and secondary school students, and the more severe the bullying, the higher the probability of suffering from psychological problems.
Keywords: school bullying; mental health; adolescents; logistic regression analysis
§ INTRODUCTION
The teenage years are a critical period for physical growth and development, mental maturity, personality formation, and the attainment of scientific and cultural knowledge. Psychological problems, such as depression, can lead to several negative outcomes in teenagers, including poor academic performance, alcohol abuse, and suicide <cit.>. The reasons for poor psychological conditions are complex, involving personal, family, and school factors. Schools, as large gathering places for primary and secondary school students, allow students to frequently interact with each other. These interactions, including school bullying, can significantly impact students’ psychological well-being <cit.>.
The increasing number of cases of school bullying in the 21st century has drawn widespread social attention to related studies <cit.>. Early studies showed that psychological conditions are strongly related to school bullying, which is defined as continued attacks by teachers or students on certain students within the school campuses and surrounding areas <cit.>. Traditional bullying can be divided into four categories: physical victimization, verbal victimization, social manipulation, and attacks on property <cit.>. School bullying is prevalent among teenagers globally: the proportion of teenagers affected by school bullying across different countries varies between 4.8% and 45.2% <cit.>.
Researchers have conducted in-depth studies on school bullying. For example, Cosma et al. <cit.> compared the cross-national trends of school bullying to examine the differences between cyberbullying and traditional bullying. Chudal et al. <cit.> investigated the prevalence of bullying and its various types among 21,688 13–15-year-old adolescents in developing countries. Pengpid et al. <cit.> studied the relationship between school bullying and psychological problems among adolescents in Southeast Asian countries. Pörhölä et al. <cit.> surveyed 8,497 students from several countries to examine the differences between various types of school bullying. Chen et al. <cit.> analyzed 4,051 bullied adolescents, revealing the relationship between school bullying and post-traumatic stress disorder (PTSD). Meanwhile, Zhou et al. <cit.> systematically studied the correlation between school bullying and poor sleep quality. However, the existing body of literature on the relationship between school bullying and psychological problems has shortcomings, including small sample sizes, lack of attention paid to adolescents in developing countries, lack of comparative studies on various types of bullying, and lack of analysis on multiple dimensions of psychological problems.
The present study aimed to fill this gap in the literature by surveying a large sample of students from developing countries. Furthermore, to offer detailed findings, this study sought to cover the four types of school bullying and distinguish between different degrees and different types of school bullying.
§ METHODS
§.§ Participants
This study involved a questionnaire survey of 95,545 primary and secondary school students. The sample included 27,128 (29%) primary school students from 71 primary schools, 43,124 (45%) junior high school students from 73 junior high schools, and 25,203 (26%) senior high school students from 14 senior high schools in Zigong City, Sichuan Province, China. The participants were aged 6–22 years (average age: 13.47 years), with primary school students mainly 6–12 years, junior high school students mainly 13–15 years, and senior high school students mainly 16–18 years.
§.§ Ethical Statement
All participants were informed of the purpose of the study, and participation was voluntary. Each participant and her/his parents signed written consent forms. To protect the participants’ privacy, the survey data were anonymized.
§.§ Questionnaire Survey Data
Basic information. Using the Basic Information Survey Questionnaire, the study collected data on participants’ age, school, living environment, family situation, and family environment, to analyze the reasons for their psychological problems from personal and family perspectives.
School bullying. The Multidimensional Peer-Victimization Scale (MPVS) <cit.> was used to investigate the prevalence of school bullying, including physical and verbal victimization, property damage, and insults from peers during the participants' growth process. The types of victimization included hitting, insulting, teasing, slandering, exclusion, spreading rumors, and so on. The participants were required to truthfully fill out the MPVS to reflect their experiences of school bullying.
Psychological problems. We assessed participants’ overall psychological well-being by measuring their psychological health status across eight dimensions. These dimensions corresponded to validated scales, including the Strengths and Difficulties Questionnaire - Short Version (SDQ-S) <cit.> for emotional and behavioral problems, the Prodromal Questionnaire - 16 Items (PQ-16) <cit.> for psychotic risk, the Generalized Anxiety Disorder 7-Item Scale (GAD-7) <cit.> for anxiety, the Children’s Revised Impact of Event Scale – 13 Items (CRIES-13) <cit.> for stress response, the Pittsburgh Sleep Quality Index (PSQI) <cit.> for sleep quality, the nine-item Internet Gaming Disorder Scale - Short Form (IGDS9-SF) <cit.> for internet addiction, the Patient Health Questionnaire - 9 item (PHQ-9) <cit.> for depression, and the Warwick-Edinburgh Mental Well-being Scale (WEMWBS) <cit.> for mental well-being. After participants completed the aforementioned scales, we evaluated participants’ psychological conditions based on the scale scores.
§.§ Severity of Psychological Problems
The students were classified according to the diagnostic criteria in Table <ref>, based on the results of the psychological health survey. All subsequent investigations in this study were conducted based on those criteria.
§.§ Statistical Methods
Based on the classification of psychological problems described above, we performed chi-square tests to assess the significance of the correlations between different variables, including the correlations between school bullying and the presence of psychological problems and between the degree of bullying and the presence of psychological problems. After confirming the correlations between school bullying and psychological problems, we further performed logistic regression to quantitatively analyze those correlations. For all results of the statistical analyses, a p-value less than 0.05 was considered to indicate statistical significance.
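For illustration, a chi-square test of independence of this kind can be computed from a contingency table of bullying status versus outcome; the counts below are placeholders rather than survey values:

import numpy as np
from scipy.stats import chi2_contingency

# Rows: not bullied, mildly bullied, severely bullied.
# Columns: without / with the psychological problem (illustrative counts only).
table = np.array([
    [25000, 2200],
    [48000, 8400],
    [9000, 2900],
])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.3g}")  # p < 0.05 -> significant correlation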
§ RESULTS
§.§ Overview of School Bullying
Of the 95,545 primary and secondary school students surveyed, 68,342 (71.6%) reported that they experienced school bullying of varying types and degrees. Based on the MPVS scores, we classified students who experienced school bullying into two categories by the degree of victimization: mild victimization (MPVS score less than or equal to 16) and severe victimization (MPVS score larger than 16). Among those who experienced school bullying, 56,372 and 11,970 students experienced mild and severe bullying, accounting for 59% and 12.5% of the total sample, respectively (see Table <ref>). Table <ref> reveals that there are overlaps among different types of bullying, namely some students may have experienced multiple types of bullying, as indicated in Figure <ref>.
§.§ Overall Impact of Being Bullied
Table <ref> lists the eight dimensions of psychological problems. Among all participants, the prevalence of severe emotional and behavioral problems (SDQ-S) was 11.4%; 16.5% were found to be at risk of mental illness (PQ-16); 12.9% reported moderate to severe anxiety (GAD-7); 22.6% showed relatively strong stress responses, with 4.9% at risk of developing PTSD (CRIES-13); 6.1% of students had poor or very poor sleep quality (PSQI); 7.7% experienced internet addiction (IGDS9-SF); 16.2% had moderate to severe depression (PHQ-9); and 28.1% had low or very low levels of mental health (WEMWBS). These findings indicate that the psychological conditions of Chinese primary and secondary school students are poor as a whole. The results of the chi-square tests showed significant correlations between experiencing bullying and all eight psychological problems, that is, students who were bullied had significantly poorer psychological conditions than those who were not.
§.§ The Impact of Bullying Severity
After establishing significant correlations between bullying and mental health, we further analyzed correlations between the bullying severity and the presence of psychological problems. As shown in Table <ref>, students who experienced mild bullying and students who experienced severe bullying significantly differed in terms of psychological conditions. The chi-square tests revealed that the severity of bullying is significantly correlated with seven psychological dimensions, except for the mental well-being, demonstrating that a higher severity of bullying experiences is associated with poorer psychological conditions.
To summarize the results from Tables <ref> and <ref>, students who experienced school bullying had generally poorer psychological conditions and a significantly higher risk of developing mental illnesses compared to students who did not experience it. For example, the proportion of students at risk of prodromal psychosis was 4.5% among those who never experienced school bullying, while it was 21.2% among those who experienced it, with 16.1% and 45.3% for those who experienced mild and severe bullying, respectively. All dimensions of psychological problems, with the exception of mental well-being, showed a consistent pattern of a higher risk of developing mental illness in those who experienced more severe bullying.
§.§ The Impact of Different Types of Bullying
Among all participants, 30,720, 57,104, 43,028, and 54,729 subjects reported experiencing physical victimization, verbal victimization, social manipulation, and attacks on property, respectively, accounting for 44.9%, 83.6%, 73.7%, and 80% of all victims. Some participants reported experiencing multiple types of bullying. However, as shown in Table <ref>, there was no significant difference in the psychological conditions of victims in terms of the type of bullying. The chi-square tests confirmed no significant correlation between the type of school bullying and psychological conditions. In a word, the occurrence of psychological problems among victims is related to the extent of their exposure to bullying but not the type of bullying.
§.§ Differences in School Bullying at Different Grade Levels
Based on the grade levels (primary, junior high, or senior high school) of the students, we analyzed the differences in the proportion of students who experienced school bullying, as well as the types and degrees of bullying. As shown in Table <ref>, the proportions of students in primary school, junior high school, and high school who experienced school bullying were 71.6%, 67.4%, and 71.8%, respectively. As confirmed by the chi-square tests, there were no significant differences between the three groups.
In primary school, 16,470 (60.7%) students experienced physical victimization. In contrast, the proportions of such students in junior high school and high school were significantly lower, at 32.3% and 24.4%, respectively. A significantly higher proportion of primary school students experienced severe bullying than junior high school and senior high school students, and a significantly higher proportion of junior high school students experienced severe bullying than senior high school students. The chi-square tests validated the statistical significance of the above two correlations. Based on these findings, we recommend paying particular attention to school bullying incidents in primary schools, especially those involving physical victimization.
§.§ Relationship Between School Bullying and the Probability of Psychological Illnesses
Having established the significant correlations between school bullying experience and psychological problems, we sought to demonstrate the impact of school bullying experience on the risk of developing psychological illnesses in a more intuitive way. To achieve this, we divided the participants into two groups: those with and those without the psychological illness for each dimension, according to Table <ref>. Next, we performed a logistic regression on each dimension. Table <ref> presents the results, with the odds ratio indicating the likelihood of having the corresponding psychological illness for the group that experienced school bullying compared to the group that did not. For example, as shown in Table <ref>, the odds of developing emotional and behavioral disorders for students who experienced mild and severe school bullying were 3.10 times and 11.35 times higher, respectively, than for students who did not experience bullying. These results indicate that experiencing school bullying significantly increases the probability of developing psychological illnesses, and this probability increases with the severity of bullying.
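A minimal sketch of how such odds ratios can be read off a fitted logistic regression with statsmodels is shown below; the data frame, column names, and generated values are placeholders, not the survey data:

import numpy as np
import pandas as pd
import statsmodels.api as sm

# Placeholder data: binary outcome and dummy indicators for mild/severe
# bullying (reference category: not bullied).
rng = np.random.default_rng(0)
n = 5000
group = rng.choice(["none", "mild", "severe"], size=n, p=[0.3, 0.55, 0.15])
df = pd.DataFrame({
    "mild_bullying": (group == "mild").astype(int),
    "severe_bullying": (group == "severe").astype(int),
})
p_ill = 1 / (1 + np.exp(-(-2.0 + 1.1 * df["mild_bullying"] + 2.4 * df["severe_bullying"])))
df["depression"] = (rng.random(n) < p_ill).astype(int)

X = sm.add_constant(df[["mild_bullying", "severe_bullying"]])
result = sm.Logit(df["depression"], X).fit(disp=False)
odds_ratios = np.exp(result.params)    # odds ratios relative to the non-bullied group
conf_int = np.exp(result.conf_int())   # 95% confidence intervals
print(pd.concat([odds_ratios, conf_int], axis=1))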
§ CONCLUSION AND DISCUSSION
In this study, we investigated the status of school bullying among primary and secondary school students in Zigong City. The results showed that of the 95,545 surveyed students, 71.6% (68,342) experienced school bullying of varying degrees, and among students who were bullied, 17.5% (11,970) experienced severe bullying. This indicates that school bullying is a prevalent issue among primary and secondary school students, with a higher incidence among the former. Additionally, the severity is more pronounced among primary school students than among their secondary school counterparts. There are notable differences in the prevalence of school bullying across different countries. In a previous study, the percentage of students involved in school bullying across different countries ranged from 6.3% to 45.2% <cit.>. In comparison, our investigation reported a much higher percentage. This difference may be attributed to the relatively higher levels of social development and the implementation of preventive laws and regulations against school bullying in the European countries. Meanwhile, China, as a developing country, and Zigong, as an underdeveloped city in China, need to make greater efforts to address this issue.
The chi-square tests and logistic regression showed that the occurrence and severity of school bullying are significantly correlated with all eight psychological dimensions. Compared with students who never experienced school bullying, students who experienced school bullying were more likely to have psychological problems, and the severity of bullying was positively correlated with a higher likelihood of psychological problems. These findings are consistent with previous research, which suggests that victims of school bullying may struggle to solve life problems and often have negative attitudes and poor interpersonal relationships <cit.>. Other studies indicate that the social pressure experienced by victims may result in a strong sense of threat, which may lead to psychological problems such as depression, anxiety, fear of attending school, and feelings of insecurity and dissatisfaction in school <cit.>. Additionally, the vigilance and stress response of teenagers during puberty may increase significantly, potentially increasing the risk of psychological problems in the conflicting environment <cit.>.
Studying the relationship between school bullying and psychological conditions has important social value and practical significance. It provides a better understanding of the origin of school bullying and how school bullying impacts students' mental health, thereby offering a scientific basis for preventing and reducing school bullying. It also helps in treating bullying-related psychological problems.
School bullying is a serious social problem that considerably affects the physical and mental health of victims, leading to emotional and behavioral problems, anxiety, depression, PTSD, and even extreme behaviors such as suicide. To reduce the occurrence of school bullying, we propose the following suggestions. First, school administrators, teachers, and parents should clearly understand what bullying is and its impacts on physical and mental health. They should strengthen education on preventing bullying, closely monitor students, and pay attention to students' daily lives. Second, both society and schools should take bullying seriously, promptly stop bullying incidents, and provide necessary help to students with psychological problems. Third, after a bullying incident occurs, school administrators and parents should immediately attend to the safety of the victims, investigate the causes of the incident, establish its detailed course and nature, help the victims overcome their psychological trauma, improve the victims' self-protection abilities, and prevent further occurrences. Fourth, educational managers and school administrators should prioritize creating a safer and more inclusive campus. Preventing school bullying should be a core part of developing a safe school. Technological means, routine surveys, and trained inquiry methods should be used to promptly identify and address potential school bullying incidents.
The present study has several strengths worth mentioning. First, the sample size was large, with 95,545 students surveyed, providing significant and stable statistical results. Second, the study was conducted in a developing country, making the findings more relevant to other developing countries. Third, the study differentiated between the four types and the severity of school bullying, and thus drew detailed conclusions. Finally, the study utilized multiple scales and questionnaires to measure various psychological problems, allowing for a comprehensive analysis of the quantitative relationships between school bullying and various psychological illnesses.
This study also has certain limitations. Firstly, the cross-sectional design precludes causal inferences about the relationship between school bullying and psychological problems and only suggests correlations. The focus of our future research should be on establishing causal relationships between school bullying and psychological problems through longitudinal studies. Secondly, the data were self-reported by the students, which may have subjective biases, potentially overestimating or underestimating the correlations. Future research should employ more objective measures to collect data on students’ psychological issues and bullying behaviors. Thirdly, our research focused solely on a specific city, and while it may have some representativeness, the comparative analysis of other Chinese cities during the same period is insufficient. Finally, it is worth noting that the occurrence of school bullying and psychological illnesses may not be evenly distributed among the population, which may have led to skewed probability estimates. Moreover, the COVID-19 pandemic may have largely impacted the findings. Therefore, future studies should consider more in-depth comparative analysis, using data from post-pandemic investigations, to provide a more accurate understanding of the relationship between school bullying and psychological problems among school students.
[1] Shuqing Xu, Jun Ren, Fenfen Li, Lei Wang, and Shumei Wang. School bullying among vocational school students in China: Prevalence and associations with personal, relational, and school factors. Journal of Interpersonal Violence, 37(1-2):NP104–NP124, 2022.
[2] Niklas Hamel, Susanne Schwab, and Sebastian Wahl. Bullying: Group differences of being victim and being bully and the influence of social relations. Studies in Educational Evaluation, 68:100964, 2021.
[3] Dan Olweus. School bullying: Development and some important challenges. Annual Review of Clinical Psychology, 9:751–780, 2013.
[4] S. Hymel and S. M. Swearer. Four decades of research on school bullying: An introduction. American Psychologist, 70(4):293, 2015.
[5] Helen Mynard and Stephen Joseph. Development of the Multidimensional Peer-Victimization Scale. Aggressive Behavior, 26(2):169–178, 2000.
[6] Wendy Craig, Yossi Harel-Fisch, Haya Fogel-Grinvald, Suzanne Dostaler, and William Pickett. A cross-national profile of bullying and victimization among adolescents in 40 countries. International Journal of Public Health, 54(Suppl 2):216–224, 2009.
[7] Alina Cosma, Sophie D. Walsh, Kayleigh L. Chester, Mary Callaghan, Michal Molcho, Wendy Craig, and William Pickett. Bullying victimization: Time trends and the overlap between traditional and cyberbullying across countries in Europe and North America. International Journal of Public Health, 65(1):75–85, 2020.
[8] Roshan Chudal, Elina Tiiri, Anat Brunstein Klomek, Say How Ong, Sturla Fossum, Hitoshi Kaneko, Gerasimos Kolaitis, Sigita Lesinskiene, Liping Li, Mai Nguyen Huong, et al. Victimization by traditional bullying and cyberbullying and the combination of these among adolescents in 13 European and Asian countries. European Child & Adolescent Psychiatry, 31(9):1391–1404, 2022.
[9] Supa Pengpid and Karl Peltzer. Bullying victimization and externalizing and internalizing symptoms among in-school adolescents from five ASEAN countries. Children and Youth Services Review, 106:104473, 2019.
[10] Maili Pörhölä, Kristen Cvancara, Esta Kaal, Kristina Kunttu, Kaja Tampere, and Maria Beatriz Torres. Bullying in university between peers and by personnel: Cultural variation in prevalence, forms, and gender differences in four countries. Social Psychology of Education, 23(1):143–169, 2020.
[11] Yoke Yong Chen and Ask Elklit. Exposure to bullying among adolescents across nine countries. Journal of Child & Adolescent Trauma, 11(1):121–127, 2018.
[12] Ying Zhou, Lan Guo, Ci-Yong Lu, Jian-Xiong Deng, Yuan He, Jing-Hui Huang, Guo-Liang Huang, Xue-Qing Deng, and Xue Gao. Bullying as a risk for poor sleep quality among high school students in China. PLoS ONE, 10(3):e0121602, 2015.
[13] Robert Goodman, Tamsin Ford, Helen Simmons, Rebecca Gatward, and Howard Meltzer. Using the Strengths and Difficulties Questionnaire (SDQ) to screen for child psychiatric disorders in a community sample. The British Journal of Psychiatry, 177(6):534–539, 2000.
[14] T. J. Miller, T. H. McGlashan, S. W. Woods, K. Stein, N. Driesen, C. M. Corcoran, R. Hoffman, and L. Davidson. Symptom assessment in schizophrenic prodromal states. Psychiatric Quarterly, 70(4):273–287, 1999.
[15] R. Tennant, L. Hiller, R. Fishwick, S. Platt, S. Joseph, S. Weich, J. Parkinson, J. Secker, and S. Stewart-Brown. The Warwick-Edinburgh Mental Well-being Scale (WEMWBS): Development and UK validation. Health and Quality of Life Outcomes, 5(1):63, 2007.
[16] Sean Perrin, Richard Meiser-Stedman, and Patrick Smith. The Children's Revised Impact of Event Scale (CRIES): Validity as a screening instrument for PTSD. Behavioural and Cognitive Psychotherapy, 33(4):487–498, 2005.
[17] Janet S. Carpenter and Michael A. Andrykowski. Psychometric evaluation of the Pittsburgh Sleep Quality Index. Journal of Psychosomatic Research, 45(1):5–13, 1998.
[18] D. J. Buysse, C. F. Reynolds III, T. H. Monk, S. R. Berman, and D. J. Kupfer. The Pittsburgh Sleep Quality Index: A new instrument for psychiatric practice and research. Psychiatry Research, 28(2):193–213, 1989.
[19] H. M. Pontes and M. D. Griffiths. Measuring DSM-5 Internet Gaming Disorder: Development and validation of a short psychometric scale. Computers in Human Behavior, 45:137–143, 2015.
[20] Lucia Monacis, Valeria de Palo, Mark D. Griffiths, and Maria Sinatra. Validation of the Internet Gaming Disorder Scale – Short-Form (IGDS9-SF) in an Italian-speaking sample. Journal of Behavioral Addictions, 5(4):683–690, 2016.
[21] K. Kroenke, R. L. Spitzer, and J. B. W. Williams. The PHQ-9: Validity of a brief depression severity measure. Journal of General Internal Medicine, 16(9):606–613, 2001.
[22] Concetta Esposito, Dario Bacchini, and Gaetana Affuso. Adolescent non-suicidal self-injury and its relationships with school bullying and peer rejection. Psychiatry Research, 274:1–6, 2019.
[23] Nieves Moyano and Maria del Mar Sanchez-Fuentes. Homophobic bullying at schools: A systematic review of research, prevalence, school-related predictors and consequences. Aggression and Violent Behavior, 53:101441, 2020.
[24] Stefanos Stylianos Plexousakis, Elias Kourkoutas, Theodoros Giovazolias, Kalliopi Chatira, and Dimitrios Nikolopoulos. School bullying and post-traumatic stress disorder symptoms: The role of parental bonding. Frontiers in Public Health, 7:75, 2019.
|
http://arxiv.org/abs/2306.09647v1
|
20230616063851
|
Environmental dependence of Type IIn supernova properties
|
[
"Takashi J. Moriya",
"Lluis Galbany",
"Cristina Jimenez-Palau",
"Joseph P. Anderson",
"Hanindyo Kuncarayakti",
"Sebastian F. Sanchez",
"Joseph D. Lyman",
"Thallis Pessi",
"Jose L. Prieto",
"Christopher S. Kochanek",
"Subo Dong",
"Ping Chen"
] |
astro-ph.HE
|
[
"astro-ph.HE",
"astro-ph.SR"
] |
Type IIn SN properties and environments
Moriya, Galbany, et al.
1. National Astronomical Observatory of Japan, National Institutes of Natural Sciences, 2-21-1 Osawa, Mitaka, Tokyo 181-8588, Japan
2. School of Physics and Astronomy, Faculty of Science, Monash University, Clayton, Victoria 3800, Australia
3. Institute of Space Sciences (ICE, CSIC), Campus UAB, Carrer de Can Magrans, s/n, E-08193 Barcelona, Spain
4. Institut d'Estudis Espacials de Catalunya (IEEC), E-08034 Barcelona, Spain
5. European Southern Observatory, Alonso de Córdova 3107, Casilla 19, Santiago, Chile
6. Millennium Institute of Astrophysics MAS, Nuncio Monsenor Sotero Sanz 100, Off. 104, Providencia, Santiago, Chile
7. Tuorla Observatory, Department of Physics and Astronomy, FI-20014 University of Turku, Finland
8. Finnish Centre for Astronomy with ESO (FINCA), FI-20014 University of Turku, Finland
9. Instituto de Astronomía, Universidad Nacional Autónoma de México, A.P. 70-264, 04510 México, D.F., Mexico
10. Department of Physics, University of Warwick, Coventry CV4 7AL, UK
11. Núcleo de Astronomía de la Facultad de Ingeniería y Ciencias, Universidad Diego Portales, Av. Ejército 441, Santiago, Chile
12. Department of Astronomy, The Ohio State University, 140 W. 18th Ave., Columbus, OH, 43210, USA
13. Center for Cosmology and Astroparticle Physics (CCAPP), The Ohio State University, 191 W. Woodruff Ave., Columbus, OH, 43210, USA
14. Kavli Institute for Astronomy and Astrophysics, Peking University, Yi He Yuan Road 5, Hai Dian District, Beijing 100871, China
15. Department of Particle Physics and Astrophysics, Weizmann Institute of Science, 234 Herzl St, 7610001 Rehovot, Israel
Type IIn supernovae occur when stellar explosions are surrounded by dense hydrogen-rich circumstellar matter. The dense circumstellar matter is likely formed by extreme mass loss from their progenitors shortly before they explode. The nature of Type IIn supernova progenitors and the mass-loss mechanism forming the dense circumstellar matter are still unknown. In this work, we investigate if there are any correlations between Type IIn supernova properties and their local environments.
We use Type IIn supernovae with well-observed light-curves and host-galaxy integral field spectroscopic data so that we can estimate both supernova and environmental properties. We find that Type IIn supernovae with a higher peak luminosity tend to occur in environments with lower metallicity and/or younger stellar populations.
The circumstellar matter density around Type IIn supernovae is not significantly correlated with metallicity, so the mass-loss mechanism forming the dense circumstellar matter around Type IIn supernovae might be insensitive to metallicity.
Environmental dependence of Type IIn supernova properties
Takashi J. Moriya1,2E-mail: [email protected] (TJM),
Lluís Galbany3,4E-mail: [email protected] (LG),
Cristina Jiménez-Palau3,4,
Joseph P. Anderson5,6,
Hanindyo Kuncarayakti7,8,
Sebastián F. Sánchez9,
Joseph D. Lyman10,
Thallis Pessi11,5,
Jose L. Prieto11,6,
Christopher S. Kochanek12,13,
Subo Dong14,
Ping Chen15
Received 19 April 2023; accepted 15 June 2023
=========================================================================================================================================================================================================================================================================================================================================
§ INTRODUCTION
Type IIn supernovae (SNe IIn) occur when stars explode within dense hydrogen-rich circumstellar matter (CSM, <cit.>). The dense CSM is created by strong mass loss from the progenitors, with typical mass-loss rate estimates of more than 10^-4 <cit.>. Such mass-loss rates are much higher than those measured for typical stars <cit.>, and the progenitors and mass-loss mechanisms of SNe IIn are still not well understood. It is suggested that the high mass-loss rates are similar to those of massive (≳ 25) luminous blue variable stars (LBVs, e.g., <cit.>). Indeed, the progenitor of the Type IIn SN 2005gl is consistent with a massive LBV <cit.>. On the other hand, the progenitor of the Type IIn SN 2008S was relatively low mass (≃ 10, <cit.>). These events suggest that the progenitors and mass-loss mechanisms of SNe IIn are diverse. In addition, SNe Ia are sometimes hidden below dense hydrogen-rich CSM and observed as SNe IIn <cit.>.
The local environments where SNe explode provide rich information on their progenitors (see <cit.> for a review). For example, SNe II and Ibc, but not SNe Ia, preferentially occur in star-forming environments, which indicates that SNe II and Ibc are associated with massive star explosions <cit.>. There have been several studies of the local environments of SNe IIn. <cit.> found that their locations are not necessarily associated with the most actively star-forming regions in their host galaxies and some may not be associated with very massive progenitors. A later study by <cit.> found that 60% of SNe IIn originate from actively star-forming regions and could be linked to very massive progenitors such as LBVs. The remaining 40% were not correlated with ongoing star formation and could have relatively low-mass progenitors <cit.>. Similarly, <cit.> estimated the age distributions of SN IIn progenitors based on spectra of their surroundings and found that they may have a bimodal age distribution with one peak at 0-20 Myr and the other at 100-300 Myr. These studies suggest that SN IIn progenitors are a mixture of massive (≳ 25 like LBVs) and low-mass (≃ 10 like the progenitor of SN 2008S) stars.
Other local properties may also provide information on their nature.
For example, <cit.> investigated the relationship between SN IIn progenitor mass-loss rates and their local metallicity. They found that the progenitors of SNe IIn may have higher mass-loss rates in higher metallicity environments.
In this work, we explore the environmental dependence of SN IIn properties by using SNe IIn with integral-field spectroscopy (IFS) of their host galaxies.
The IFS data allow us to estimate not only the metallicity but also environmental parameters such as the local star-formation rates (SFRs).
We introduce our SN IIn samples in Section <ref>. We estimate the local environmental parameters of the SN IIn explosion sites in Section <ref> and estimate the SN IIn properties in Section <ref>. We investigate possible correlations between the environmental and SN properties in Section <ref> and discuss them in Section <ref>. We conclude this paper in Section <ref>. We adopt a ΛCDM cosmology with H_0 = 68.3 km s^-1 Mpc^-1, Ω_M = 0.28, and Ω_Λ = 0.72 <cit.>.
§ SAMPLE DEFINITION
We constructed our sample using all galaxies observed with IFS from the PISCO, AMUSING, and MaNGA surveys to host a Type IIn SN.
The PMAS/PPak Integral field Supernova hosts COmpilation (PISCO; ) is a compilation of IFS observations of more than 400 SN host galaxies obtained with the Potsdam Multi Aperture Spectograph (PMAS; ) on the 3.5m telescope of the Centro Astronomico Hispano-Aleman (CAHA) at the Calar Alto Observatory. The observations were obtained in PPak mode <cit.>.
About a third of the objects were observed by the CALIFA survey <cit.>.
Each observation consists of a 3D datacube with a 100% covering factor within a hexagonal field-of-view (FoV) of ∼1.3 arcmin^2, with 1"×1" spatial pixels (spaxel) and a spectral resolution of ∼6 Å over the wavelength range 3750-7300 Å, providing ∼4000 spectra per object.
The All-weather MUse Supernova Integral-field of Nearby Galaxies (AMUSING; ; ; Galbany et al. in prep.) survey has been running for 11 semesters (P95-P106), and has compiled observations for more than 800 nearby SN host galaxies with the Multi-Unit Spectroscopic Explorer (MUSE; ), located at the Nasmyth B focus of Yepun, the VLT UT4 telescope at Cerro Paranal Observatory.
MUSE is composed of 24 identical IFS. Wide Field Mode (WFM) samples a nearly contiguous 1 arcmin^2 FoV with spaxels of 0.2 × 0.2 arcsec, and over a wavelength range of 4650-9300 Å with a mean resolution of R∼3000. Each 3D cube consists of ∼100,000 spectra per pointing.
The Mapping Nearby Galaxies at APO (MaNGA; )
was part of Sloan Digital Sky Survey (SDSS) IV () and
obtained IFS data of ∼10,000 nearby galaxies using 17 units of different hexagonal FoVs ranging from 12 to 32 arcsec in diameter at the 2.5m SDSS telescope at the Apache Point Observatory, in New Mexico.
The square spaxels are 0.5 arcsec across, with a spectral resolution of R∼2000 over a wavelength range of 3600-10000 Å.
After a thorough search of these three datasets, we compiled an initial sample of 66 SN IIn host galaxies where the SN location was within the FoV.
Next, we performed a thorough search for public light-curves of the 66 SNe IIn in the literature. For those objects that exploded in 2016 or after, we also used the ATLAS forced photometry service[https://fallingstar-data.com/forcedphot/queue/https://fallingstar-data.com/forcedphot/queue/] to obtain light curves. For those objects that exploded in 2018 or later, we utilized the ZTF forced photometry service[https://ztfweb.ipac.caltech.edu/cgi-bin/requestForcedPhotometry.cgihttps://ztfweb.ipac.caltech.edu/cgi-bin/requestForcedPhotometry.cgi].
From the 24 SNe IIn with publicly available data, only 17 had light-curves with enough quality and sampling during the rise to reliably determine the peak magnitude and rise time from explosion (see Section <ref>).
In addition, we obtained light curves with good sampling from All-Sky Automated Survey for SuperNovae (ASAS-SN, ) and follow-up observations with the Las Cumbres Observatory Global Telescope network (LCOGT) for four additional SNe IIn (ASASSN-15ab, ASASSN-16bw, ASASSN-16in, and ASASSN-16jt). The LCOGT photometry was performed according to the procedures described in <cit.>.
The final 21 SNe IIn in our sample are listed in Table <ref>.
§ LOCAL ENVIRONMENTS
The final sample of 21 SNe IIn is composed of 13 host galaxies observed with MUSE, five with PMAS and three with MaNGA. Synthetic r-band images created from the IFS cubes are displayed in Figure <ref>.
We followed a similar procedure for all three IFS instruments.
We extracted a rest-frame 2.7 arcsec diameter aperture spectrum for each SN position, corresponding to the worst spatial resolution in all the cubes.
We analyzed the spectra as in <cit.>.
We fit single stellar population (SSP) synthesis models to remove the underlying stellar continuum from the ionized gas-phase emission using STARLIGHT <cit.>.
STARLIGHT determines the fractional contribution of different SSP models to the spectrum, accounting for dust extinction as a foreground screen.
We use three parameters from the SSP fit: the stellar mass (M_*), the average light-weighted stellar population age (t_*,L), and the extinction derived from the stellar component (A_V*).
The best fit continuum model is then subtracted from each observed spectrum to leave the ionized gas-phase emission.
Figure <ref> shows the aperture spectra, the best SSP fits, and their resulting gas-phase emission line spectra for all 21 SN IIn environments.
We fit the emission lines needed to estimate oxygen abundances using several different methods.
This included fitting Gaussian profiles to the Balmer Hα and Hβ lines, and to the [Nii], [Oiii], and [Sii] λλ6716,31 lines.
The [Nii] λ6548 and [Nii] λ6583 lines were fit simultaneously with Hα as three Gaussian profiles with fixed positions and similar widths, but free amplitudes.
In seven cases of relatively recent SNe (SN 2016bdu, SN 2016iaf,
ASASSN-16bw,
ASASSN-16in,
ASASSN-16jt,
SN 2017ghw, SN 2017hcc; see Figure <ref>) it was necessary to include a fourth component to account for a broad underlying emission coming from the CSM interaction (see also Martínez-Rodríguez in prep.).
The flux of the emission lines was corrected for dust extinction along the line of sight using the color excess (E(B-V)) estimate from the Hα/Hβ Balmer line flux ratios assuming the Case B recombination intrinsic ratio I(Hα)/I(Hβ)=2.86 for T=10,000 K and an electron density of 10^2 cm^-3 <cit.>, and a <cit.> extinction law.
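A short sketch of this Balmer-decrement correction is given below; the curve values k(Hα) ≈ 2.53 and k(Hβ) ≈ 3.61 are approximate coefficients often quoted for a Cardelli et al. (1989) R_V = 3.1 law and should be treated as assumptions, and the fluxes are placeholders:

import numpy as np

K_HALPHA, K_HBETA = 2.53, 3.61   # assumed Cardelli (1989), R_V = 3.1 curve values
INTRINSIC_RATIO = 2.86           # Case B, T = 10,000 K, n_e = 100 cm^-3

def color_excess(f_halpha, f_hbeta):
    """E(B-V) from the observed Halpha/Hbeta Balmer decrement."""
    return 2.5 / (K_HBETA - K_HALPHA) * np.log10((f_halpha / f_hbeta) / INTRINSIC_RATIO)

def deredden(flux, k_lambda, ebv):
    """Extinction-corrected line flux for a line with extinction-curve value k_lambda."""
    return flux * 10 ** (0.4 * k_lambda * ebv)

ebv = color_excess(f_halpha=3.5e-15, f_hbeta=1.0e-15)   # placeholder fluxes in erg/s/cm^2
f_halpha_corr = deredden(3.5e-15, K_HALPHA, ebv)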
The ongoing SFR can be directly estimated from the extinction-corrected Hα flux following <cit.>,
SFR [M_⊙ yr^-1] = 7.9 × 10^-42 L(Hα),
where
L(Hα) = 4π d_L^2 F(Hα),
is the extinction-corrected Hα luminosity in units of erg s^-1 and d_L is the luminosity distance to the galaxy. The SFR density (Σ_SFR) is obtained by dividing the SFR by the area of the aperture in kpc^2, and the specific SFR (sSFR) is obtained by dividing the SFR by the stellar mass obtained from the fit.
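The two equations above translate into a short helper; the flux, distance, aperture area, and stellar mass below are placeholder values, and the luminosity distance must be supplied in cm for a luminosity in erg s^-1:

import numpy as np

def sfr_from_halpha(f_halpha, d_l_cm):
    """SFR in M_sun/yr from an extinction-corrected Halpha flux in erg/s/cm^2."""
    l_halpha = 4.0 * np.pi * d_l_cm ** 2 * f_halpha   # L(Halpha) in erg/s
    return 7.9e-42 * l_halpha

MPC_CM = 3.086e24                      # 1 Mpc in cm
sfr = sfr_from_halpha(f_halpha=2.0e-14, d_l_cm=100 * MPC_CM)  # placeholder inputs

aperture_area_kpc2 = 1.7               # placeholder aperture area in kpc^2
stellar_mass = 3.0e9                   # placeholder M_* in M_sun from the SSP fit
sigma_sfr = sfr / aperture_area_kpc2   # SFR surface density
ssfr = sfr / stellar_mass              # specific SFR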
While the Hα flux is an indicator of the ongoing SFR, the Hα equivalent width, EW(Hα), is a measurement of how strong the emission line is compared with the stellar continuum. The stellar continuum is dominated by the contribution from old stars, which also contain most of the stellar mass. The EW(Hα) represents the fraction of young stars, and it can be thought of as an indicator of the strength of the ongoing SFR compared with the past SFR and it decreases with time if no ongoing star-formation is present. It is a reliable proxy for the age of the youngest stellar components <cit.>.
To estimate EW(Hα), we divided the observed spectrum by the fit, and repeated the weighted
nonlinear least-squares fit of the Hα line in the normalized spectra.
The most commonly used metallicity indicator in interstellar medium (ISM) studies is the oxygen abundance, since it is the most abundant metal in the gas phase and has very strong optical nebular lines. We estimated the oxygen abundances, 12 + log(O/H), using three different empirical calibrations based on emission-line ratios. In particular, we used the N2 index with the <cit.> calibrations updated from <cit.>, based on the [Nii]/Hα ratio,
12 + log(O/H)_N2 = 8.743 + 0.462 × log([Nii]/Hα),
and the O3N2 index, based on the difference between the logs of the [Oiii]/Hβ and [Nii]/Hα ratios,
12 + log(O/H)_O3N2 = 8.533 - 0.214 × log([Oiii]/Hβ × Hα/[Nii]).
Finally, we used the sulphur-based calibrator from <cit.>, based on the [Nii]/[Sii] and [Nii]/Hα ratios,
12 + log(O/H)_D16 = 8.77 + y + 0.45 × (y + 0.3),
where y = log([Nii]/[Sii]) + 0.264 × log([Nii]/Hα).
All these calibrations are quite insensitive to extinction because the emission lines used for the ratio diagnostics are close in wavelength. The ratios are also little affected by differential atmospheric refraction (DAR), although DAR has been corrected for during data reduction.
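The three calibrations translate directly into a small helper; the line fluxes are assumed to be extinction-corrected, the expressions follow the equations as written above, and the example fluxes are placeholders:

import numpy as np

def oh_n2(nii, halpha):
    return 8.743 + 0.462 * np.log10(nii / halpha)

def oh_o3n2(oiii, hbeta, nii, halpha):
    return 8.533 - 0.214 * np.log10((oiii / hbeta) * (halpha / nii))

def oh_d16(nii, sii, halpha):
    y = np.log10(nii / sii) + 0.264 * np.log10(nii / halpha)
    return 8.77 + y + 0.45 * (y + 0.3)   # correction term as given in the equation above

# Placeholder line fluxes in consistent (arbitrary) units:
print(oh_n2(0.3, 1.0), oh_o3n2(1.2, 0.35, 0.3, 1.0), oh_d16(0.3, 0.45, 1.0))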
The resulting metallicities are reported in Table <ref> and the other local environmental properties are summarized in Table <ref>.
Pearson correlation coefficients, their standard deviations, and p values. Each cell lists ρ ± σ with the p value in parentheses; the columns are (1) 12+log(O/H)_N2, (2) 12+log(O/H)_O3N2, (3) 12+log(O/H)_D16, (4) log Σ_SFR, (5) EW(Hα), (6) log sSFR, and (7) ⟨log t_*,L⟩.

No host galaxy extinction correction
Rise time: (1) -0.17±0.27 (p=0.49), (2) -0.07±0.26 (p=0.74), (3) -0.22±0.25 (p=0.22), (4) -0.37±0.17 (p=0.031), (5) -0.01±0.25 (p=0.91), (6) 0.18±0.19 (p=0.34), (7) -0.25±0.16 (p=0.097)
Peak mag.: (1) 0.67±0.08 (p=0.000011), (2) 0.56±0.13 (p=0.0010), (3) 0.60±0.09 (p=0.0000040), (4) 0.16±0.26 (p=0.63), (5) -0.18±0.15 (p=0.17), (6) -0.36±0.15 (p=0.027), (7) 0.42±0.10 (p=0.00032)
log A_*: (1) -0.39±0.20 (p=0.041), (2) -0.31±0.22 (p=0.13), (3) -0.37±0.16 (p=0.034), (4) -0.25±0.22 (p=0.28), (5) 0.12±0.18 (p=0.58), (6) 0.31±0.17 (p=0.067), (7) -0.33±0.12 (p=0.011)

Host galaxy extinction correction with E(B-V)
Rise time: (1) -0.16±0.27 (p=0.51), (2) -0.06±0.25 (p=0.78), (3) -0.21±0.25 (p=0.24), (4) -0.37±0.18 (p=0.032), (5) -0.01±0.25 (p=0.92), (6) 0.19±0.19 (p=0.34), (7) -0.25±0.16 (p=0.10)
Peak mag.: (1) 0.66±0.08 (p=0.000016), (2) 0.54±0.12 (p=0.0010), (3) 0.57±0.09 (p=0.000049), (4) 0.06±0.25 (p=0.89), (5) -0.20±0.14 (p=0.15), (6) -0.37±0.15 (p=0.028), (7) 0.39±0.11 (p=0.0021)
log A_*: (1) -0.37±0.20 (p=0.051), (2) -0.28±0.22 (p=0.17), (3) -0.34±0.17 (p=0.047), (4) -0.20±0.22 (p=0.39), (5) 0.12±0.19 (p=0.58), (6) 0.31±0.17 (p=0.070), (7) -0.30±0.12 (p=0.018)

Host galaxy extinction correction with A_V*
Rise time: (1) -0.16±0.27 (p=0.51), (2) -0.06±0.25 (p=0.78), (3) -0.21±0.25 (p=0.24), (4) -0.37±0.18 (p=0.032), (5) -0.01±0.25 (p=0.92), (6) 0.19±0.19 (p=0.34), (7) -0.25±0.16 (p=0.10)
Peak mag.: (1) 0.66±0.09 (p=0.000013), (2) 0.55±0.13 (p=0.0015), (3) 0.60±0.08 (p=0.000022), (4) 0.20±0.25 (p=0.49), (5) -0.13±0.15 (p=0.34), (6) -0.31±0.17 (p=0.059), (7) 0.42±0.10 (p=0.00035)
log A_*: (1) -0.39±0.20 (p=0.043), (2) -0.30±0.22 (p=0.14), (3) -0.37±0.16 (p=0.037), (4) -0.28±0.21 (p=0.21), (5) 0.09±0.18 (p=0.69), (6) 0.29±0.18 (p=0.097), (7) -0.33±0.12 (p=0.011)
§ SN IIN PROPERTIES
SNe IIn are characterized by their high CSM density.
We assume that the CSM density is ρ_CSM=Ar^-2, where A is constant and r is the radius. Given a mass-loss rate (Ṁ) and a wind velocity (v_wind) of the progenitor, the constant is
A = Ṁ / (4π v_wind).
Following convention <cit.>, we define
A_∗ = 1/4π(Ṁ/10^-6 )(v_wind/100 )^-1.
Assuming that shock breakout occurs inside the dense CSM, the rise time and peak luminosity can be related to the density <cit.>. Following <cit.>,
A = C_2^{-(n-2)/n} C_3^{-(n-2)/(4n-5)} ε^{-(n-2)/(4n-5)} κ^{-3(n-1)/(4n-5)} t_d^{3(n-1)/(4n-5)} L_p^{(n-2)/(4n-5)},
where
C_2 = c^{-1/(n-2)} [ 2π(n-4)(n-3)(n-δ) [(3-δ)(n-3)]^{(n-5)/2} / [2(5-δ)(n-5)]^{(n-3)/2} ]^{1/(n-2)} × [(n-2)/(n-3)]^{(n-3)/(n-2)},
C_3 = [2π/(n-5)] c^{(n-5)/[n(n-2)]} [ (1/4π)(n-δ) [2(5-δ)(n-5)]^{(n-3)/2} / [(3-δ)(n-3)]^{(n-5)/2} ]^{(4n-5)/[n(n-2)]} × [(n-4)(n-3)/2]^{(n-1)(n-5)/[n(n-2)]} [(n-3)/(n-2)]^{(n-5)(n-3)/[n(n-2)]},
ε is the conversion efficiency from kinetic energy to radiation at the shock, κ=0.34 cm^2 g^-1 is the electron scattering opacity in the CSM, t_d is the rise time, L_p is the peak bolometric luminosity, and c is the speed of light. Here, the SN ejecta density ρ_ejecta is assumed to have a two-component power-law structure (ρ_ejecta∝ r^-n outside and ρ_ejecta∝ r^-δ inside) with n=7 and δ=0 <cit.>. We assume a constant ε=0.3 in our analysis (e.g., <cit.>, but see also <cit.>).
This formalism is applicable to bolometric light curves. However, it is difficult to estimate bolometric luminosity without extensive multi-wavelength observations, and such observations are rarely available. Here, we use observed light curves in the o filter (5600-8200 Å), the R band filter (5500-8600 Å), or the r band filter (5600-7300 Å) to estimate the rise time and peak luminosity. We do not include a bolometric correction, because the bolometric correction near the luminosity peak in this wavelength range is estimated to be small (e.g., around -0.3 mag in the R band for SN 2010jl, <cit.>). In the case of ASASSN-15ab and ASASSN-16in, we use V band (4800-6400 Å) light-curves that provide better constraints on the rising part of the light curve.
The light-curves are corrected for the Galactic extinction. The host galaxy extinction is uncertain. Although we estimated E(B-V) and A_V* from the host galaxy spectra, they do not necessarily represent the extinction at the exact SN location. Here, we assume three cases: no host extinction, the host extinction correction with E(B-V), and the host extinction correction with A_V*. We find that our results are independent of the choice of the host galaxy extinction. We discuss the case without the host galaxy extinction in the following sections.
The rise time and peak luminosity of our SN IIn sample are estimated using the method developed by <cit.>. We fit the rising part of the light-curves to estimate t_d and L_p,
L(t) = L_p [1 - ((t - t_peak)/t_d)^2],
where t_peak is the time of the luminosity peak. The fits are shown in Fig. <ref>, and the estimated rise times and peak luminosities are summarized in Table <ref>. Because of the uncertainties in the rise time and the peak luminosity caused by the distance uncertainties and bolometric corrections, we assume a 1σ uncertainty of 3 days and 0.3 mag in the rise time and peak luminosity, respectively.
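A minimal sketch of this rise fit with scipy is shown below; the epochs and luminosities are synthetic placeholders, and observed magnitudes would first be converted to luminosities:

import numpy as np
from scipy.optimize import curve_fit

def rising_lc(t, l_peak, t_peak, t_d):
    """Parabolic rise of the equation above, reaching l_peak at t_peak."""
    return l_peak * (1.0 - ((t - t_peak) / t_d) ** 2)

# Placeholder rising-branch data: epochs in days and luminosities in erg/s.
t_obs = np.array([-18.0, -14.0, -10.0, -6.0, -3.0, 0.0])
l_obs = np.array([0.9, 2.5, 3.7, 4.5, 4.9, 5.0]) * 1e42

p0 = (l_obs.max(), t_obs[-1], 20.0)             # initial guesses for L_p, t_peak, t_d
popt, pcov = curve_fit(rising_lc, t_obs, l_obs, p0=p0)
l_peak, t_peak, t_d = popt                       # peak luminosity L_p and rise time t_d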
Table <ref> also includes the CSM density estimates, together with the corresponding mass-loss rates for v_wind=100. The estimated mass-loss rates with v_wind=100 range from ∼ 10^-3 to ∼ 10^-2, and they are consistent with previous studies <cit.>. In the following analysis, we assume a 0.5 dex uncertainty in the CSM density estimates to account for possible systematic uncertainties as well as the uncertainties in estimating the rise time and peak luminosity.
§ ENVIRONMENTAL DEPENDENCE
Using the SN IIn environmental properties (Section <ref>) and SN IIn properties (Section <ref>), we next investigate if there exist any correlations among them. We evaluate the Pearson correlation coefficient ρ to determine the existence and strength of correlations. We employ 10^6 bootstrapping simulations and derive the Pearson correlation coefficient, its standard deviation, and the p value for each. Each bootstrapping simulation is performed by randomly selecting 21 SNe allowing multiple selections of the same SN IIn.
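A sketch of this bootstrap estimate is given below, with far fewer resamples than the 10^6 used above and synthetic placeholder values standing in for the 21 measured pairs:

import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_boot = 10_000                        # 10^6 in the analysis above; reduced here for speed

# Placeholder values for the two quantities being correlated:
x = rng.normal(8.5, 0.2, size=21)      # e.g., 12+log(O/H)_N2 at the 21 SN sites
y = rng.normal(-18.5, 1.0, size=21)    # e.g., peak magnitudes of the same SNe

rhos, pvals = [], []
for _ in range(n_boot):
    idx = rng.integers(0, len(x), size=len(x))   # resample the 21 SNe with replacement
    r, p = pearsonr(x[idx], y[idx])
    rhos.append(r)
    pvals.append(p)

print(np.mean(rhos), np.std(rhos), np.median(pvals))   # rho, its standard deviation, p value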
Table <ref> summarizes the Pearson correlation coefficients, their standard deviations, and the p values for each.
One statistically significant correlation is a positive correlation between the peak magnitude and all three metallicity indicators. This means that more luminous SNe IIn tend to appear in lower metallicity environments. Figure <ref> illustrates the correlation. The other significant correlation is a very weak positive correlation between the peak magnitude and the average light-weighted stellar population age (⟨log t_*,L⟩). In other words, more luminous SNe IIn prefer to occur in environments with younger stellar populations (Fig. <ref>). We also found that metallicity and average light-weighted stellar population age might be weakly correlated (Fig. <ref>). Thus, it is not clear if the peak luminosity correlation is driven by metallicity, stellar population age, or both. Because we found stronger correlations with metallicity, it is possible that metallicity difference is the main cause of the correlation.
It is worth noting that we do not find significant correlations between metallicity and CSM density (Fig. <ref>). A very weak negative correlation between metallicity and CSM density (i.e., SNe IIn with higher metallicity tend to have less dense CSM) may exist, but it is still statistically marginal and depends on the metallicity indicator. Interestingly, no positive correlation is likely to exist. <cit.> previously investigated the metallicity dependence of mass-loss rates and wind velocities in SNe IIn. They concluded that SNe IIn from higher metallicity environments have higher mass-loss rates and wind velocities. Figure <ref> shows the CSM density estimates from the SNe IIn used in their analysis. The mass-loss rates and wind velocities in <cit.> are taken from a range of sources using different methodologies, and are not necessarily estimated in a consistent way. Nonetheless, we do not find a significant correlation between the CSM density and metallicity in their sample, either. Our results show that, although mass-loss rates and wind velocities may have a metallicity dependence as proposed by <cit.>, the CSM density (A∝Ṁ/v_wind) is not significantly metallicity dependent.
For the other combinations of the parameters, we do not find any statistically significant correlations. There may be other very weak correlations such as between the rise time and logΣ_SFR, between the peak magnitude and logsSFR, and between log A_∗ and ⟨log t_*,L⟩. More SNe IIn are required to determine the validity of any additional correlations.
§ DISCUSSION
We found that there is a negative correlation between metallicity and peak luminosity of SNe IIn in the sense that more luminous SNe IIn are associated with lower metallicity environments. We also found a weak negative correlation between stellar population age and peak luminosity. The luminosity of SNe IIn can be characterized by ε E_kin/t_d, where E_kin is the kinetic energy in the shocked SN ejecta up to the time of the luminosity peak <cit.>. We found that rise time, which is related to t_d, does not correlate with metallicity or stellar age. The conversion efficiency ε is not likely to be sensitive to metallicity and stellar population age, although it could be higher for higher metallicities because of more efficient cooling. Thus, the negative correlation could be caused by the fact that SNe IIn tend to have higher explosion energies in lower metallicity environments and/or younger stellar populations. Because higher mass progenitors tend to have higher explosion energies <cit.>, it may be natural to expect SNe IIn from younger stellar populations to have higher explosion energies. However, we do not find any correlations between EW(Hα) and peak luminosity. It is also possible that SN IIn progenitor masses tend to be higher at lower metallicity.
We did not find a significant correlation between metallicity and CSM density. This is interesting because some mass-loss mechanisms predict a positive correlation between mass-loss rate and metallicity. For example, in the case of hot massive stars, <cit.> find that
log(Ṁ/(M_⊙ yr^-1)) = -5.55 + 0.79 log(Z/Z_⊙)
+ [2.16-0.32log(Z/Z_⊙) ]log(L/10^6 L_⊙),
with
v_wind∝ Z^p(L) and p(L) = -0.41 log(L/10^6 L_⊙)-0.32.
Here, Z is metallicity and L is luminosity of a star. This leads to a CSM density factor scaling of
A∝ Z^1.11+0.09log(L/10^6L_⊙)L^2.16.
For a given luminosity, the CSM density is expected to positively correlate with the metallicity. In order to have no or negative correlations between A and Z, the SN IIn progenitor luminosity L could increase at low metallicity. Ignoring the small term 0.09log(L/10^6L_⊙) and assuming L∝ Z^α for SN IIn progenitors, we obtain A∝ Z^1.11+2.16α. Thus, α≲ -0.5 is required to have no or negative correlations between Z and A. If the progenitor luminosity is close to the Eddington luminosity (i.e., L∝ M), an increase in progenitor mass by a factor of around 2 for a metallicity increase by a factor of 0.3 would produce no correlations, for example.
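The scaling argument in the previous paragraph can be checked with a few lines of symbolic algebra; the sketch below simply reproduces the exponent 1.11 + 2.16α and the critical value α ≈ -0.51, under the stated assumptions (the small 0.09 log(L/10^6 L_⊙) term is dropped and L ∝ Z^α).

```python
import sympy as sp

alpha = sp.symbols("alpha", real=True)

# A ∝ Z**1.11 * L**2.16 with L ∝ Z**alpha, so the total exponent of Z is:
exponent = sp.Rational(111, 100) + sp.Rational(216, 100) * alpha
alpha_crit = sp.solve(sp.Eq(exponent, 0), alpha)[0]

print(exponent)                       # 1.11 + 2.16*alpha, as a rational expression
print(alpha_crit, float(alpha_crit))  # -37/72 ~ -0.51, i.e. alpha <~ -0.5
```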
In the case of cool stars such as RSGs, the metallicity dependence of Ṁ is not so clear. RSG mass-loss rates have been suggested to follow a relation of Ṁ∝ L^1.05Z^0.7 with v_wind∝ L^0.35 <cit.>, while <cit.> suggested no metallicity dependence for RSG mass-loss rates (Ṁ∝ L^0.9 with v_wind∝ ZL^0.4). The two prescriptions predict quite different CSM density dependences on metallicity with A∝ L^0.7Z^0.7 <cit.> or A∝ L^0.5Z^-1 <cit.>. In both cases, CSM density around RSGs is predicted to strongly depend on metallicity. Nonetheless, because of huge uncertainties in the metallicity dependence of RSG mass loss, it is difficult to judge from the metallicity dependence whether SN IIn progenitors are dominated by RSGs or not. Additional investigations into the metallicity dependence of RSG mass loss are required.
Because of their high mass-loss rates, the progenitors of SNe IIn may actually have optically-thick winds forming a dense CSM. Mass-loss rates and wind velocities from optically-thick winds are also predicted to be metallicity dependent, but their dependence may also compensate to have a metallicity-independent CSM density <cit.>.
It is also possible that the normal mass-loss mechanisms for hot and cool stars are irrelevant for SN IIn progenitors. Their CSM density may be driven by a totally different mass-loss mechanism that is not strongly affected by metallicity. Precursors observed in some SNe IIn <cit.> may indeed indicate that their mass-loss mechanism is quite different from those of metallicity-dependent steady winds
discussed above. For example, continuum-driven winds are not expected to have a metallicity dependence <cit.>. Further investigation of the environmental dependence of SN IIn properties would help understanding such an unknown mass-loss mechanism in SNe IIn.
Another possibility to explain the apparent lack of a metallicity dependence is that the CSM density actually depends on the metallicity, but we do not find it clearly because the CSM density needs to be high enough to be observed as SNe IIn. We might be simply biased to SNe having a CSM density above a certain metallicity-independent threshold by observing SNe IIn. In such a case, the apparent lack of the metallicity dependence would simply be an observational bias.
§ CONCLUSIONS
Using 21 SNe IIn with good light-curves and local IFS data, we investigated the relationship between local environments and SN properties. We found that SNe IIn with a higher peak luminosity tend to be in environments with lower metallicities and stellar population ages. Because metallicity and stellar population age are correlated in our sample, it is unclear if metallicity, stellar population age, or both drive the correlations. The correlations may indicate that SNe IIn have higher explosion energies in environments with lower metallicity and/or younger stellar ages.
We did not find statistically significant correlations between local metallicity and CSM density around SNe IIn. There might be a very weak negative correlation, but no positive correlation exists. This indicates that the mass-loss mechanism triggering the formation of dense CSM around SNe IIn could be metallicity independent. Alternatively, SN IIn progenitor mass range may depend on metallicity. It is also possible that the lack of the metallicity dependence is an observational bias due to needing a minimum threshold CSM density to be classified as a SN IIn.
Our study is based on 21 SNe IIn. Some correlations are still not significant and further confirmation is required. In addition, it is possible that some biases exist in our sample. Thus, a similar study with a larger number of SNe IIn is encouraged. Wide-field high-cadence transient surveys are increasing the number of well-observed SNe IIn. Follow-up observations to obtain local environment information to increase the sample size will be important in uncovering the mysterious nature of SNe IIn.
We thank the anonymous referee for thoughtful comments.
This work was supported by the NAOJ Research Coordination Committee, NINS (NAOJ-RCC-2201-0401).
TJM is supported by the Grants-in-Aid for Scientific Research of the Japan Society for the Promotion of Science (JP20H00174, JP21K13966, JP21H04997).
L.G. acknowledges financial support from the Spanish Ministerio de Ciencia e Innovación (MCIN), the Agencia Estatal de Investigación (AEI) 10.13039/501100011033, and the European Social Fund (ESF) "Investing in your future" under the 2019 Ramón y Cajal program RYC2019-027683-I and the PID2020-115253GA-I00 HOSTFLOWS project, from Centro Superior de Investigaciones Científicas (CSIC) under the PIE project 20215AT016, and the program Unidad de Excelencia María de Maeztu CEX2020-001058-M.
H.K. was funded by the Academy of Finland projects 324504 and 328898.
JDL acknowledges support from a UK Research and Innovation Fellowship (MR/T020784/1).
We acknowledge the Telescope Access Program (TAP) funded by the NAOC, CAS and the Special Fund for Astronomy from the Ministry of Finance. SD acknowledges Project number 12133005 supported by National Natural Science Foundation of China (NSFC) and the Xplorer Prize.
This work is supported by the Japan Society for the Promotion of Science Open Partnership Bilateral Joint Research Project between Japan and Chile (JPJSBP120209937, JPJSBP120239901).
This work was funded by ANID, Millennium Science Initiative, ICN12_009.
Based on observations collected at the Centro Astronómico Hispano en Andalucía (CAHA) at Calar Alto, operated jointly by Junta de Andalucía and Consejo Superior de Investigaciones Científicas (IAA-CSIC).
Based on observations collected at the European Organisation for Astronomical Research in the Southern Hemisphere under ESO programmes
096.D-0296,
0100.D-0341,
0103.D-0440,
0101.D-0748,
196.B-0578, and
1100.B-0651.
This research was partly supported by the Munich Institute for Astro-, Particle and BioPhysics (MIAPbP) which is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany´s Excellence Strategy – EXC-2094 – 390783311.
§ FIGURES OF THE SN ENVIRONMENTS
We present supplementary figures presenting each SN environment. Figure <ref> shows SN host galaxies with SN locations, and Fig. <ref> shows their spectra used for SN environment parameter estimations.
§ LIGHT-CURVE FITTING RESULTS
The results of light-curve fitting that are used to estimate rise time and peak luminosity of our SN IIn sample are presented in Fig. <ref>. The fitting formula is Eq. (<ref>).
|
http://arxiv.org/abs/2306.02796v1
|
20230605114636
|
MCTS: A Multi-Reference Chinese Text Simplification Dataset
|
[
"Ruining Chong",
"Luming Lu",
"Liner Yang",
"Jinran Nie",
"Shuhan Zhou",
"Yaoxin Li",
"Erhong Yang"
] |
cs.CL
|
[
"cs.CL"
] |
MCTS: A Multi-Reference Chinese Text Simplification Dataset
Ruining Chong, Luming Lu, Liner Yang, Jinran Nie, Shuhan Zhou, Yaoxin Li, Erhong Yang
====================================================================================
Text simplification aims to make the text easier to understand by applying rewriting transformations.
There has been very little research on Chinese text simplification for a long time.
The lack of generic evaluation data is an essential reason for this phenomenon.
In this paper, we introduce MCTS, a multi-reference Chinese text simplification dataset.
We describe the annotation process of the dataset and provide a detailed analysis of it.
Furthermore, we evaluate the performance of some unsupervised methods and advanced large language models.
We hope to build a basic understanding of Chinese text simplification through the foundational work and provide references for future research.
We release our data at <https://github.com/blcuicall/mcts>.
§ INTRODUCTION
The task of text simplification aims to make the text easier to understand by performing multiple rewriting transformations.
It can provide reading assistance for children <cit.>, non-native speakers <cit.> and people with language disorders <cit.>.
Moreover, text simplification can also be used as a method of data augmentation to benefit downstream natural language processing (NLP) tasks <cit.>.
For a long time, the research of text simplification systems mainly depends on large-scale parallel corpora for training, such as WikiLarge <cit.> and Newsela <cit.>.
But due to the limitation of existing data in language and domain, recent work on text simplification systems has started to focus on unsupervised methods and achieves good results <cit.>, which makes it possible to build Chinese text simplification systems independent of large-scale parallel corpora.
In this case, how to evaluate the Chinese text simplification systems becomes a problem to be solved.
On the other hand, large language models have the ability to solve various NLP tasks <cit.>.
Recently a series of large language models represented by ChatGPT [<https://chat.openai.com/chat>] performs well on many tasks <cit.>.
In English text simplification, Feng et al. <cit.> find that large language models outperform state-of-the-art methods and are judged to be on par with human annotators.
Nevertheless, whether these models can achieve the same excellent results in Chinese text simplification remains unclear.
To solve these problems, in this paper, we introduce MCTS, a multi-reference dataset for evaluating Chinese text simplification models.
MCTS consists of 3,615 human simplifications associated with 723 original sentences selected from the Penn Chinese Treebank <cit.> (5 simplifications per original sentence).
We hope to use this dataset to measure the development status of Chinese text simplification and provide references for future research.
We design several simple unsupervised Chinese text simplification methods and test them on our proposed dataset.
These methods can be served as the baselines for future studies.
Furthermore, we evaluate the Chinese text simplification ability of the most advanced large language models, GPT-3.5 and ChatGPT.
The results show that these large language models could outperform the unsupervised methods we set up.
However, compared to human written simplification, there is still a certain gap.
In summary, our contributions are listed below:
* We manually annotated a dataset that can be used for the evaluation of Chinese text simplification.
It is a multi-reference dataset and contains multiple types of rewriting transformations.
* We provide several text features and conducted a detailed analysis of the dataset, which could help to understand the characteristics of human Chinese text simplification.
* On the proposed dataset, we evaluated the performance of some unsupervised methods and large language models, which could serve
as the baselines for future research.
§ RELATED WORK
§.§ Evaluation Data for English text simplification
Early evaluation data for English text simplification mainly consist of sentence pairs obtained from English Wikipedia and Simple English Wikipedia through automatic sentence alignment.
However, the Simple English Wikipedia was found to contain a large proportion of inadequate or inaccurate simplifications <cit.>.
And it is problematic to evaluate simplification systems with only a single reference because there are several ways of simplifying a sentence.
For the above reasons, Xu et al. <cit.> introduced TurkCorpus, a multi-reference dataset for the evaluation of English text simplification.
They first collected 2,359 original sentences from English Wikipedia and then obtained 8 manual simplification references for every original sentence via crowdsourcing.
The dataset can be used for evaluation metrics requiring multiple references, such as BLEU <cit.> and SARI <cit.>.
However, the rewriting transformations involved in TurkCorpus are very simple. Annotators were asked to simplify a sentence mainly by lexical paraphrasing but without deleting content or splitting the sentences.
While another multi-reference dataset for English text simplification, HSplit <cit.>, only contains the rewriting transformations of sentence split, which uses the same original sentences in the test set of TurkCorpus.
In order to involve multiple transformations, Alva-Manchego et al. <cit.> created the ASSET dataset.
Using the same original sentences, they extended TurkCorpus through crowdsourcing.
The dataset includes rewriting transformations of lexical paraphrasing (lexical simplification and reordering), sentence splitting, and compression (deleting unimportant information).
ASSET now has been adopted as a standard dataset for evaluating English text simplification systems.
Similar to ASSET, MCTS is a dataset with multiple references and multiple rewriting transformations.
To our best knowledge, it is the first multi-reference dataset used for Chinese text simplification evaluation.
§.§ Unsupervised Text Simplification
Unsupervised text simplification methods do not require aligned complex-simple sentence pairs.
Surya et al. <cit.> first attempted to realize an unsupervised neural text simplification system by introducing adversarial and denoising auxiliary losses.
They collected two separate sets of complex and simple sentences extracted from a parallel Wikipedia corpus and trained on them with auto-encoders.
Lu et al. <cit.> found that neural machine translation tends to generate high-frequency tokens.
Based on this finding, they built a pseudo text simplification corpus by pairing the source sentences of a translation corpus with the translations of their references in a bridge language, which can be used to train text simplification models in a Seq2Seq manner.
Martin et al. <cit.> leveraged paraphrase data mined from Common Crawl and used ACCESS <cit.>, a method to make any sequence-to-sequence model controllable, to generate simplifications rather than paraphrases at test time.
Their method achieved good results and was considered the state-of-the-art unsupervised text simplification method.
§.§ Large Language Models
Compared to general pre-trained models, large language models are also typically based on the transformer architecture but are much larger in scale, such as GPT-3 <cit.>, PaLM <cit.> and OPT <cit.>.
They can handle various NLP tasks through the given instructions, which do not require any gradient updates <cit.>.
ChatGPT is obtained by fine-tuning a GPT-3.5 via reinforcement learning from human feedback (RLHF) <cit.>.
As a large language model for intelligent human-computer dialogue, it can answer user input with high quality.
ChatGPT has recently attracted significant attention from the NLP community, and there have been many studies on it <cit.>.
However, exploring these models in Chinese text simplification is still lacking.
§ CREATING MCTS
In this section, we describe more details about MCTS.
In section 3.1, we introduce the preparation of original sentences.
And in section 3.2, we introduce the annotation process of MCTS.
§.§ Data Preparation
We use Penn Chinese Treebank (CTB) as the source of the original sentence in the dataset.
CTB is a phrase structure tree bank built by the University of Pennsylvania.
It includes Xinhua news agency reports, government documents, news magazines, broadcasts, interviews, online news, and logs.
We first filtered out the simple sentences using a filter based on the average lexical difficulty level in HSK to ensure that the original sentences we choose are sufficiently complex.
Then we manually selected from the remaining sentences.
Finally, we obtained 723 news sentences as the original sentence.
§.§ Annotation Process
MCTS is an evaluation dataset that is completely manually annotated.
The detailed annotating process is as follows.
Annotator Recruitment
All the annotators we recruited are native Chinese speakers and are undergraduate or graduate students in school.
Most of them have a background in linguistics or computer science.
All annotators needed to attend a training course and take the corresponding Qualification Test (see more details below) designed for our task.
Only those who have passed the Qualification Test could enter the Annotation Round.
Simplification Instructions
We provided the exact instructions for annotators for the Qualification Test and the Annotation Round.
In the instructions, we defined three types of rewriting transformations.
* Paraphrasing: Replacing complex words or phrases with simple formulations.
* Compression: Deleting repetitive or unimportant information from the sentence.
* Structure Changing: Modifying complex sentence structures into simple forms.
Compared to the rewriting transformations involved in ASSET, we replaced sentence splitting with structure changing.
The latter covers a broader range of operations and is more consistent with the actual practice of simplifying Chinese sentences.
Besides, the paraphrasing transformation in Chinese is much more flexible than in English.
It includes not only the substitution of synonyms but also the interpretation of complex phrases or idioms.
For every rewriting transformation, we provided several examples.
Annotators could decide for themselves which types of rewriting to execute in any given original sentence.
Qualification Test
At this stage, we provided 20 sentences to be simplified.
Annotators needed to simplify these sentences according to the instructions given.
We checked all submissions to filter out annotators who could not perform the task correctly.
Of the 73 people who initially registered, only 35 passed the Qualification Test (48%) and worked on the task.
Annotation Round
Annotators who passed the Qualification Test had access to this round.
To facilitate annotating work, we provided a platform that can display the difficulty level of words in a text.
We collected 5 simplifications for each of the 723 original sentences.
Table <ref> presents a few examples of simplifications in MCTS, together with their English translation.
§ DATASET ANALYSIS
Following ASSET <cit.>, we report a series of text features in MCTS and study the simplifications in the dataset through them.
§.§ Text Features
We calculated several low-level features for all simplification examples to measure the rewriting transformations included in MCTS.
These features are listed below; an illustrative sketch of how a few of them can be computed follows the list.
* Number of sentence splits: The difference between the number of sentences in the simplification and the number of sentences in the original sentence.
* Compression level: The number of characters in the simplification divided by the number of characters in the original sentence.
* Replace-only Levenshtein distance: The character-level Levenshtein distance <cit.> counting replace operations only, divided by the length of the shorter of the original sentence and the simplification. As described in ASSET, ignoring insertions and deletions makes this feature independent of the compression level, so it serves as a proxy for measuring the lexical paraphrasing in the simplification.
* Proportion of words deleted, added and reordered: The number of words deleted/reordered from the original sentence divided by the number of words in the original sentence; and the number of words that were added to the original sentence divided by the number of words in the simplification.
* Lexical complexity score ratio: We compute the score as the mean squared lexical difficulty level in HSK. The ratio is then the value of this score on the simplification divided by that of the original sentence, which can be considered as an indicator of lexical simplification.
* Dependency tree depth ratio: The ratio of the depth of the dependency parse tree of the simplification relative to that of the original sentence. Follwing ASSET <cit.>, we perform parsing using spaCy [<https://github.com/explosion/spaCy>]. This feature can reflect structural simplicity to a certain extent.
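To make the feature definitions concrete, a small sketch of how a few of them could be computed is given below. It is not the implementation used for the figures: jieba is assumed as the word segmenter, the example sentences are invented, and the deleted/added proportions are approximated with word sets.

```python
import jieba  # common Chinese word segmenter; the exact tooling used in the paper may differ

SENT_END = "。!?!?"

def n_sentences(text: str) -> int:
    """Count sentences by sentence-ending punctuation (at least one)."""
    return max(1, sum(1 for ch in text if ch in SENT_END))

def features(original: str, simplification: str) -> dict:
    orig_tokens = jieba.lcut(original)
    simp_tokens = jieba.lcut(simplification)
    orig_set, simp_set = set(orig_tokens), set(simp_tokens)
    return {
        "sentence_splits": n_sentences(simplification) - n_sentences(original),
        "compression_level": len(simplification) / len(original),
        # Set-based approximations of the deleted/added word proportions.
        "prop_deleted": len(orig_set - simp_set) / max(1, len(orig_tokens)),
        "prop_added": len(simp_set - orig_set) / max(1, len(simp_tokens)),
    }

print(features("这一届奥运会的组织工作获得了国际社会的高度评价。",
               "这届奥运会办得很好,受到了各国的称赞。"))
```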
§.§ Results and Analysis
The density of all these features is shown in Figure <ref>.
We can see that sentence splitting operation appears not frequently on MCTS.
By observing the data, we believe that this is due to the characteristics of Chinese.
Compound sentences, in which a single sentence consists of two or more independent clauses, are commonly used in Chinese.
During simplification, annotators tend to rewrite a complex sentence with nested clauses into a compound sentence rather than into multiple simple sentences.
So this is not to say that Chinese text simplification rarely involves sentence structure change, but that the way of structural change is not limited to sentence splitting.
Although we introduced compression as a rewriting transformation in the simplification instructions, the compression level is not strongly concentrated below 1.0.
The reason is that, on the one hand, the annotators tend to retain as much semantic information as possible, and on the other hand, more characters may be added when paraphrasing.
The replace-only Levenshtein distance shows that the simplifications in MCTS paraphrase the input to a considerable degree, as the distances are distributed across all levels.
Regarding the distribution of deleted, added, and reordered words, we can find that the peaks all occur at positions greater than 0.0.
This further reveals the plentiful rewriting operations contained in MCTS.
In terms of lexical complexity, we can clearly see a high density of ratios below 1.0, indicating that the simplifications have significantly lower lexical complexity than the original sentences.
Some instances have a lexical complexity ratio greater than 1.0, which may be due to simple words being deleted during sentence compression.
Finally, the dataset shows a high density around a ratio of 1.0 in dependency tree depth, which may indicate that major structural changes were not made.
§ EXPERIMENT
In order to measure the development status of Chinese text simplification and provide references for future research, we conducted a series of experiments on the proposed MCTS.
§.§ Methods
We evaluate several unsupervised Chinese text simplification methods and large language models and provide their results on MCTS.
The first three are unsupervised methods that utilize automatic machine translation technology.
We use Google Translator [<https://translate.google.com/>] to translate.
These unsupervised methods can be used as the baselines for future work.
Direct Back Translation
As high-frequency words tend to be generated in the process of neural machine translation <cit.>, back translation is a potential unsupervised text simplification method.
We translated the original Chinese sentences into English and then translated them back to obtain simplified results.
We chose English as the bridge language because of the rich bilingual translation resources between Chinese and English.
Translated Wiki-Large
Translating existing text simplification data into Chinese is a simple way to construct pseudo data.
We translated English sentence pairs in Wiki-Large into Chinese sentence pairs and used them to train a BART-based <cit.> model as one of our baselines.
Cross-Lingual Pseudo Data
In addition to the above two methods, we also designed a simple way to construct pseudo data for Chinese text simplification, which can leverage the knowledge from English text simplification models.
As shown in Figure <ref>, we first collect a large amount of Chinese sentence data, for example, the People's Daily Corpus.
Then, we translate these sentences into English and simplify them using existing English text simplification models.
Finally, we translate the simplified English sentences back into Chinese and align them with the original Chinese sentences to obtain parallel data.
To ensure data quality, we filter the obtained parallel data from three aspects: simplicity, fluency, and semantic retention.
For simplicity, we calculate the average lexical difficulty level for both the original sentence and the simplified sentence.
Only when the difficulty level of the simplified sentence is significantly reduced compared to that of the original sentence is the parallel sentence pair retained.
For fluency, we calculate the perplexity for the simplified sentences and filter out sentences above the preset threshold.
For semantic retention, we use sentence-transformers toolkit <cit.> to calculate the semantic similarity between the original sentence and simplified sentence, and also filter out sentences that exceed the preset threshold.
Using the filtered data, we train a BART-base model.
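A schematic version of this filtering step is sketched below. The threshold values, the model name, and the helper functions `segment`, `hsk_level`, and `ppl` are placeholders for illustration; only the sentence-transformers similarity call reflects an actual library API, and the direction of the similarity criterion is assumed here to mean keeping sufficiently similar pairs.

```python
from sentence_transformers import SentenceTransformer, util

# Assumed multilingual embedding model; the paper's exact model is not specified here.
st_model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

def avg_difficulty(sentence, segment, hsk_level):
    """Average HSK lexical difficulty over the segmented words (assumed helpers)."""
    words = segment(sentence)
    return sum(hsk_level(w) for w in words) / max(1, len(words))

def keep_pair(src, tgt, segment, hsk_level, ppl,
              min_drop=0.5, max_ppl=200.0, min_sim=0.7):
    # Simplicity: require a clear drop in average lexical difficulty.
    if avg_difficulty(src, segment, hsk_level) - avg_difficulty(tgt, segment, hsk_level) < min_drop:
        return False
    # Fluency: discard disfluent outputs whose perplexity exceeds a preset threshold.
    if ppl(tgt) > max_ppl:
        return False
    # Semantic retention: require sufficient cosine similarity (criterion direction assumed).
    emb = st_model.encode([src, tgt], convert_to_tensor=True)
    return float(util.cos_sim(emb[0], emb[1])) >= min_sim
```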
Large Language Models
We chose two advanced large language models to conduct experiments: gpt-3.5-turbo and text-davinci-003.
Both of them are based on GPT-3.5.
The former is the most capable GPT-3.5 model and is optimized for chatting.
The latter is the previous model, which can execute any language task according to instructions.
We translated the simplification prompt used by Feng et al. <cit.> and used it as our prompt.
More details about the prompt can be found in Table <ref>.
The experiment was conducted under the zero-shot setting.
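For reference, a zero-shot query to the two models could look like the sketch below, using the pre-1.0 openai Python client that was current at the time. The prompt string is a placeholder, not the translated prompt from Table <ref>, and parameters such as temperature are assumptions.

```python
import openai  # pre-1.0 style client interface

openai.api_key = "YOUR_API_KEY"

# Placeholder zero-shot instruction; the actual prompt is the translated version
# of Feng et al.'s prompt referenced in the text.
PROMPT = "请将下面的句子改写得更简单易懂,保留原意:\n{sentence}"

def simplify_chat(sentence: str) -> str:
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": PROMPT.format(sentence=sentence)}],
        temperature=0,
    )
    return resp["choices"][0]["message"]["content"].strip()

def simplify_davinci(sentence: str) -> str:
    resp = openai.Completion.create(
        model="text-davinci-003",
        prompt=PROMPT.format(sentence=sentence),
        max_tokens=256,
        temperature=0,
    )
    return resp["choices"][0]["text"].strip()
```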
§.§ Automatic Metrics
Following previous work, we choose three metrics for evaluation: SARI <cit.>, BLEU <cit.> and HSK Level <cit.>.
SARI
SARI <cit.> is a commonly used evaluation metric for text simplification.
Comparing system outputs to multiple simplification references and the original sentences, SARI calculates the mean of the n-gram F1 scores of add, keep, and delete.
In our experiment, we tokenize sentences using Stanford CoreNLP[<https://github.com/stanfordnlp/CoreNLP>] and use the EASSE toolkit [<https://github.com/feralvam/easse>] <cit.> to calculate SARI.
BLEU
BLEU (Bilingual Evaluation Understudy) <cit.> was initially used to evaluate the quality of machine translation.
By calculating the N-gram and counting the times that can be matched,
BLEU can reflect the closeness between system outputs and references.
Just like calculating SARI, we use the EASSE toolkit to calculate the BLEU score.
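A minimal evaluation sketch is shown below. The corpus_sari call follows the EASSE interface as we recall it and should be checked against the toolkit; BLEU is computed here via sacrebleu (which EASSE builds on) as a stand-in, and the pre-tokenized placeholder sentences stand for the CoreNLP-tokenized system outputs and the five MCTS references.

```python
from easse.sari import corpus_sari  # assumed EASSE entry point; verify the signature
import sacrebleu

# Pre-tokenized (space-separated) sentences; placeholders only.
orig_sents = ["这 是 一 个 复杂 的 句子 。"]
sys_sents  = ["这 是 一 个 简单 的 句子 。"]
# Assumed shape: one inner list per reference set (MCTS provides 5 references).
refs_sents = [["这 是 个 简单 句子 。"]] * 5

sari = corpus_sari(orig_sents=orig_sents, sys_sents=sys_sents, refs_sents=refs_sents)
bleu = sacrebleu.corpus_bleu(sys_sents, refs_sents).score
print(f"SARI = {sari:.2f}, BLEU = {bleu:.2f}")
```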
HSK Level
In order to measure the complexity of Chinese sentences, we adopt the HSK Level metric.
HSK is the Chinese proficiency test designed for non-native speakers [<https://www.chinesetest.cn>].
It provides a vocabulary of nine levels from easy to difficult.
Following previous work <cit.>, we count the proportion of words at levels 1-3 and
7+ in system outputs.
The higher the proportion of words at levels 1-3 (7+), the easier (harder) the outputs are to understand.
Our specific implementation of this metric is the same as that of Kong et al. <cit.>.
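The HSK Level metric reduces to counting level proportions over the segmented system outputs; a hedged sketch, with the level lookup left as an assumed helper, is given below.

```python
def hsk_proportions(tokens, hsk_level):
    """Proportions of tokens at HSK levels 1-3 (easy) and 7+ (hard).

    `hsk_level` is an assumed lookup returning an integer level (1-9) for a word;
    handling of out-of-vocabulary words is left out of this sketch.
    """
    levels = [hsk_level(tok) for tok in tokens]
    easy = sum(1 for lv in levels if lv <= 3) / len(levels)
    hard = sum(1 for lv in levels if lv >= 7) / len(levels)
    return easy, hard
```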
§.§ Human Evaluation
In order to obtain more comprehensive evaluation results, we further conduct human evaluation.
Following the previous work <cit.>, we evaluate the Chinese text simplification systems on three dimensions:
* Fluency: Is the output grammatical?
* Adequacy: How much meaning from the original sentence is preserved?
* Simplicity: Is the output simpler than the original sentence?
We provide simplifications generated by different systems for the recruited volunteers.
And we ask the volunteers to fill out a five-point Likert scale (1 is the worst, 5 is the best) about these simplifications for each dimension.
Additionally, following Feng et al. <cit.>, we capture the volunteers' subjective preferences by having them rank the simplifications, focusing on actual usage rather than evaluation criteria.
§ RESULTS
We divide all the 723 sentences in MCTS into two subsets: 366 for validation and 357 for testing the Chinese text simplification models.
In this section, we report the evaluation results on the test set of MCTS.
§.§ Results of Automatic Evaluations
The results of automatic evaluations are shown in Table <ref>.
In addition to the model results, we also report the score of the source and gold reference.
The source scores are calculated on the unedited original sentence.
And we calculate the gold reference scores by evaluating each reference against all others in a leave-one-out scenario and then averaging the scores.
To our surprise, direct back translation gets the best SARI score among the unsupervised methods.
However, regarding the HSK level, direct back translation performs poorly, even worse than the unedited source.
We find that many rewrite operations are generated during back translation, which is highly correlated with the SARI score.
However, due to the lack of control over simplicity, direct back translation behaves more like a sentence paraphrasing method than a text simplification method.
This may be why it performs poorly on the HSK level.
The translated Wiki-Large method gets the best BLEU score but the lowest SARI score among all methods.
In fact, the system output has hardly changed compared to the original sentence.
As the unedited source gets the highest BLEU score of 84.75, we believe that BLEU alone cannot serve as a reliable indicator of text simplification, because there is significant overlap between the original sentences and the references.
As for the poor performance of the translated Wiki-Large method, we believe it is due to the large amount of noise contained in the translated training data.
The SARI score of the cross-lingual pseudo data method is 38.49, which is between the other two unsupervised methods.
But it performs better on the HSK level than the other two.
This may be because the model learned simplification knowledge from pseudo data that was transferred from the English text simplification model.
In terms of the large language models, gpt-3.5-turbo performs significantly better than text-davinci-003 and achieves the best scores on SARI and the HSK levels.
However, its performance still falls short of the gold reference.
§.§ Results of Human Evaluations
We conducted human evaluations on three representative methods, namely direct back translation, cross-lingual pseudo data, and gpt-3.5-turbo.
We recruited three volunteers to conduct the evaluation.
All of them have a background in linguistics.
We selected 30 sentences from the test set of MCTS for each volunteer and provided them with the original sentences and the outputs of these methods.
For the convenience of comparison, a randomly selected reference for each sentence was additionally provided.
Volunteers were asked to rate the simplification of these four groups.
The results of the human evaluation are shown in Table <ref>.
We can see that the gold reference gets the best average score and rank.
It is significantly superior to the output results of other simplification systems.
In detail, it obtains the best simplicity score (4.20) and the best fluency score (4.68).
Due to some degree of sentence compression, its adequacy score of 4.31 is not the best.
As for the direct back translation method, despite its excellent performance in adequacy, it achieves the lowest simplicity score due to the lack of corresponding control measures.
In contrast, the cross-lingual pseudo data method performs well in terms of simplicity but poorly in terms of adequacy, because it tends to perform more sentence compression, which removes a lot of semantic information.
These two unsupervised methods get a similar average score and rank score.
The gpt-3.5-turbo model achieves the second-best results on all metrics.
The average score and the rank score show that it is significantly better than the two unsupervised simplification methods.
However, a certain gap to the gold reference remains.
Our experiment has shown that under the zero-shot setting, there is still room for further improvement in the large language model's Chinese text simplification ability.
§ CONCLUSION
In this paper, we introduced the MCTS, a human-annotated dataset for the validation and evaluation of Chinese text simplification systems.
It is a multi-reference dataset that contains multiple rewriting transformations.
By calculating the low-level features for simplifications, we have shown the rich simplifications in MCTS, which may be of great significance for understanding the simplification and readability of Chinese text from a linguistic perspective.
Furthermore, we tested the Chinese text simplification ability of some unsupervised methods and advanced large language models using the proposed dataset.
We found that even advanced large language models are still inferior to human simplification under the zero-shot setting.
Finally, we hope our work can motivate the development of Chinese text simplification systems and provide references for future research.
§ ACKNOWLEDGMENTS
This work was supported by the Fundamental Research Funds for the Central Universities, and the Research Funds of Beijing Language and Culture University (No. 23YCX131).
|
http://arxiv.org/abs/2306.10818v1
|
20230619100947
|
Advancements of $γ$-ray spectroscopy of isotopically identified fission fragments with AGATA and VAMOS++
|
[
"A. Lemasson",
"J. Dudouet",
"M. Rejmund",
"J. Ljungvall",
"A. Görgen",
"W. Korten"
] |
nucl-ex
|
[
"nucl-ex"
] |
e1e-mail: [email protected]
GANIL, CEA/DRF-CNRS/IN2P3, Bd Henri Becquerel, BP 55027, F-14076 Caen Cedex 5, France
Université de Lyon 1, CNRS/IN2P3, UMR5822, IP2I, F-69622 Villeurbanne Cedex, France
IJCLab, Université Paris-Saclay, CNRS/IN2P3, F-91405 Orsay, France
Department of Physics, University of Oslo, PO Box 1048 Blindern, N-0316 Oslo, Norway
IRFU, CEA, Université Paris-Saclay, 91191, Gif-sur-Yvette, France
"A. Lemasson et al"
"Advancements of γ-ray spectroscopy
of isotopically identified fission fragments with AGATA and VAMOS++"
Advancements of γ-ray spectroscopy
of isotopically identified fission fragments with AGATA and VAMOS++
A. Lemassonaddr1, e1
J. Dudouetaddr2
M. Rejmundaddr1
J. Ljungvalladdr3
A. Görgenaddr4
W. Kortenaddr5
July 31, 2023
=============================================================================================================
γ-ray spectroscopy of fission fragments is a powerful method for studies of nuclear structure
properties. Recent results on the spectroscopy of fission fragments,
using the combination of the AGATA γ-ray tracking array and the VAMOS++
large acceptance magnetic spectrometer at GANIL, are reported.
A comparison of the performance of the large germanium detector arrays
EXOGAM and AGATA illustrates the advances in γ-ray spectroscopy of fission fragments.
Selected results are highlighted for prompt γ-ray spectroscopy studies,
measurements of short lifetimes of excited states with the Recoil Distance Doppler-Shift method,
using both AGATA and VAMOS++ and prompt-delayed γ-ray spectroscopy studies using AGATA, VAMOS++ and EXOGAM.
§ INTRODUCTION
Nuclear fission is one of the most effective ways of producing and studying
neutron-rich exotic isotopes. Fission fragments cover a wide range of the nuclear
chart and exhibit a variety of
phenomena ranging from single-particle excitations, near shell closures, to collective
excitations related to nuclear vibrations
or deformations. The γ-ray spectroscopy of fission fragments can be used to probe
the evolution of nuclear structure properties
as a function of excitation energy, angular momentum and neutron-proton
asymmetry <cit.>.
The prompt γ-rays emitted by the secondary fission fragments, as they de-excite to their
ground states, provide detailed insight into
the structure of nuclei at large spin and isospin. The prompt γ-rays are emitted on a very
short time scale (less than a few nanoseconds) after scission, although isomers can sometimes delay the decay
process <cit.>. The study of prompt γ rays faces the challenge of identifying a
particular γ-ray transition among all γ rays emitted by few hundred of fission
fragments produced in a single experiment.
The use of known characteristic γ rays, in the fragment of interest or in the complementary
partner fragment, has been proven to be a
powerful tool for characterisation of fission fragments <cit.>.
Experiments making use of high-fold γ-ray coincidence techniques, involving the
Gammasphere <cit.>, EUROGAM 2 <cit.> and EUROBALL <cit.> arrays,
to study fission fragments produced
in either spontaneous-fission process or in in-beam heavy-ion
induced fission reactions using stable beams, were used to cover a broad range of topics in nuclear
structure <cit.>.
More recently, a similar approach was used in conjunction with fission induced by cold and fast neutrons <cit.>.
The necessity of knowledge of the characteristic γ rays could be overcome by employing the isotopic identification techniques of fission
fragments using large acceptance magnetic spectrometers such as VAMOS++ <cit.> and PRISMA <cit.>.
The use of fission induced by reactions in inverse kinematics in conjunction with these large acceptance spectrometers resulted in a higher detection efficiency. This combination has opened new opportunities to study prompt and delayed γ rays emitted by fission fragments <cit.>.
The use of the VAMOS++ spectrometer with the EXOGAM <cit.> large γ-ray array allowed the
first assignment of prompt
γ rays in several members of the isotopic chains of Ag <cit.>, Rh <cit.>,
Cd and In <cit.>. The combination of prompt γ-ray data-sets
obtained in coincidence with the VAMOS++ magnetic spectrometers
with those obtained using high fold γ-ray techniques using Gammasphere allowed extended
studies in the isotopic chains of Y <cit.>, Pr <cit.>
and Pm <cit.>, which illustrate the complementarity of both
methods. Furthermore, the experiments with VAMOS++
and EXOGAM using the Recoil Distance Doppler-Shift Method (RDDS) <cit.>
allowed the measurement of lifetimes of excited states in isotopes of Zr <cit.>,
Y and Nb <cit.> and Tc and Rh <cit.>.
The advent of the new generation of γ-ray tracking arrays AGATA <cit.>
and GRETINA <cit.> allows an improved
determination of the spatial position of the first interaction point of each γ-ray
in the detector and an increase of the operating counting rate
with larger γ-ray multiplicities. Thus the increased effective granularity results
in an improved Doppler correction capability of the energy of the
γ-rays emitted by nuclei in flight, provided that the velocity vector
v of the recoiling fragment is measured on an event-by-event basis
with sufficient precision. To ensure that the final Doppler-corrected γ-ray energy
resolution only arises from the γ-ray tracking capabilities,
a resolution in the scattering angle of the fragment better than
1^∘ and a resolution in the interaction point at the target better than
1 mm are required.
Furthermore, the continuous angular coverage of γ-ray tracking arrays provides new opportunities for lifetime measurement in the picosecond range based on
Doppler-Shift method <cit.>.
This paper presents recent results on the spectroscopy of fission fragments
using the AGATA γ-ray tracking array combined with the VAMOS++
large acceptance spectrometer at GANIL <cit.>.
The presented data arise from four experiments (whose main characteristics
are described in Table <ref>) that can be summarised as follows:
- Exp. 1: Prompt-delayed γ-ray spectroscopy of
^122-131Sb <cit.>,
^119-121In <cit.>, ^130-134I <cit.> and experimental methods <cit.>,
- Exp. 2: Prompt γ-ray spectroscopy of ^96Kr <cit.> and ^81Ga <cit.>,
^83,85,87As <cit.>,
- Exp. 3: Lifetime measurements in ^84Ge, ^88Kr,
^86Se <cit.>,
- Exp. 4: Lifetime measurements in neutron-rich Zr, Mo and Ru <cit.>.
The performance of VAMOS++ for the isotopic identification of fission fragments produced in inverse kinematics is presented in Sec. <ref>.
The Doppler correction of the γ-ray energy is discussed in Sec. <ref>.
A comparison of performances for the spectroscopy of fission fragments
between EXOGAM and AGATA is used to illustrate advances in fission fragment spectroscopy.
Recent results for prompt γ-ray spectroscopy are highlighted in Sec. <ref> and for the measurement of lifetimes of excited states in Sec. <ref>. Finally, Sec. <ref> presents results from prompt-delayed γ-ray spectroscopy with combinations of AGATA and EXOGAM with VAMOS++.
§ ISOTOPIC IDENTIFICATION OF FISSION FRAGMENTS
The fission fragments were typically produced in fusion and transfer induced fission
by a ^238U beam at the energy of 6.2 MeV/A on
a ^9Be target of typical thickness ranging from 1.6 μm to 10 μm.
A typical beam intensity of ∼ 1 pnA was used.
Fission fragments were isotopically identified in terms of atomic number Z, mass number A
and atomic charge q, in the VAMOS++ spectrometer, placed at angles between 20 and 28
degrees depending on the fragments of interest. One of the two emitted fragments is
detected and isotopically identified in the VAMOS++.
The VAMOS++ focal plane detection system consisted of a multi-wire proportional counter (MWPC)
(stop of the time-of-flight of the ion), two drift chambers (horizontal and vertical
tracking of the fragment trajectory, X, θ, Y, ϕ) and a segmented ionisation chamber
(energy loss and energy of the ion, Δ E, E).
The ionisation chamber was filled with CF_4 gas at pressures between 70-100 mbar, depending on the ions of interest.
A dual position sensitive MWPC <cit.> (DPS-MWPC) (start of the time-of-flight,
horizontal and vertical tracking of the fragment trajectory, θ_target, ϕ_target) was placed at the entrance of the spectrometer. The MWPCs and drift chambers were filled
with isobutane gas at a pressure of 6 mbar. The fission fragments were
implanted in the gas inside the ionisation chamber. The atomic number Z of the ions was obtained using the Δ E-E correlation technique.
The mass number A was obtained from the reconstructed magnetic rigidity, flight path and the measured time-of-flight. Details
on the identification techniques and performances can be found in Ref. <cit.>, while details on the acceptance of the spectrometer for fission reactions are described in Ref. <cit.>. Typical fission fragment rates in the VAMOS++ focal plane ranged between 5 and 10 kHz and were limited by pileup in the drift chambers and the ionisation chamber.
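For orientation, the relation behind the mass identification can be written as Bρ = p/q = γ m v / q, so that M/q follows from the reconstructed magnetic rigidity and the velocity obtained from the flight path and time-of-flight. The sketch below is illustrative only, with invented input values of the right order of magnitude; the actual analysis relies on the full trajectory reconstruction in VAMOS++.

```python
import numpy as np

E_CHARGE = 1.602176634e-19    # C
AMU      = 1.66053906660e-27  # kg
C_LIGHT  = 2.99792458e8       # m/s

def mass_over_charge(brho_tm, path_m, tof_s):
    """M/q in atomic mass units per unit charge, from B*rho, flight path and ToF.

    Uses B*rho = p/q = gamma * m * v / q; the relativistic correction is small
    at fission-fragment velocities (~0.1c) but kept for completeness.
    """
    v = path_m / tof_s
    gamma = 1.0 / np.sqrt(1.0 - (v / C_LIGHT) ** 2)
    return brho_tm * E_CHARGE / (gamma * v * AMU)

# Illustrative numbers only: Brho ~ 1.1 T*m, D ~ 7.6 m, ToF ~ 220 ns.
print(mass_over_charge(1.1, 7.6, 220e-9))  # ~ 3, e.g. A ~ 100 with q ~ 33
```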
In Fig. <ref> the typical identification spectra obtained for fission fragments using VAMOS++
are shown. The two-dimensional correlations are shown in panel (a) energy loss versus total energy (Δ E vs. E) and (b) atomic charge versus mass-over-charge (q vs. M/q).
The corresponding one-dimensional spectra are shown in panel (c) atomic number (Z) and (d)
atomic mass (A). The data is taken from the
Exp. 1, see Table <ref>.
The velocity vector v
of the fragment was measured using the DPS-MWPC detector as described in Ref. <cit.>. Figure <ref>(a)
shows the correlation between
the angle of the fragment in the laboratory system (θ_L) detected in VAMOS++
and its velocity v for fission fragments with the atomic number Z=40, 50 and 60. The data is taken from the Exp. 1, see Table <ref>.
The typical portion of the kinematics of fission fragments measured can be seen in the figure.
The much stronger kinematic focusing of the heavier fragments with respect to the lighter ones
can be seen. Figure <ref>(b) shows
the correlation of the angle α, between the γ-ray emission vector
v_γ versus the velocity vector v of the detected fragment
(see also Sec. <ref>), and the velocity v.
In Fig. <ref> one can observe the angular opening of VAMOS++ (panel (a)) and AGATA (panel (b)).
Also, from the range of the velocity v of 2.9 - 4 cm/ns and the mean flight path in VAMOS
D=760 cm, one can infer the typical time-of-flight from the target to the focal plane of
190 - 260 ns.
§ DOPPLER CORRECTION OF Γ RAY ENERGY
The prompt γ rays (γ_P), emitted near the target position were detected
by the AGATA <cit.> γ-ray tracking array and acquired in coincidence with fragment detected in VAMOS.
The array was placed at a distances from the target ranging from 13.5 cm to 23.5 cm
depending on the configuration used, see Ref. <cit.> for details of the different configurations.
The detection efficiency of the AGATA array in the different configurations is discussed in Refs. <cit.>.
Typical counting rates in the AGATA detectors were ∼ 20 kHz per crystal.
The AGATA array holding structure and the VAMOS++ spectrometer were supported on a common platform
which could rotate around a vertical axis perpendicular to the beam axis at the
target position.
The AGATA detectors typically covered angles from ∼ 100^∘ to ∼ 170^∘, relative to the axis of the VAMOS++ spectrometer.
The γ-ray emission vector v_γ was determined using the first
three-dimensional interaction point of the γ-ray in the AGATA array, obtained from pulse shape analysis and tracking
procedures <cit.>. A typical position resolution of
∼ 5 mm (FWHM) <cit.> has been
reported for γ-ray energies around 1.3 MeV. The measured velocity vector of the detected fragment
v and γ-ray emission vector v_γ were
used to derive the Doppler-corrected γ-ray energy on an event-by-event basis.
The Doppler correction was obtained using the following relationship
E_RF = E_LAB· (1-β· cos(α)) ·γ
where: E_RF and E_LAB are respectively the energies of the
γ ray in the rest frame of the nucleus and in the laboratory
system, β = v/c, γ = 1/√(1 - β^2) and
α is the angle between vector v and
v_γ. It should be noted that
γ rays emitted by the complementary fragment will have an
incorrect Doppler correction applied, resulting in Doppler-broadened
and shifted peaks that contribute to the background of the
spectra. It was demonstrated in Refs. <cit.>
that Doppler correction for the binary partner can be achieved using
two-body kinematics derived from the velocity vector of the measured
fragment.
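In code, the event-by-event correction of the relationship above amounts to the following sketch; the fragment velocity vector and the first-interaction position are taken as given inputs, and the numerical example is illustrative only.

```python
import numpy as np

C_CM_PER_NS = 29.9792458  # speed of light in cm/ns

def doppler_correct(e_lab_kev, v_frag_cm_ns, r_gamma_cm):
    """Event-by-event Doppler correction E_RF = E_LAB * gamma * (1 - beta*cos(alpha)).

    v_frag_cm_ns : fragment velocity vector (cm/ns), e.g. from the DPS-MWPC
    r_gamma_cm   : first interaction point of the gamma ray relative to the target
                   position (cm), from AGATA pulse-shape analysis and tracking
    """
    beta_vec = np.asarray(v_frag_cm_ns) / C_CM_PER_NS
    beta = np.linalg.norm(beta_vec)
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    cos_alpha = np.dot(beta_vec, r_gamma_cm) / (beta * np.linalg.norm(r_gamma_cm))
    return e_lab_kev * gamma * (1.0 - beta * cos_alpha)

# Illustrative: v ~ 3.5 cm/ns along the spectrometer axis, gamma detected at 140 deg.
v = np.array([0.0, 0.0, 3.5])
r = 20.0 * np.array([np.sin(np.radians(140)), 0.0, np.cos(np.radians(140))])
print(doppler_correct(1000.0, v, r))  # ~ 1097 keV in the rest frame
```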
§ PROMPT-Γ-RAY SPECTROSCOPY
§.§ AGATA versus EXOGAM
To illustrate the performances obtained with AGATA for the prompt-γ-ray
spectroscopy of fission fragments, the Fig. <ref> shows
Doppler-corrected γ-ray spectra in coincidence with isotopically identified
^98Zr in VAMOS++. Figure <ref>(a) shows the prompt
γ rays measured with the EXOGAM array <cit.> consisting of
11 Compton-suppressed segmented clover HPGe detectors (15 cm away from the target),
in coincidence with the isotopically identified fragments.
The velocity vector v of the fragment
along with the position of the center of the electrical segment of the clover detector that had registered the
highest energy deposit were used to obtain the γ-ray energy
in the rest frame of the emitting fragment <cit.>.
Figure <ref>(b) shows the prompt γ rays measured with the AGATA array
in coincidence with the isotopically identified fragments. The data is taken from Exp. 1, see Table <ref>.
The comparison of the spectra clearly illustrates the improved γ-ray energy resolution arising
from the position resolution of the first interaction point derived in AGATA.
The 1222.9 keV γ ray, 2^+ → 0^+ transition in ^98Zr,
can be used to evaluate the obtained energy resolution of Doppler-corrected spectra.
Considering all clovers from EXOGAM (including 90^∘ and
135^∘ rings), a resolution of 15 keV was obtained. Considering only backward angles in EXOGAM, a resolution of 7 keV was measured. This is to be
compared with a resolution of 5 keV obtained with AGATA for angular coverage ranging from 100^∘ to 170^∘. The improved resolving power can be further seen in the insets of Fig. <ref>, where within an expanded region of the γ-ray
spectra several weak transitions, unresolved using the EXOGAM array, could be resolved
using the AGATA array.
§.§ γ-ray spectroscopy of ^96Kr
The sudden appearance of the onset of the collectivity at N=60 has been
one of the early successes of the γ-ray spectroscopy of fission fragments.
After decades of studies establishing the sudden transition towards the deformation
at N=60 in Zr (Z=40) and Sr (Z=38), the detailed description of this island
of deformation still challenges theoretical models. In Ref. <cit.>,
the prompt γ-ray spectroscopy of the neutron-rich ^96Kr (Z=38 and N=60)
has contributed to delineate the limits of this island of deformation.
The nucleus of interest, ^96Kr, was produced in transfer- and fusion-induced
fission processes, using the ^238U beam impinging on a ^9Be target.
This experiment is referred to as Exp. 2,
see Table <ref>.
The prompt Doppler-corrected γ-ray spectrum measured in coincidence with
^96Kr isotopically identified in VAMOS++ is shown in Fig <ref>(a).
Three γ-ray transitions at the energies of 554(1) keV, 621(2) keV,
and 515(2) keV can be seen in the spectrum. The 554 keV transition confirms the
excitation energy of the first 2^+ state of
Ref. <cit.>.
The 621 keV transition was observed in coincidence with the 554 keV
transition as can be seen in the inset in Fig.<ref>(a).
It was interpreted as the transition depopulating the first 4^+ excited state
at the energy of 1175(3) keV.
Because of the limited statistics, it was not possible to obtain a significant
coincidence analysis for the 515 keV transition which was not placed in
the level scheme. The presence of low-lying 2^+_2
excited states in the Kr isotopic chain suggests
the possible assignment of the 515 keV γ ray to the
2^+_2 → 2^+_1 transition.
Recently, the spectroscopy of ^96Kr from knock-out and inelastic
reactions was reported <cit.>. A 888 (16) keV state,
decaying by 887^+24_ -23 keV and 334 (16) keV γ-ray
transitions, was tentatively assigned to the 2^+_2 state based on
coincidence arguments. These γ-ray transitions were not
observed in Ref. <cit.> due to the limited
statistics. The 515 (2) keV transition was also observed and
reported in coincidence with the 2^+ → 0^+ transition, but it was
not placed in the level scheme either. The nature of the state
depopulated by the 515 keV transition remains an open question to be
addressed.
To understand, quantify, and characterise the evolution
of the nuclear structure along isotopic chains, a systematic study
of the energy ratio R_4/2=E(4^+)/E(2^+) is often used <cit.>.
The newly measured energy of the 4^+ state results in the
energy ratio of R_4/2=2.12(1).
In Fig.<ref>(b) the R_4/2 ratio is shown as a function of
atomic number Z for isotonic chains with N=58, 60 and 62.
It is seen that at the N=58 the nuclei between Kr and Pd exhibit very
little collectivity and are situated between the transitional and
spherical vibrator regime. For N=60 and 62, the collectivity increases
with decreasing Z, reaching the maximum in Sr. In ^96Kr one observes
an abrupt decrease of the collectivity. The R_4/2 obtained for
^98Kr <cit.> follows the same behaviour.
This new measurement highlights an abrupt transition of the degree of
collectivity as a function of the proton number at Z=36.
A possible reason for this abrupt transition could be related to the
insufficiently large amplitude of the proton excitation in
the g_9/2, d_5/2, and s_1/2 orbitals to generate strong
quadrupole correlations or coexistence of competing different shapes.
This measurement established the Kr isotopic chain as the low-Z
boundary of the island of deformation for N=60 isotones.
The comparison with available theoretical predictions using different
beyond mean-field approaches shows that these models fail to reproduce the
abrupt transitions at N=60 and Z=36 and that the precise
description of the region remains challenging. See Ref. <cit.>
for further details.
§.§ γ-ray spectroscopy of ^81Ga
It is agreed that ^78Ni (Z=28, N=50) manifests a doubly magic character.
However, in Ref. <cit.> the sudden emergence of collective states and their
coexistence with the spherical states are predicted. These spherical states
arise mainly from one particle-hole excitations across the magic shell gaps
(Z = 28 and N = 50). Collective states arise from multi particle-hole
excitations, giving rise to a deformed collective band,
providing a striking example of shape coexistence.
The excited states of N = 50 isotones provide complementary insight into the
coupling of single particle-hole configurations with valence protons where the
particle-hole configuration are intimately related to the properties of the N = 50
shell gap.
The high-spin states of the neutron-rich ^81Ga, with three valence protons
outside a ^78Ni core, were measured for the first time <cit.>.
This experiment is referred to as the Exp. 2,
see Table <ref>.
The tracked Doppler-corrected γ-ray spectrum obtained in coincidence with
^81Ga is shown in Fig. <ref>(a).
The inset shows the γ-γ coincidence spectrum gated on the
813.6 keV transition. The derived level scheme is shown in Fig. <ref>(b).
The newly observed high-spin states in ^81Ga are interpreted using the results
of state-of-the-art Large Scale Shell Model (LSSM) calculations <cit.>
using the PFSDG-U interaction, see Fig. <ref>(b).
The lower excitation energy levels are understood as resulting from the recoupling
of three valence protons to the closed doubly magic core, while the highest
excitation energy levels correspond to excitations of the magic N = 50 neutron core.
These results support the doubly magic character of ^78Ni and the persistence of the
N = 50 shell closure, but also highlight the presence of strong proton-neutron
correlations associated with the promotion of neutrons across the magic N = 50
shell gap, only few nucleons away from ^78Ni. See Ref. <cit.>
for further details.
§ LIFETIME MEASUREMENT USING RDDS METHOD
§.§ RDDS method
The Recoil Distance Doppler-Shift method is a well established technique for the
determination of picosecond lifetimes of excited nuclear states.
Traditionally, a nucleus produced in a nuclear reaction in a thin target
leaves the target with a velocity v_t and is stopped after flying trough a well-defined
distance in a stopper foil. The excited state in the nucleus can de-excite by an emission of
a γ ray either in-flight or at rest in the stopper foil. One can then observe
the intensity of either the Doppler-shifted (S) or the unshifted (U) component of
the γ-ray transition, respectively. The lifetime of the corresponding state
can be determined using the so-called decay curve or flight curve, based on the
intensities of the Doppler-shifted and unshifted components.
An alternative procedure called the Differential Decay Curve Method (DDCM) is also used.
See further details in Ref. <cit.>.
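As an illustrative sketch of the decay-curve variant (assuming a single level without feeding from higher-lying states, which is a simplification of the actual analysis), the lifetime can be extracted from the unshifted fraction U/(U+S) measured at several target-to-stopper (or degrader) distances; the distances, fractions and recoil velocity below are hypothetical values.

```python
import numpy as np
from scipy.optimize import curve_fit

# Unshifted fraction R(x) = U/(U+S) = exp(-x / (v * tau)) for a single level without feeding.
def unshifted_fraction(x_um, tau_ps, v_um_per_ps):
    return np.exp(-x_um / (v_um_per_ps * tau_ps))

# Hypothetical distances (um) and measured fractions with uncertainties:
x = np.array([50.0, 150.0, 300.0, 470.0, 800.0])
r = np.array([0.81, 0.54, 0.29, 0.14, 0.04])
r_err = np.full_like(r, 0.03)

v = 30.0  # assumed recoil velocity in um/ps (roughly 0.1c)

popt, pcov = curve_fit(lambda xx, tau: unshifted_fraction(xx, tau, v),
                       x, r, sigma=r_err, absolute_sigma=True, p0=[5.0])
tau, dtau = popt[0], np.sqrt(pcov[0, 0])
print(f"tau = {tau:.1f} +/- {dtau:.1f} ps")
```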
For experiments where the recoiling nuclei of interest are to be detected,
as is the case when using the VAMOS++ spectrometer,
the stopper foil can be replaced by a degrader foil. One can observe
either the γ rays emitted after the target at the recoil velocity
v_t or after the degrader foil at v_d. Typically, the Doppler correction
uses the velocity of the nucleus measured after the degrader v_d, therefore
the γ rays emitted after the degrader are seen as unshifted and those emitted
before the degrader as Doppler-shifted.
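A minimal sketch of such an event-by-event Doppler correction is given below, assuming the standard relativistic formula E_0 = E_lab γ (1 − β cos α) with β = v_d/c; the numerical values are illustrative only.

```python
import numpy as np

def doppler_correct(e_lab_keV, beta, alpha_deg):
    """Return the emitted gamma-ray energy from the measured laboratory energy.

    beta: recoil velocity after the degrader in units of c (v_d / c)
    alpha_deg: angle between the recoil velocity vector and the gamma-ray direction
    """
    gamma = 1.0 / np.sqrt(1.0 - beta ** 2)
    return e_lab_keV * gamma * (1.0 - beta * np.cos(np.radians(alpha_deg)))

# Illustrative event: 1150 keV measured at alpha = 155 deg with beta = 0.10
print(f"{doppler_correct(1150.0, beta=0.10, alpha_deg=155.0):.1f} keV")
```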
In Fig. <ref>(a), the γ-ray energy measured in the laboratory frame is shown as function of the angle α, between the vectors v_d and v_γ,
in coincidence with isotopically identified ^98Zr in VAMOS++ for the target-to-degrader distance of 470 μm in the Exp. 4,
see Table <ref>.
In Fig. <ref>(b) the same events are shown, but a Doppler correction on an
event-by-event basis was applied using the measured velocity vector after the
degrader v_d. The unshifted (U) component, well defined in energy,
can be seen in the figure. The Doppler-shifted (S) component becomes dependent on α and appears at a lower energy because the γ ray was emitted at a larger velocity, v_t > v_d, and the AGATA array was placed at backward angles, α>90^∘.
The use of the AGATA array for lifetime measurements using the RDDS method has several
important assets, namely (i) a very good energy resolution for Doppler-corrected
γ-ray transitions, (ii) a good coverage of the very backward solid angles where the Doppler effect is largest, and (iii) the availability of a continuous measurement of the angle α, see Fig. <ref>.
§.§ AGATA versus EXOGAM
Doppler-corrected γ-ray spectra of several transitions in ^98Zr, isotopically identified in VAMOS++, are shown in Fig. <ref>. Panels (a) and (b) show spectra obtained with AGATA <cit.> from Exp. 4, see Table <ref>. Panels (c) and (d) show corresponding spectra with comparable target-to-degrader distances obtained with EXOGAM <cit.>.
Panels (a) and (c) show the components of the 1223 keV
transition de-exciting the 2^+ state (τ = 3.8 ± 0.8 ps) and panels (b) and (d) the 620 keV and
647 keV transitions de-exciting the 4^+ (τ = 7.5 ± 1.5 ps) and 6^+ (τ = 2.6 ± 0.9 ps) states.
The evolution of the intensity of the Doppler-shifted (S) component relative to the unshifted (U) component, for the transitions
de-exciting the states with different lifetimes, is evident as a function of the
target-to-degrader distance. The improved energy separation between the two components,
due to the improved determination of the first interaction position of the γ ray and thus its energy
resolution, can also be seen in the figure. This results in an improved precision
of the lifetime analysis and of the extracted reduced transition strengths.
§.§ Lifetime measurement of excited states in ^84Ge
The recent intense experimental and theoretical efforts on the investigation of the nuclear
structure in the vicinity of doubly magic ^78Ni (Z=28, N=50), have triggered
experimental measurements of lifetimes of excited states.
In Ref. <cit.>, lifetime measurements of excited states of the light N=52
isotones ^88Kr (Z=36),
^86Se (Z=34), and ^84Ge (Z=32) using the RDDS method with VAMOS++ and AGATA were
reported. The reduced electric quadrupole transition probabilities B(E2,2^+→ 0^+)
and B(E2,4^+→ 2^+) were obtained for the first time for the hard-to-reach ^84Ge.
The nuclei of interest were produced in transfer- and fusion-induced
fission processes, using the ^238U beam impinging on a ^9Be target followed by a Mg
degrader. This experiment is referred to as Exp. 3 (see Table <ref>).
Because of the low statistics, the RDDS-analysis variant developed in
Ref. <cit.> had
to be applied, which consists of summing the statistics obtained over all
distances and determining the lifetime; see Ref. <cit.> for further details.
The obtained B(E2) values are placed in the systematics
of light N=52 isotones in Fig. <ref>,
where a comparison with several calculations is also provided.
Shell-model results from Ref. <cit.> (open circles), assuming an inert
^78Ni core, are in excellent agreement with the experimental values (closed circles)
obtained for ^88Kr and ^86Se. Interestingly enough, both shell-model and
experimental values exhaust the limit for pure pseudo-SU(3) symmetry (down triangles)
for these two isotones. This clearly means that the quadrupole coherence offered by this
subspace is maximally expressed in these two nuclei. In contrast, both shell-model and
pseudo-SU(3) values barely reach the lower tip of the experimental error bar for ^84Ge.
The more in-depth analysis of the experimental data in Ref. <cit.> suggests,
for the first time, a shape transition from Z=34 (soft triaxial) to Z=32 (prolate deformed),
a result all the more unexpected as the shell model predicts a "fifth island of inversion"
only for much lighter (Z < 28) systems <cit.>.
§ PROMPT-DELAYED Γ-RAY SPECTROSCOPY
§.§ Experimental method
A new experimental setup to measure prompt-delayed γ-ray coincidences from
isotopically identified fission fragments, over a wide time range, 100 ns - 200 μs,
is presented in Ref. <cit.>. The fission fragments are isotopically identified,
on an event-by-event basis, using the VAMOS++ large acceptance spectrometer.
The prompt γ rays (γ_P) emitted near the target were detected using
the AGATA γ-ray tracking array.
The fission fragments, reaching the focal plane after a typical time-of-flight
of ∼ 200 ns, were stopped in the ionisation
chamber. Delayed γ rays (γ_D) were detected using seven EXOGAM
HPGe Clover detectors <cit.> arranged in a wall-like
configuration at the focal plane of the VAMOS++
spectrometer. A 2 mm thick aluminium window between the ionisation chamber and the Clover
detectors was used to minimise the attenuation of the
emitted γ rays. A 3 mm thick lead shielding was placed
after and in-between the Clover detectors to minimise the
events arising from the room-background and Compton
scattering between the Clover detectors.
The details of the experimental setup and analysis methods are discussed in Ref. <cit.>.
The results obtained using this experimental set-up will be illustrated based on the
case of the well-studied ^132Te <cit.>.
Figure <ref>(a) shows the partial level scheme of ^132Te
(below 4.3 MeV). Several different isomeric states have been reported for this nucleus, in particular, the 7^- excited state at 1925 keV
with a half-life of t_1/2 = 28.1 μs <cit.>.
In earlier works, the 1040 keV and 292 keV prompt transitions were observed and tentatively placed in the level
scheme as feeding the 7^- state <cit.>.
However, the prompt-delayed correlation between the γ rays
populating and depopulating the 7^- state could not be observed.
In Fig. <ref>(b) the Doppler corrected prompt γ ray (γ_P) spectrum
observed in coincidence with delayed E_γ_D = 151 keV γ-ray, which depopulates
the 7^- isomeric state, is shown. The prompt γ rays 292 keV, 624 keV,
900 keV and 1040 keV, are seen in the spectrum. Figure <ref>(c)
shows the delayed γ-ray spectrum in coincidence with the prompt E_γ_P at 1040 keV.
The delayed γ rays 103 keV, 151 keV,
697 keV and 974 keV, are seen in the spectrum.
This measurement experimentally confirms the proposed level scheme and illustrates the capabilities of the VAMOS-AGATA-EXOGAM setup to properly correlate prompt and delayed γ rays across a long-lived isomer. Further details are discussed in Ref. <cit.>.
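As an illustration of how such a prompt-delayed gate can be applied in software (the event structure and function names below are assumptions for this sketch, not the actual analysis code), one can select events containing a delayed transition within an energy and time window and histogram the coincident prompt energies.

```python
import numpy as np

def gated_prompt_spectrum(events, gate_keV, gate_width_keV=2.0,
                          t_range_us=(0.1, 200.0), nbins=4096, e_max_keV=4096.0):
    """Histogram prompt gamma-ray energies for events with a delayed gamma inside the gate."""
    lo, hi = gate_keV - gate_width_keV, gate_keV + gate_width_keV
    selected = []
    for ev in events:  # each event: an isotopically identified fragment with gamma-ray lists
        delayed_ok = any(lo <= e <= hi and t_range_us[0] <= t <= t_range_us[1]
                         for e, t in zip(ev["delayed_keV"], ev["delayed_t_us"]))
        if delayed_ok:
            selected.extend(ev["prompt_keV"])
    return np.histogram(selected, bins=nbins, range=(0.0, e_max_keV))

# Example: gate on the delayed 151 keV transition below the 7- isomer in 132Te
# counts, edges = gated_prompt_spectrum(te132_events, gate_keV=151.0)
```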
§.§ Prompt-delayed γ-ray spectroscopy of neutron-rich isotopes of Sb
The Z=50 shell closure, near N=82, is unique in the sense that it is the only
shell closure with the spin-orbit partner high-spin orbitals, πg_9/2 and πg_7/2,
enclosing the magic gap. The interaction of the proton hole/particle in the above-mentioned
orbitals with neutrons in the high-spin νh_11/2 orbital is an important prerequisite
to the understanding of the nuclear structure near N=82 and establishing the features of
the νπ interaction.
Prompt-delayed γ-ray spectroscopy of high-spin states in neutron-rich
^122-131Sb (Z=51) <cit.>,
^130-134I (Z=53) <cit.> and ^119,121In (Z=49) <cit.>,
using the unique experimental setup combining AGATA, VAMOS++ and EXOGAM, was reported. This experiment is
referred to as Exp. 1 (see Table <ref>).
In this section we will restrict ourselves to the isotopes of Sb <cit.>.
Figure <ref> shows the level schemes of ^125–128Sb.
The newly observed γ-ray transitions above and below the isomer are indicated in red and blue,
respectively. Previously known half-lives have been remeasured and are underlined
by a red line, whereas the newly measured half-lives are marked with a red box.
The wealth of the new experimental data obtained in Ref. <cit.> can be clearly seen from the figure.
The experimental data was compared with theoretical results obtained from LSSM.
A consistent agreement with the excitation energies
and the B(E2) transition probabilities in neutron-rich Sn and Sb isotopes was obtained.
The isomeric configurations in Sn and Sb were found to be relatively pure.
The LSSM calculations revealed that the presence of a single valence proton,
mainly in the πg_7/2 orbital in Sb isotopes, leads to significant mixing,
due to the νπ interaction, of
(i) the neutron seniorities (υ_ν)[υ_ν stands for neutron seniority,
which refers to the number of unpaired neutrons] and (ii) the neutron angular momentum (I_ν).
The above features have a weak impact on the excitation energies, but have an important impact
on the nuclear wave function of the excited states and thus on the corresponding B(E2)
transition probabilities.
In addition, a striking constancy of the energy differences associated with
an increase in the number of broken neutron pairs was observed.
A plot of such energy differences in ^119–130Sn
and ^122–131Sb isotopes is shown in Fig. <ref>.
This figure shows that the average energy for the
breaking of the first and second pair of neutrons is ∼ 1.1 MeV,
and that this energy is constant (with a deviation of ∼100 keV) for a wide
range of mass numbers, irrespective of the excitation energy
and mixing of neutron seniorities (υ_ν) in the case of Sn and
Sb. In addition, it follows the behavior of even-A Sn isotopes
for E(2^+ → 0^+). Further details are discussed in Ref. <cit.>.
§ SUMMARY AND CONCLUSIONS
Among a large variety of experiments performed at GANIL using the AGATA γ-ray array,
four have focussed on the nuclear structure studies of isotopically identified fission
fragments employing the VAMOS++ magnetic spectrometer in coincidence.
The combination of the AGATA γ-ray array with the VAMOS++ spectrometer forms a unique,
highly performant experimental setup combining efficiency
with counting rate capabilities, as well as selectivity with excellent Doppler
correction of γ-ray energy and precise isotopic identification.
The performed experiments have been very fruitful and numerous pertinent results have been
obtained including the γ-ray spectroscopy of ^96Kr <cit.>, ^81Ga <cit.>, ^83,85,87As <cit.>, lifetime
measurements of excited states using the RDDS method in ^84Ge, ^88Kr,
^86Se <cit.>, neutron-rich Zr, Mo and
Ru <cit.> and prompt-delayed γ-ray
spectroscopy, using the EXOGAM array for delayed γ rays,
of ^122-131Sb <cit.>,
^119-121In <cit.> and ^130-134I <cit.>.
In the future, the fission-fragment spectroscopy program will be pursued
at LNL using the combination of AGATA and the PRISMA <cit.>
spectrometer. The ongoing development of ^238U beams at energies
around the Coulomb barrier will extend measurements using inverse
kinematics reactions in addition to the presently available ^208Pb
beams. Further, the increased number of available AGATA crystals will
make it possible to cover a 2π solid angle at the nominal detector distance,
effectively doubling the solid-angle coverage compared to the
experiments presented in this work. This will improve the
γ-γ coincidence efficiency, allowing the investigation of exotic neutron-rich nuclei
by fission-fragment spectroscopy to be expanded.
§ ACKNOWLEDGMENTS
The authors thank the AGATA collaboration, the e661, e680, e669 and
e706 GANIL experimental collaborations and the technical teams at Grand
Accélérateur National d’Ions Lourds for their support during the
experiments. A.G. has received funding from the Norwegian Research
Council, project 325714.
|
http://arxiv.org/abs/2306.02651v1
|
20230605073441
|
Dynamic Interactive Relation Capturing via Scene Graph Learning for Robotic Surgical Report Generation
|
[
"Hongqiu Wang",
"Yueming Jin",
"Lei Zhu"
] |
cs.CV
|
[
"cs.CV",
"cs.LG"
] |
For robot-assisted surgery, an accurate surgical report reflects clinical operations during surgery and helps document entry tasks, post-operative analysis and follow-up treatment.
It is a challenging task due to many complex and diverse interactions between instruments and tissues in the surgical scene.
Although existing surgical report generation methods based on deep learning have achieved great success, they often ignore the interactive relation between tissues and surgical instruments, thereby degrading the report generation performance.
This paper presents a neural network to boost surgical report generation by explicitly exploring the interactive relation between tissues and surgical instruments.
To do so, we first devise a relational exploration (RE) module to model the interactive relation via graph learning, and an interaction perception (IP) module to assist the graph learning in RE module.
In our IP module, we first devise a node tracking system to identify and append missing graph nodes of the current video frame for constructing graphs at RE module.
Moreover, the IP module generates a global attention model to indicate the existence of the interactive relation on the whole scene of the current video frame to eliminate the graph learning at the current video frame.
Furthermore, our IP module predicts a local attention model to more accurately identify the interaction relation of each graph node for assisting the graph updating at the RE module.
After that, we concatenate features of all graph nodes of RE module and pass concatenated features into a transformer for generating the output surgical report.
We validate the effectiveness of our method on a widely-used robotic surgery benchmark dataset, and experimental results show that our network can significantly outperform existing state-of-the-art surgical report generation methods (e.g., 7.48% and 5.43% higher for BLEU-1 and ROUGE).
§ INTRODUCTION
Robot-Assisted Minimally Invasive Surgery (RAMIS) has become increasingly essential in recent decades given its several advantages, such as high stability, superhuman dexterity and intelligence <cit.> <cit.>. RAMIS can bring great benefits to patients with reduced recovery time and trauma after surgery <cit.>. Conventionally, surgeons need to generate a corresponding surgical report to record the surgical procedure performed by the surgical robots. It can provide a detailed reference for post-operative analysis of the surgical interventions <cit.>. However, this task is generally time-consuming and labor-intensive. In this regard, automatic surgical report generation is highly demanded to relieve surgeons of the burden of low-level documentation tasks, allowing them to pay more attention to post-operative analysis of patients <cit.>.
Surgical report generation can also be seen as image caption generation <cit.>, a composite task involving Computer Vision (CV) and Natural Language Processing (NLP) <cit.>.
The image captioning task transforms visual features extracted by Convolutional Neural Networks (CNNs) into high-level semantic information. It is a complicated problem since it includes the detection of objects in images, understanding the inter-relationships between the main objects, and finally expressing them in reasonable language. In the medical field, most research on diagnostic report generation has focused on medical images rather than surgical videos, such as radiology and pathology images <cit.> <cit.>. However, with the development of RAMIS, the generation of surgical reports has received more and more attention, and there are a few recent papers in this field <cit.> <cit.>. Compared with diagnostic report generation, surgical report generation not only needs to describe the surgical instruments that appear in the surgical scene but also needs to pay attention to the interaction between instruments and tissues. Therefore, it requires a deeper understanding of the relationships between objects.
Earlier methods for tackling image captioning in the medical domain utilize CNNs and long short-term memory (LSTM) networks to take advantage of high-level spatio-temporal feature extraction <cit.>. However, they suffer from limited representational abilities and generally encounter optimization difficulties. Recently, the Transformer <cit.> has achieved great success in caption generation tasks for natural images <cit.> <cit.>, given its discriminative representation capability with the self-attention mechanism. Considering this excellent performance, it is also adopted as the main captioning architecture in surgical report generation <cit.> <cit.>.
Most current works focus on the problem of domain adaptation <cit.> <cit.>, mainly for considering that there are new instruments and variations in surgical tissues appearing in robotic surgery. For example, Xu et al. <cit.> propose the gradient reversal adversarial learning scheme, the gradient multiplies with a negative constant and updates adversarially in backward propagation, discriminating between the source and target domains and emerging domain-invariant features. Eventually, these image features are converted into text representations via the transformer. Additionally, a paper <cit.> argues that mainstream captioning models still rely on object detectors or feature extractors to extract regional features. Therefore, they design an end-to-end detector and a feature extractor-free captioning model to simplify the process using the patch-based shifting window technique.
Although the current methods have achieved relatively good results, there are three points that can be improved. Firstly, various complex interactive relationships between instruments and tissues are important components for surgical report generation, while current methods have not explored the interactions between objects. Secondly, the current methods use a single frame of the surgical video as input to generate a report. However, considering that robotic surgery is a continuous process, temporal information is supposed to be reasonably utilized to facilitate task performance. Thirdly, most of them require additional bounding box information as input, while such annotations are expensive and inputting raw images is more practical.
Recently, graph neural networks (GNNs) have received increasing research interest because of their ability to learn non-Euclidean relations between entities <cit.> <cit.> <cit.>. Many underlying complex relationships among data in several areas of science and engineering, e.g., computer vision, molecular biology, and pattern recognition, can be represented in terms of graphs <cit.>. GNN is widely used in the above fields and has achieved good performance <cit.> <cit.>. These achievements motivate us to utilize graph learning to explore the interaction between different nodes in the robotic surgery scene graph.
To alleviate the above issues, this paper proposes a relational exploration (RE) module that allows the network to perform spatial reasoning based on features extracted from the nodes of the scene graph (as shown in Fig. 1). Besides, an interaction perception (IP) module is developed to exploit temporal information and combine it with scene graph information to learn the interaction status of the current video frame. It can generate global attention for the RE module to decide whether to model the relation between different nodes, and generate local attention maps to strengthen important nodes and suppress non-interactive nodes. Moreover, an object detector is applied to the raw image to replace the additional bounding-box input, which the experimental results also show to be feasible.
Main contributions of this study are summarized as follows:
* We devise a graph learning framework for boosting surgical report generation via interactive relation reasoning along temporal dimension.
* We propose a RE module that can learn interactive relationships between the tissue and instruments in the non-Euclidean domain to improve the accuracy of surgical report generation.
* To serve this task well with temporal information, we devise an IP module to utilize both temporal information and scene graph information to focus on important interactions and nodes.
* Experimental results on benchmark datasets show that our network clearly outperforms state-of-the-art surgical report generation methods. Even though our method does not take object bounding box as the input, our network still outperforms state-of-the-art methods, which utilizes the object bounding boxes as the input.
§ METHODS
§.§ Overview
Fig. <ref> shows the schematic illustration of our surgical report generation network.
Unlike existing surgical report generation methods taking a single image as the input, our method takes a surgical video as the input and then generates the surgical report for each video frame by exploring the interaction relations between tissues and surgical instruments.
Specifically, given a video frame I_t, we take two adjacent video frames I_t-1 and I_t-2, and employ YOLOv5 <cit.> as the object detector to detect objects from all three input video frames (i.e., I_t, I_t-1 and I_t-2).
Then, we devise a node tracking mechanism in our interaction perception (IP) module to further identify and append some missing nodes of the current video frame I_t by leveraging the object detection results of the adjacent video frames.
Moreover, we apply a feature extractor (i.e., ResNet18 <cit.> following previous work <cit.>) to extract features of each identified node and devise a RE module to leverage the graph learning for learning the interactive relation between tissues and surgical instruments.
More importantly, we devise an IP module to predict a global attention map to classify the interactive relation on the whole scene and predict a local attention map to identify the interactive relation on nodes to assist the graph learning at our RE module.
After that, we concatenate features of all graph nodes of the RE module and pass the concatenation result into the M2 transformer <cit.> for predicting the output surgical report of the current video frame I_t.
§.§ Relational Exploration Module
Recently, due to the capability of modeling
non-Euclidean relationships among entities, GNNs have achieved promising performances on diverse applications including image classification <cit.>, neural machine translation <cit.>, social relationship understanding <cit.>, and gesture recognition in robotic surgery <cit.>.
Motivated by this, we propose to model the interaction relation between tissues and surgical instruments via graph learning.
Fig. <ref> shows the schematic illustration of the proposed RE module.
The node embeddings of our graph come from the feature maps F extracted by ResNet18.
RE module will update the embeddings as
F'= σ (D^-1/2AD^-1/2 F W),
where A is the adjacency matrix of the undirected graph 𝒢 with added self-connections, D_ii = ∑_jA_ij , W is a layer-specific trainable weight matrix and σ ( ) denotes an activation function (i.e. ReLU).
By doing so, the representations of the interactions F' between different nodes can be obtained, which can effectively improve the accuracy of the generated report.
Preserving the inherent characteristics of each object is also of vital importance for this task. Because a node exchanges information with its connected nodes, the updated node embeddings are more inclined toward interactive representations, which may dilute the object's own information. This is especially true for some core components, e.g., the tissue node, which generally interacts with multiple objects and whose features would therefore be disturbed by those multiple nodes. In this regard, we devise the node reservation operation to simultaneously consider and model both the inherent object representations and the interaction information in the scene, which facilitates subsequent text generation.
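A minimal PyTorch sketch of the RE-module update in Eq. (1) is given below; the layer width, the number of stacked layers, and the use of concatenation for node reservation are assumptions, since they are not fully specified above.

```python
import torch
import torch.nn as nn

class REGraphLayer(nn.Module):
    """One graph-convolution step: F' = ReLU(D^-1/2 (A + I) D^-1/2 F W)."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.weight = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, feats, adj):
        # feats: (N, C) node embeddings; adj: (N, N) adjacency without self-loops
        a_hat = adj + torch.eye(adj.size(0), device=adj.device)  # add self-connections
        d_inv_sqrt = torch.diag(a_hat.sum(dim=1).pow(-0.5))      # D^-1/2
        norm_adj = d_inv_sqrt @ a_hat @ d_inv_sqrt               # symmetric normalisation
        return torch.relu(norm_adj @ self.weight(feats))

# Node reservation (assumed here as concatenation): keep the original embeddings
# alongside the interaction-aware ones before passing them to the caption generator.
# layer = REGraphLayer(512, 512)
# updated = torch.cat([feats, layer(feats, adj)], dim=-1)
```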
§.§ Interaction Perception module
Since the surgical instruments can be idle during a surgical video, it is possible that the input surgical video has one or more video frames without any interaction relation between tissues and surgical tools.
In this regard, the surgical report generation performance degrades if there is no interaction at the current video frame and we still utilize our RE module to model the node relation.
To alleviate this issue, we develop an IP module to explicitly classify whether the current video frame has an interaction relation between tissues and surgical instruments.
Node Tracking. To do so, our IP module first generates a complete scene graph for each video frame. However, since the graph of different frames of the surgical video may vary greatly, some key nodes may be missing in the scene graphs of some frames.
To alleviate this issue, we devise a node tracking mechanism to utilize temporal information to continuously track key nodes among input adjacent video frames.
As shown in Fig. 2, our IP module utilizes the object detection results of each video frame to construct a scene graph 𝒢={𝒱,ℰ,ℛ} with nodes v_i ∈𝒱, edges (v_i, r, v_j) ∈ℰ and a relation r ∈ℛ. Regarding robotic surgery, we believe that the surgical instruments that appear in different frames are constantly changing, but the surgical target needs to be continuously tracked.
As shown in Fig. 2, we track the kidney node, and the tracking length is set to three video frames. When a video frame has missing nodes, it cannot form a complete scene graph. Our node tracking mechanism adds the missing nodes according to the scene graphs of the previous adjacent frames. By doing so, we can obtain a complete scene graph for the video frame to assist the subsequent surgical report generation.
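A simple sketch of this tracking logic is shown below; the data structures and the choice of which labels count as key nodes are assumptions made for illustration only.

```python
from collections import deque

TRACK_LEN = 3                  # tracking length: three video frames
KEY_NODES = {"kidney"}         # the surgical target is tracked; instruments may change freely
recent_frames = deque(maxlen=TRACK_LEN)   # detected node labels of the most recent frames

def complete_nodes(current_detections):
    """Append key nodes missed in the current frame but seen in recent frames."""
    seen_recently = set().union(*recent_frames) if recent_frames else set()
    recent_frames.append(set(current_detections))
    missing_keys = (seen_recently & KEY_NODES) - set(current_detections)
    return list(current_detections) + sorted(missing_keys)

# Example: the kidney was missed by the detector in this frame but seen previously.
# nodes = complete_nodes(["bipolar_forceps", "prograsp_forceps"])
```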
Global attention maps.
The interactive relation reasoning is not required along the whole surgical sequence. At timesteps when the instruments are separated from the tissues (e.g., the preparation phase of the surgery), performing the interactive modeling via the graph inevitably introduces some interference. In this regard, we propose to invoke the RE module only after observing an actual interaction in the whole scene globally.
Specifically, once we obtain a complete scene graph of the current video frame I_t, we then obtain a feature map of each node of the scene graph by extracting deep features from the detected object corresponding to this scene graph node.
Then, we concatenate features of all nodes of the scene graph of I_t, and then pass the concatenated features F_con into a multi-layer perception block (see Fig. 2) to classify whether there is an interaction relation between tissues and surgical instruments in the whole scene of the video frame I_t.
Specifically, the multi-layer perception block applies three fully-connected layers to the concatenated features to obtain a global attention map 𝒜_global, which is a single scalar whose value can be 0 or 1:
𝒜_global = Φ_1(Φ_2(Φ_3(F_con))) ,
where Φ_1, Φ_2, and Φ_3 denote three fully-connected layers.
Apparently, our 𝒜_global represents whether there is an interaction between tissues and surgical instruments in the whole scene.
Local attention map.
Although there are interaction relations in a whole scene view, we find that not all graph nodes are involved in these interaction relations, and the surgical report tends to focus on these involved nodes and ignore these idle nodes, which are not involved in interactions.
In this regard, apart from predicting a global attention map for the whole scene, our IP module also predicts a local attention map to assign different weights to different graph nodes, thereby boosting the surgical report generation.
Specifically, we apply another three fully-connected layers to F_con to generate a local attention map 𝒜_local, which is a vector with N elements (N represents the number of nodes in the graph of the RE module):
𝒜_local = Φ_4(Φ_5(Φ_6(F_con))) ,
where Φ_4, Φ_5, and Φ_6 denote three fully-connected layers.
Apparently, our 𝒜_local represents whether each graph node is involved in an interaction.
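A compact PyTorch sketch of the two attention heads is given below; the hidden width, the activations between the fully-connected layers, and the sigmoid outputs are assumptions, as the text above only fixes the number of layers and the output dimensions.

```python
import torch
import torch.nn as nn

class IPAttentionHeads(nn.Module):
    """Global (scene-level) and local (node-level) attention heads of the IP module."""

    def __init__(self, n_nodes, feat_dim, hidden=256):
        super().__init__()
        in_dim = n_nodes * feat_dim  # concatenated node features F_con
        self.global_head = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                         nn.Linear(hidden, hidden), nn.ReLU(),
                                         nn.Linear(hidden, 1))
        self.local_head = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                        nn.Linear(hidden, hidden), nn.ReLU(),
                                        nn.Linear(hidden, n_nodes))

    def forward(self, f_con):
        # A_global: thresholded to {0, 1} to decide whether the RE module models relations.
        a_global = torch.sigmoid(self.global_head(f_con))
        # A_local: one weight per graph node, emphasising interactive nodes.
        a_local = torch.sigmoid(self.local_head(f_con))
        return a_global, a_local
```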
§.§ Implementation Details
All experiments were implemented on PyTorch and trained on an NVIDIA GeForce RTX 2080 Ti GPU with 11 GB memory. For object detection, BCEWithLogits loss and CIoU loss are empirically applied to compute the loss function.
As for the detected ROI areas, all image patches are resized to 224×224 before passing them into ResNet18.
For the training caption generation part, we adopt the CE loss and Adam optimizer <cit.> with a learning rate of 0.00006.
The learning rate is then decayed by an exponential function with a factor of 0.8 for every 10 epochs.
All models were trained with 80 epochs. The batch size is set to 50. Following previous works <cit.>, all words in each surgical report will be changed to be lowercase, and punctuation is also removed.
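For reference, a minimal PyTorch sketch of the stated optimisation settings is given below; the model is a stand-in placeholder, and the learning-rate decay is implemented with a step scheduler under the assumption that the factor of 0.8 is applied once every 10 epochs.

```python
import torch
import torch.nn as nn

model = nn.Linear(512, 512)      # stand-in for the full captioning network (illustration only)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=6e-5)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.8)

for epoch in range(80):
    # ... iterate over batches: loss = criterion(logits, targets); loss.backward(); optimizer.step()
    scheduler.step()             # decay the learning rate by 0.8 every 10 epochs
```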
§ EXPERIMENTS AND RESULTS
§.§ Dataset
We evaluate the effectiveness of our method on a widely-used benchmark dataset <cit.> from 2018 MICCAI Robotic Instrument Segmentation Endoscopy Vision Challenge.
This dataset contains 15 robotic nephrectomy procedures captured on the da Vinci X or Xi system and each video (15 videos in total) has 149 frames with a spatial resolution of 1280×1024.
The surgical reports contained a total of 11 interactive relationships, including manipulating, grasping, retracting, cutting, cauterizing, looping, suctioning, clipping, ultrasound sensing, stapling, and suturing. Besides, 9 objects appeared in the dataset, and they are the kidney and 8 instruments (monopolar curved scissors, bipolar forceps, prograsp forceps, clip applier, suction, ultrasound probe, stapler, and large needle driver). These interactive relationships and object information together form scene graph representations, which are important elements of natural language description.
Following the previous works <cit.>, we remove the 13th sequence due to its few interactions, and utilize 14 surgical videos for training and validation.
The 1st, 5th and 6th surgical videos are utilized for validation, and the remaining 11 videos are used for training all methods to ensure a fair comparison.
§.§ Evaluation Metrics
To quantitatively verify the effectiveness of the proposed methods, seven commonly-used metrics for image captioning are introduced. They are BLEU-1 <cit.>, BLEU-2 <cit.>, BLEU-3 <cit.>, BLEU-4 <cit.>, METEOR <cit.>, ROUGE <cit.>, and CIDEr <cit.>.
In general, a better surgical report generation method should have larger scores of all seven metrics.
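The exact evaluation toolkit is not specified here; as one possible (assumed) choice, BLEU-n can be computed with NLTK as sketched below, keeping in mind that tokenisation and smoothing settings may differ from those used in the compared papers.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = "bipolar forceps is manipulating tissue".split()   # ground-truth report (toy example)
candidate = "bipolar forceps is grasping tissue".split()        # generated report (toy example)

smooth = SmoothingFunction().method1
for n in range(1, 5):
    weights = tuple([1.0 / n] * n)          # uniform weights -> cumulative BLEU-n
    score = sentence_bleu([reference], candidate, weights=weights, smoothing_function=smooth)
    print(f"BLEU-{n}: {score:.4f}")
```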
§.§ Comparisons Against State-of-the-art Methods
Quantitative comparisons. We compare our method against state-of-the-art surgical report generation methods based on deep learning, which are Xu et al. <cit.>, V-SwinMLP-TranCAP <cit.>, and CIDA <cit.>.
Among the three compared methods, we can find that CIDA has the best performance on BLEU-1, BLEU-2, BLEU-3, BLEU-4, and CIDEr, with scores of 0.6246, 0.5624, 0.5117, 0.4720, and 2.8548, while Xu et al. have the best performance on METEOR (0.4567) and ROUGE (0.6495).
Compared to the best performing existing methods, our network obtains a BLEU-1 improvement of 11.97%, a BLEU-2 improvement of 12.94%, a BLEU-3 improvement of 13.48%, a BLEU-4 improvement of 12.96%, a ROUGE improvement of 8.36%, and a CIDEr improvement of 36.63%, respectively.
Specifically, our method has largest BLEU-1, BLEU-2, BLEU-3, BLEU-4, ROUGE, and CIDEr scores, and they are 0.6994, 0.6352, 0.5807, 0.5332, 0.7038, 3.9006.
Moreover, our method takes the second rank of METEOR score, and our METEOR score is 0.4100, which is slightly smaller than the best one (0.4567).
It indicates that our network can generate more accurate surgical reports than compared state-of-the-art methods.
Note that the metric CIDEr inherently captures the sentence similarity using the notions of grammaticality, saliency, importance and accuracy (precision and recall).
Hence the CIDEr score is highly consistent with the consensus of human assessments.
From the Table I, we can find that our method has gained a huge improvement on CIDEr (36.63%).
It indicates that the report generated by our method is closer to the annotated report provided by the doctors than that of other state-of-the-art models.
Visual comparisons. Fig. <ref> visually compares the generated surgical report of our method and CIDA <cit.>.
Apparently, our method can more accurately predict the interaction operations of the surgical report since our method explicitly learns the interactive relation via graph learning.
Taking the first image of Fig. <ref> as an example, CIDA tends to predict that the ultrasound probe is idle, and our method can correctly predict the interactive relation between the prograsp forceps and tissues.
Regarding the second image in Fig. <ref>, we can find that CIDA missed that the bipolar forceps are also idle, like the prograsp forceps and monopolar curved scissors. This is because the attention maps of the IP module enable our method to identify all instruments in the input surgical videos.
Regarding the 3rd image, CIDA wrongly predicts an interactive relation between bipolar forceps and kidney. By exploring the relationship between different nodes, the interactive instrument is correctly estimated in the generated report of our method.
§.§ Ablation Analysis
Effectiveness of our RE module and IP module. We further conduct ablation study experiments to validate the effectiveness of our RE module and our IP module.
To do so, we construct a baseline (denoted as “Basic”) by removing our RE module and our IP module from our network, and then add the RE module and the IP module into “Basic” to build another two networks, which are denoted as “Basic+RE” and “Basic+IP”.
As shown in Table <ref>, “Basic+RE” and “Basic+IP” has a better metric performance than “Basic” in terms of all seven metrics, which demonstrates that the RE module and the IP module can improve the surgical report generation performance of our method.
Moreover, by adding the RE module and the IP module together, our method can generate a more accurate surgical report due to our superior metric results over “Basic+RE” and “Basic+IP”.
Effectiveness of key components in IP module. As shown in Fig. <ref>, our IP module has a node tracking mechanism to add the possible missing nodes of the scene graph of the input video frame, a global attention map on the whole scene, and a local attention map on the graph nodes of the RE module.
To further evaluate the effectiveness of the node tracking mechanism, the global attention map, and the local attention map, we conduct another ablation study experiment.
Here we construct five baseline networks (see M1 to M5 of Table <ref>), which are obtained by only modifying the IP module of our network. This means that all these five baseline networks are built on “Basic+RE” (see Table <ref>).
Table <ref> reports the BLEU-1, BLEU-2, BLEU-3, BLEU-4, METEOR, ROUGE, and CIDEr scores of our method and five baseline networks.
Apparently, M2, M3, and M4 have larger scores on all seven metrics than M1, which means that each component of the node tracking mechanism, the global attention, and the local attention in our IP module enables our network to generate a more accurate surgical report.
Moreover, exploring both the node tracking mechanism and the global attention map together (i.e., M5) in our IP module yields a performance gain over using only the node tracking mechanism (i.e., M2), as evidenced by the superior performance of M5 over M2. This demonstrates that the global attention on the whole scene of the input video frame helps our method to generate a more accurate surgical report.
More importantly, our method has larger BLEU-1, BLEU-2, BLEU-3, BLEU-4, METEOR, ROUGE, and CIDEr scores than M5, which indicates that incorporating the local attention map on graph nodes in our IP module also boosts the surgical report generation performance of our network.
§.§ More analysis
Table <ref> compares the mAP scores of different object detectors (i.e., “SSD w LS”, YOLOv3, SSD, and YOLOv5s).
From the mAP results, we can find that YOLOv5s achieves the best mAP performance, even though it is lightweight and has a fast inference speed.
Hence, we empirically utilize YOLOv5s as the object detector of our network.
As can be seen from Fig. 3, the object detector can effectively predict the locations of the ROI areas. Taking the surgical frame in the first column as an example, three of the four objects can be accurately detected. Even though the bipolar forceps in the top left corner of the image are mistaken for prograsp forceps, this has no effect on the subsequent feature extraction of the ROI area. This is the reason it can effectively replace the extra bounding-box input.
Considering the baseline model in Table <ref>, i.e., the workflow without the RE module and the IP module, it can be noticed that the performance of the baseline and CIDA <cit.> is very close, which also proves that the proper application of the object detector can eliminate the dependence on bounding-box annotations.
§ CONCLUSION AND FUTURE WORK
This work presents a new surgical report generation method by exploring the interactive relation between tissues and instruments via graph learning.
Our key idea is to devise an RE module to model interactive relations via graph learning, and to devise an IP module that leverages temporal information to assist the graph learning in the RE module.
The IP module has a node tracking system that can identify and append missing nodes of the current video frame for assisting the graph network construction in the RE module.
Moreover, the IP module generates a global attention map to indicate the existence of the interactive relation on the whole scene of the current video frame, and a local attention map to perceive the interactive relation on each graph node of the RE module.
By doing so, the graph updating in the RE module will be more accurate, thereby enhancing the surgical report generation accuracy.
Experimental results on the 2018 MICCAI Endoscopic Vision Challenge Dataset show that our network clearly outperforms existing state-of-the-art surgical report generation methods.
In the future, we plan to consider incorporating multi-modality information, such as kinematics to facilitate report generation. In addition, we also plan to collect more data to extend our method to multiple interaction points of multiple human organs.
|
http://arxiv.org/abs/2306.10255v1
|
20230617044115
|
The First GECAM Observation Results on Terrestrial Gamma-ray Flashes and Terrestrial Electron Beams
|
[
"Y. Zhao",
"J. C. Liu",
"S. L. Xiong",
"W. C. Xue",
"Q. B. Yi",
"G. P. Lu",
"W. Xu",
"F. C. Lyu",
"J. C. Sun",
"W. X. Peng",
"C. Zheng",
"Y. Q. Zhang",
"C. Cai",
"S. Xiao",
"S. L. Xie",
"C. W. Wang",
"W. J. Tan",
"Z. H. An",
"G. Chen",
"Y. Q. Du",
"Y. Huang",
"M. Gao",
"K. Gong",
"D. Y. Guo",
"J. J. He",
"B. Li",
"G. Li",
"X. Q. Li",
"X. B. Li",
"J. Y. Liao",
"J. Liang",
"X. H. Liang",
"Y. Q. Liu",
"X. Ma",
"R. Qiao",
"L. M. Song",
"X. Y. Song",
"X. L. Sun",
"J. Wang",
"J. Z. Wang",
"P. Wang",
"X. Y. Wen",
"H. Wu",
"Y. B. Xu",
"S. Yang",
"B. X. Zhang",
"D. L. Zhang",
"F. Zhang",
"P. Zhang",
"H. M. Zhang",
"Z. Zhang",
"X. Y. Zhao",
"S. J. Zheng",
"K. K. Zhang",
"X. B. Han",
"H. Y. Wu",
"T. Hu",
"H. Geng",
"H. B. Zhang",
"F. J. Lu",
"S. N. Zhang",
"H. Yu"
] |
astro-ph.HE
|
[
"astro-ph.HE",
"astro-ph.EP",
"astro-ph.IM"
] |
Y. Zhao1,2, J. C. Liu2,3, S. L. Xiong2, W. C. Xue2,3, Q. B. Yi4,2, G. P. Lu5, W. Xu6, F. C. Lyu7, J. C. Sun2, W. X. Peng2, C. Zheng2,3, Y. Q. Zhang2,3, C. Cai8, S. Xiao9,10, S. L. Xie11, C. W. Wang2,3, W. J. Tan2,3, Z. H. An2, G. Chen2, Y. Q. Du12,2, Y. Huang2, M. Gao2, K. Gong2, D. Y. Guo2, J. J. He2, B. Li2, G. Li2, X. Q. Li2, X. B. Li2, J. Y. Liao2, J. Liang12,2, X. H. Liang2, Y. Q. Liu2, X. Ma2, R. Qiao2, L. M. Song2, X. Y. Song2, X. L. Sun2, J. Wang2, J. Z. Wang2, P. Wang2, X. Y. Wen2, H. Wu12,2, Y. B. Xu2, S. Yang2, B. X. Zhang2, D. L. Zhang2, F. Zhang2, P. Zhang13,2, H. M. Zhang2, Z. Zhang2, X. Y. Zhao2, S. J. Zheng2, K. K. Zhang14, X. B. Han14, H. Y. Wu15, T. Hu15, H. Geng15, H. B. Zhang16, F. J. Lu2, S. N. Zhang2, H. Yu1
1Department of Astronomy, Beijing Normal University, Beijing 100875, Beijing, China
2Key Laboratory of Particle Astrophysics, Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049, Beijing, China
3University of Chinese Academy of Sciences, Beijing 100049, Beijing, China
4School of Physics and Optoelectronics, Xiangtan University, Xiangtan 411105, Hunan, China
5School of Earth and Space Sciences, University of Science and Technology of China, Hefei 230026, Anhui, China
6Electronic Information School, Wuhan University, Wuhan 430072, Hubei, China
7Key Laboratory of Transportation Meteorology of China Meteorological Administration, Nanjing Joint Institute for Atmospheric Sciences, Nanjing 210000, Jiangsu China
8College of Physics and Hebei Key Laboratory of Photophysics Research and Application, Hebei Normal University, Shijiazhuang, Hebei 050024, China
9Guizhou Provincial Key Laboratory of Radio Astronomy and Data Processing, Guizhou Normal University, Guiyang 550001, GuiZhou, China
10School of Physics and Electronic Science, Guizhou Normal University, Guiyang 550001, GuiZhou, China
11Institute of Astrophysics, Central China Normal University, Wuhan 430079, HuBei, China
12School of Computing and Artificial Intelligence, Southwest Jiaotong University, Chengdu 611756, SiChuan, China
13College of Electronic and Information Engineering, Tongji University, Shanghai 201804, Shanghai, China
14Innovation Academy for Microsatellites of Chinese Academy of Sciences, Shanghai 201304, Shanghai, China
15National Space Science Center, Chinese Academy of Sciences, Beijing 100190, Beijing, China
16Key Laboratory of Middle Atmosphere and Global Environment Observation, Institute of Atmospheric Physics, Chinese Academy of Sciences, Beijing 100029, Beijing, China
S. L. [email protected]
* During a 9-month observation period, GECAM detected 147 bright TGFs, 2 typical TEBs and 2 special TEB-like events.
* With novel detector design, GECAM can effectively classify TGFs and TEBs, and reveal their fine temporal features.
* We obtained a very high TGF-lightning association rate (∼80%) between GECAM and GLD360 in the east Asia region.
Gravitational-wave high-energy Electromagnetic Counterpart All-sky Monitor (GECAM) is a space-borne instrument dedicated to monitoring high-energy transients, including Terrestrial Gamma-ray Flashes (TGFs) and Terrestrial Electron Beams (TEBs). We implemented a TGF/TEB search algorithm for GECAM, with which 147 bright TGFs, 2 typical TEBs and 2 special TEB-like events are identified during an effective observation time of ∼9 months. We show that, with gamma-ray and charged particle detectors, GECAM can effectively identify and distinguish TGFs and TEBs, and measure their temporal and spectral properties in detail. A very high TGF-lightning association rate of ∼80% is obtained between GECAM and GLD360 in the east Asia region.
§ PLAIN LANGUAGE SUMMARY
Terrestrial gamma-ray flashes (TGFs) and Terrestrial Electron Beams (TEBs) represent the most energetic radioactive phenomena in the atmosphere of the Earth. They reflect a natural particle accelerator that can boost electrons up to at least several tens of mega electron volts (MeV) and produce gamma-ray radiation. With novel detection technologies, GECAM is a new powerful instrument to observe TGFs and TEBs, as well as study their properties. For example, it is difficult for most space-borne high-energy instruments to distinguish between TGFs and TEBs. However, we show here that, with the joint observation of gamma-ray and charged particle detectors, GECAM can effectively identify TGFs and TEBs. GECAM can also reveal their fine features in the light curves and spectra.
§ INTRODUCTION
Terrestrial Gamma-ray Flashes (TGFs) are submillisecond intense bursts of γ-rays with energies up to several tens of MeV <cit.>, which was serendipitously discovered by CGRO/BATSE in 1991 <cit.>. Since then, TGFs have been routinely observed by space-borne instruments, such as BeppoSAX <cit.>, RHESSI <cit.>, AGILE <cit.>, Fermi/GBM <cit.> and ASIM <cit.> during last three decades. TGFs can also be observed by ground-based instruments <cit.>.
TGFs observed by these space-borne instruments are widely believed to be produced through the initial upward leader of positive Intracloud (+IC) lightning <cit.>. They are the results of relativistic electrons that produce hard X/γ-rays through the bremsstrahlung process. These electrons are accelerated in a high electric field by the runaway process <cit.> and multiplied by many orders of magnitude through the Relativistic Runaway Electron Avalanche process <cit.>. Two main models were proposed to explain the production of TGFs. One is the lightning leader model, which involves the acceleration of free electrons under the localized electric field in front of lightning leader tips <cit.>. The other one is the Relativistic Feedback Discharge (RFD) model <cit.>, which considers the feedback processes from positrons and photons in a large-scale electric field region. However, the specific mechanism to produce ∼10^17 to 10^19 electrons is still an open question <cit.>.
By interacting with atmosphere during propagation, TGF photons can produce secondary electrons and positrons. Then they will move along the Earth's magnetic field line, forming Terrestrial Electron Beams (TEBs) <cit.>, which could be observed by some TGF-detecting instruments <cit.>.
In this study, the data of Gravitational-wave high-energy Electromagnetic Counterpart All-sky Monitor (GECAM) <cit.> are utilized for TGFs and TEBs research. GECAM is a space-based instrument dedicated to the observation of gamma-ray electromagnetic counterparts of Gravitational Waves and Fast Radio Bursts, as well as other high-energy astrophysical and terrestrial transients, such as Gamma-ray Bursts (GRBs) <cit.>, Soft Gamma-ray Repeaters (SGRs), Solar Flares, TGFs and TEBs.
§ INSTRUMENT AND SEARCH ALGORITHM
Since launched in December 2020, GECAM has been operating in low earth orbit with 600 km altitude and 29^∘ inclination angle <cit.>. GECAM consists of twin micro-satellites (i.e. GECAM-A and GECAM-B) and each of them comprises 25 Gamma-ray Detectors (GRDs) <cit.> and 8 Charged Particle Detectors (CPDs) <cit.>. Each GRD has a geometric area of ∼45 cm^2 (round shape with diameter 7.6 cm) and an on-axis effective area of ∼21 cm^2 for 1 MeV gamma-rays <cit.>, while each CPD has a geometric area of 16 cm^2 (square shape with 4.0 cm×4.0 cm) and an on-axis effective area of ∼16 cm^2 for 1 MeV electron <cit.>. Considering different orientations of 25 GRDs and 8 CPDs for each GECAM satellite, total effective area of GRDs and CPDs depend on the incident angle. For the incident direction from GECAM's boresight, total effective area of 25 GRDs is ∼440 cm^2 for 1 MeV gamma-rays, while that of 8 CPDs is ∼20 cm^2 for 1 MeV electrons. Note that only GECAM-B data are utilized here because GECAM-A has not been able to observe yet <cit.>.
With LaBr_3 crystals read out by silicon photomultiplier (SiPM) arrays, GRDs can detect high-energy photons in a broad energy range of ∼15 keV to ∼5 MeV <cit.>. CPDs are designed to detect the charged particles (including electrons and positrons) from ∼100 keV to ∼5 MeV. The joint observation of GRDs and CPDs can distinguish between gamma-rays and charged particle bursts, e.g. TGFs and TEBs <cit.>.
For GRD, the dead time is 4 μs for normal events and >69 μs for overflow events (i.e. events with higher energy deposition than the maximum measurable energy). Dead time can lead to fewer observed counts, resulting in an underestimation of TGFs' duration and obscuring short TGFs. Each GRD has two read-out channels: high-gain channel (∼15 keV–∼300 keV) and low-gain channel (∼300 keV–∼5 MeV) <cit.>. The design, performance, and other information about GECAM have been reported by GEC_INS_Li2022, GEC_INS_An2022, GEC_INS_Xv2021.
The considerable number of GRDs is helpful for locating the source region of TGFs. We have proposed a dedicated localization method for all-sky monitors which can be used for extremely short-duration TGFs <cit.>. Despite the limited counting statistics of TGFs, GECAM is capable of roughly determining the location of TGF candidates, although the error is large <cit.>.
To detect extremely short and bright bursts, e.g., TGFs and TEBs, a dedicated anti-saturation data acquisition system (DAQ) was designed for GECAM. The data buffer in the DAQ can accommodate up to 4092 and 1020 counts for the high-gain channel and low-gain channel of each GRD, respectively. Since usually only several hundred counts are registered for bright TGFs, GECAM's DAQ can guarantee the transfer and storage of almost all TGF counts recorded by the GECAM detectors <cit.>.
As the main contamination source for TGFs, cosmic-ray events show patterns in the data very similar to TGFs, but with an even shorter duration. Thanks to GECAM's high time resolution, i.e., 100 ns <cit.>, GECAM can effectively distinguish between cosmic-ray events and TGFs. Indeed, a dedicated data product called Simultaneous Events is designed for GECAM. The Simultaneous Events Number (SimEvtNum) is defined as the number of events from different detectors registered in the same 300 ns time window <cit.>. As SimEvtNum increases, the probability that these events are caused by cosmic rays increases sharply. Thus, events marked with SimEvtNum≥13 are not utilized in the search, as they may be the result of cosmic rays.
To unveil TGFs and TEBs in GECAM data, we developed a dedicated burst search algorithm, which differs from the normal burst search for GRBs <cit.> because TGFs and TEBs are so weak that only a few counts are registered in each detector, and because both GRDs and CPDs are needed in the search. The event-by-event (EVT) data of the GECAM GRDs and CPDs are used in this study. Only recommended normal events with SimEvtNum<13 are utilized. We divide the 25 GRDs into four groups according to their neighboring positions, resulting in three groups with six GRDs and one with seven GRDs. All 8 CPDs are treated as a single group.
Assuming the background follows the Poisson distribution, the probability that the counts are from background fluctuation can be calculated as:
P_ group(S ≥ S^'|B) = 1 - ∑_S=0^S=S^'-1 B^S·exp(-B) / S! ,
where S and S^' are the observed counts and the threshold counts, respectively, for one group in a time window, and B is the estimated background for the time window, calculated from the average counts over T_ rela∈ [-5,-1] s and [+1,+5] s, where T_ rela is the time relative to the end time of the time window.
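As an illustration, the tail probability above can be evaluated directly; the following minimal Python sketch (not the flight software) assumes the background B has already been estimated from the side intervals, and the group size and background rate used in the example are placeholders.

from math import exp, factorial

def p_group(s_prime, b):
    """P(S >= S' | B): chance that at least s_prime counts in a bin are a
    pure background fluctuation, for an expected Poisson background b."""
    # For extremely small tails one would sum the upper-tail terms directly
    # instead of subtracting from 1, to avoid floating-point cancellation.
    return 1.0 - sum(b**s * exp(-b) / factorial(s) for s in range(s_prime))

# Example: one 6-GRD group with an assumed ~400 counts/s per detector,
# searched in a 100 us bin (values for illustration only).
bin_width = 100e-6                      # s
b = 6 * 400.0 * bin_width               # expected background counts in the bin
print(p_group(8, b))                    # chance probability of >= 8 counts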
For a given search bin, we calculate the joint probability of N_trig^' or more groups out of a total of M groups surpassing the trigger threshold for a single group. This joint probability (P_bin) can be given by:
P_bin(N_ trig≥ N_ trig^') = ∑_N_ trig = N_ trig^'^M\binom{M}{N_ trig}· ( P_ group )^ N_ trig· ( 1 - P_ group )^ M - N_ trig .
Here, seven time scales are utilized for searching. The widths of the time scales with the corresponding empirical thresholds P_bin are: 50 μs (5.0×10^-22), 100 μs (2.0×10^-21), 250 μs (1.3×10^-20), 500 μs (5.0×10^-20), 1 ms (2.0×10^-19), 2 ms (8.0×10^-19), 4 ms (3.2×10^-18). For instance, we required ≥2 GRD groups to have ≥8 counts each in a 100 μs time bin, which corresponds to a P-value of ∼7.3×10^-12 for one group with a background level of 400 counts/s for one GRD. Considering the joint probability (Equation 2), the P-value for a given search bin was calculated to be 2.0×10^-21. All time scales are used for the TGF search, while only the last four are used for the TEB search. These empirical criteria are relatively strict so that only intense TGFs or TEBs are identified.
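A corresponding sketch of the joint probability over the M detector groups, together with the empirical per-timescale thresholds quoted above, could look as follows (the threshold values are copied from the text; the function names are ours).

from math import comb

def p_bin(p_grp, m, n_min):
    """Joint probability that at least n_min of m groups each exceed their
    single-group threshold, given the per-group chance probability p_grp."""
    return sum(comb(m, n) * p_grp**n * (1.0 - p_grp)**(m - n)
               for n in range(n_min, m + 1))

# Empirical P_bin thresholds for the seven search timescales (seconds: threshold).
P_BIN_THRESHOLDS = {50e-6: 5.0e-22, 100e-6: 2.0e-21, 250e-6: 1.3e-20,
                    500e-6: 5.0e-20, 1e-3: 2.0e-19, 2e-3: 8.0e-19, 4e-3: 3.2e-18}

def bin_triggers(p_grp, width, m=4, n_min=2):
    """A search bin triggers when its joint probability is below the threshold."""
    return p_bin(p_grp, m, n_min) < P_BIN_THRESHOLDS[width]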
We can derive the trigger threshold for a group of GRDs, P_group,GRD, using P_bin by setting M=4 and N_trig,GRD^'=2:
P_bin(N_trig≥ 2) = 6 · P_group,GRD^2 - 8 · P_group,GRD^3 + 3 · P_group,GRD^4.
Similarly, we can derive the trigger threshold for TEBs with CPDs, P_group,CPD, using P_bin by setting M=1 and N_trig,CPD^'=1:
P_bin = P_group,CPD .
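In practice the per-group threshold can be recovered from a target P_bin by numerically inverting the quartic above; a simple bisection sketch (illustrative, exploiting that 6p^2-8p^3+3p^4 is monotonic on (0,1)) is:

def p_group_grd_threshold(p_bin_target, n_iter=200):
    """Invert P_bin = 6 p^2 - 8 p^3 + 3 p^4 for p by bisection on (0, 1)."""
    lo, hi = 0.0, 1.0
    for _ in range(n_iter):
        mid = 0.5 * (lo + hi)
        if 6 * mid**2 - 8 * mid**3 + 3 * mid**4 > p_bin_target:
            hi = mid
        else:
            lo = mid
    return hi

# Example: the 100 us timescale with P_bin = 2.0e-21 (value quoted above).
print(p_group_grd_threshold(2.0e-21))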
For candidates to be identified as TGFs/TEBs, all criteria below must be met:
1. The trigger threshold (Equations <ref> and <ref>) must be satisfied.
2. Candidates should not be SGRs. Note that millisecond-duration SGRs can show up in the millisecond time scales of the search but have a much softer spectrum than TGFs.
3. Candidates should not be caused by instrument effects, which are characterized by a significant excess (Poisson significance >6 σ) registered in only 2 to 3 GRDs while most (i.e., >21) GRDs show no obvious signal (Poisson significance <3 σ).
4. To filter out cosmic rays, the ratio of simultaneous events (R_sim,7[R_sim,7: the total number of simultaneous events registered in >7 GRDs, divided by the total number of events in the search bin.]) should be <20%. These criteria are summarized in the code sketch below.
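A compact way to express this screening step is a single boolean function; the argument names below are hypothetical placeholders for quantities that would be precomputed from the EVT data, not the actual pipeline variables.

def passes_selection(joint_p, p_bin_threshold, grd_significances, r_sim7,
                     is_sgr_candidate):
    """Apply the four TGF/TEB selection criteria to one search bin."""
    # 1. Trigger threshold on the joint probability.
    if joint_p >= p_bin_threshold:
        return False
    # 2. Exclude SGR candidates (millisecond duration, much softer spectrum).
    if is_sgr_candidate:
        return False
    # 3. Exclude instrument effects: strong excess (>6 sigma) in only 2-3 GRDs
    #    while most (>21) GRDs show no obvious signal (<3 sigma).
    n_strong = sum(s > 6.0 for s in grd_significances)
    n_quiet = sum(s < 3.0 for s in grd_significances)
    if 2 <= n_strong <= 3 and n_quiet > 21:
        return False
    # 4. Cosmic-ray veto: simultaneous-event ratio must stay below 20%.
    return r_sim7 < 0.20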
For the identification of TEBs, more criteria are needed, which will be described in Section 4. To further illustrate the capability of GECAM to identify cosmic rays, an example is presented in the Supporting Information.
§ GECAM TGFS
From December 10th, 2020 to August 31st, 2022, the effective observation time of GECAM-B is ∼274.5 days (∼9 months or ∼0.75 years). As shown in Figure <ref>, 147 TGFs are identified by our search algorithm, corresponding to a discovery rate of ∼200 TGFs/year or ∼0.54 TGFs/day. We note that this TGF sample contains only bright events, owing to the strict search threshold. Therefore, GECAM's TGF discovery rate would increase as we decrease the search threshold in the future.
The Global Lightning Dataset (GLD360) is utilized to match lightning to GECAM TGFs within a time window of ±5 ms (corrected for light propagation time) and within a distance of 800 km from the GECAM nadir. The GLD360 lightning-association ratio is 34/41≈ 80% in the east Asia region (EAR, 77^∘ E–138^∘ E, 13^∘ S–30^∘ N), which is ∼2.5 times the ratio obtained from the data of other space-borne instruments and the World Wide Lightning Location Network (WWLLN) (∼33%) <cit.>. The high lightning-association ratio may be attributed to two factors: (1) The detection efficiency of GLD360 is higher than that of other lightning location networks <cit.>. TGF_GBM_Mailyan2020 have also confirmed that using GLD360 lightning data significantly improves the association ratio between Fermi/GBM TGFs and sferics. (2) This GECAM sample only contains bright TGFs, and their associated lightning strokes may be brighter. As shown in Figure <ref>c, most of the time offsets (corrected for propagation time) between GECAM TGFs and their associated lightning are centered around ±2 ms. Distances between the GECAM nadirs and their associated lightning range from ∼50 to ∼800 km. These time offsets and distances are consistent with previous reports, although the chance probability is ∼2.7%, higher than in previous studies using the WWLLN dataset <cit.>, due to the high detection efficiency of GLD360. With 41 TGFs in the east Asia region, there would be ∼1.1 false associations. However, if we only consider associations within ±2 ms, the probability of a chance association is ∼1.1%, resulting in only ∼0.4 false associations among the 41 TGFs. Since 31 out of 34 lightning events fall within ±2 ms of the TGF times, we conclude that most of the associated lightning events are genuine matches.
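The time-and-distance matching described above can be sketched as follows; the ±5 ms window, 800 km radius and light-propagation correction are taken from the text, while the straight-line source-to-satellite geometry and spherical-Earth distance are simplifying assumptions of this illustration.

from math import radians, sin, cos, asin, sqrt

C_LIGHT_KM_S = 299792.458

def great_circle_km(lat1, lon1, lat2, lon2, r_earth=6371.0):
    """Haversine great-circle distance between two points, in km."""
    p1, p2 = radians(lat1), radians(lat2)
    dp, dl = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dp / 2)**2 + cos(p1) * cos(p2) * sin(dl / 2)**2
    return 2.0 * r_earth * asin(sqrt(a))

def is_associated(tgf_time, nadir_lat, nadir_lon, sat_alt_km,
                  stroke_time, stroke_lat, stroke_lon,
                  dt_window=5e-3, dr_window=800.0):
    """True if a lightning stroke matches a TGF within +-5 ms (propagation
    corrected) and lies within 800 km of the satellite nadir."""
    ground_dist = great_circle_km(nadir_lat, nadir_lon, stroke_lat, stroke_lon)
    if ground_dist > dr_window:
        return False
    path_km = sqrt(sat_alt_km**2 + ground_dist**2)   # crude source-to-satellite path
    dt = (tgf_time - path_km / C_LIGHT_KM_S) - stroke_time
    return abs(dt) <= dt_window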
The statistical distributions of the temporal, intensity, and energy properties of GECAM TGFs are shown in Figure <ref>. The duration is calculated by the Bayesian Block (BB) algorithm <cit.>. The distribution of GECAM TGF durations is centered around ∼200 μs (see Figure <ref>a). We note that the proportion of GECAM TGFs with extremely short durations (i.e., <40 μs) is smaller than that observed by ASIM <cit.>, which may be due to the strict search threshold, although durations measured by different instruments cannot be compared directly. Figure <ref>c demonstrates that TGFs with shorter durations typically exhibit a harder spectrum, which is consistent with previous observations <cit.>. The pulse pile-up effect of Fermi/GBM can reduce the observed counts and make the measured spectrum harder <cit.>. Similarly, GECAM's pulse pile-up effect also has such an impact. Therefore, this phenomenon might be partly due to the pile-up effect. There appear to be some TGFs with relatively soft spectra below the diagonal line. However, these TGFs are still consistent with the trend that short-duration TGFs have hard spectra. Since the sample is limited, we will investigate this phenomenon further as the sample size increases. As shown in Figure <ref>d, the duration and the CPD/GRD counts ratio are effective for classifying TGFs and TEBs (see Section 4).
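The Bayesian block duration can be reproduced with the standard implementation in astropy; the sketch below takes the photon arrival times of one candidate and uses the interval between the first and last change points as a rough duration proxy (the exact per-burst treatment in the paper may differ, and the false-alarm parameter p0 is a placeholder).

import numpy as np
from astropy.stats import bayesian_blocks

def bb_duration(event_times, p0=0.01):
    """Rough burst duration (s) from Bayesian block change points."""
    edges = bayesian_blocks(np.sort(event_times), fitness='events', p0=p0)
    if len(edges) < 4:           # fewer than two interior change points: no burst block
        return None
    return edges[-2] - edges[1]  # span between first and last change points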
In Figure <ref>, light curves and time-energy scatter plots are shown for three multi-peak, three bright, and two short TGFs. Note that the count clusters around ∼4000 keV are located at the GRDs' saturation peak. The recorded energies of these events are inaccurate. This is primarily due to saturation of the electronics, which leads to a signal cutoff at some stage. The pulse pile-up effect may also come into play. These effects related to the saturation peak are still under study.
It is worth noting an interesting double-peaked TGF (Figure <ref>a) that is characterized by two ∼100 μs pulses with very similar temporal and spectral structures. Two possible scenarios may explain this double-peaked TGF. For the first, it is accepted that the upward leader channel of a lightning discharge can branch during propagation <cit.>. We speculate that such branching may reflect a complicated electric field distribution, which may result in multiple or overlapping pulses in a TGF. It could also be responsible for the cases shown in Figure <ref>b and <ref>c. However, this double-peaked TGF (Figure <ref>a) may require more coincidences than the other TGFs in Figure <ref>b to <ref>c, i.e., two intracloud electric field regions with similar distributions along the paths of these upward leader channels. For the second, it could be associated with two successive steps of one propagating channel. We note that the time interval between the two pulses of this double-peaked TGF is generally consistent with the typical duration of a stepped leader's step, i.e., ∼0.1 ms <cit.>. Meanwhile, the typical length of leader steps during an intracloud lightning discharge is from several hundred meters to several kilometers <cit.>. Therefore, the second pulse of this TGF was likely generated after the initial leader (which resulted in the first pulse) propagated forward for one or several more steps.
The soft tail, which is caused by multiple Compton scattering of photons that makes some photons arrive slightly later, is an important feature of TGFs <cit.> (e.g., TGF_RSC_Xu2019). The energy band of the high-gain channel of the GRDs extends down to ∼15 keV, which is efficient for characterizing these tails (see Figure <ref>d to <ref>f).
Existing models have shown a general correlation between gamma-ray production and the distribution of the intense electric field <cit.>, although these models do not fully account for the intrinsic complexity of the electric field driving mechanisms. Whether the light curve structure of TGFs detected by GECAM is related to the specific distribution of the intense electric field merits further investigation with these models. Furthermore, some extremely short-duration (down to 20 μs) TGFs are found, as shown in Figure <ref>g to <ref>h.
§ GECAM TEBS AND TWO SPECIAL EVENTS
Here, we first present two high-confidence TEBs, as shown in Figure <ref>a, Figure <ref>d, and Figure <ref>a and <ref>b. The GECAM CPDs are mostly used to detect electrons and positrons in orbit, since they have a low detection efficiency for gamma-rays <cit.>. Although TEBs, like TGFs, can also produce many counts in the GRDs, their duration and CPD/GRD counts ratio are remarkably different from those of TGFs. To distinguish between TGFs and TEBs, we find a very effective criterion based on the duration and the CPD/GRD counts ratio (see Figure <ref>d). Figure <ref>d explicitly shows that TEBs and TGFs are separated into two groups according to duration and CPD/GRD counts ratio. Note that negative values of the CPD/GRD counts ratio mean that no significant excess counts are registered in the CPDs. The durations of TGFs (<1 ms) and TEBs (>2 ms) are also distinctly different.
In addition to these two high-confidence TEBs, GECAM-B also detected two special events (see Figure <ref>c and <ref>d). Based on the criteria presented in Figure <ref>d, they could be classified as "TEBs". However, their slow-rise light curves deviate from the characteristics of previously reported TEBs <cit.>, although the third and fourth pulses in Figure <ref>c seem to display the fast-rise light curve of typical TEBs.
In particular, the special event in Figure <ref>c consists of four pulses and was detected by GECAM-B over the Southwest Indian Ocean at 18:34:40.551997 UTC on September 11th, 2021. Following previous TEB studies, we trace the geomagnetic field line using the International Geomagnetic Reference Field (IGRF-13) model <cit.>, since TEB electrons and positrons travel along the Earth's magnetic field lines. There is no lightning activity around the GECAM-B nadir (51.2^∘ E, 28.9^∘ S, 587.8 km) and the southern magnetic footpoint (52.8^∘ E, 31.3^∘ S, 40 km, 37379 nT) within ±1 minute and a radius of 1200 km (see Figure <ref>e). GECAM-B is relatively close to the southern magnetic footpoint, with a magnetic-field-line path length of ∼600 km (see Figure <ref>f). However, there is a cluster of WWLLN lightning around the northern magnetic footpoint (44.1^∘ E, 45.5^∘ N, 40 km, 50129 nT) within ±10 seconds and a radius of 400 km. Therefore, we think that the electrons and positrons should originate from the vicinity of the northern footpoint. We note that the magnitude of the geomagnetic field given by the IGRF-13 model at the northern footpoint (at 40 km altitude) is higher than that at the southern footpoint; thus, there should be no return peak for this event.
The time intervals between neighboring pulses in the quad-peaked event are comparable, i.e., ∼169 ms, ∼175 ms, and ∼172 ms, respectively. We note that there are cases in the Fermi/GBM TGF sample where the time interval between two TGFs is of the order of hundreds of milliseconds <cit.>. It is possible that there are quadruple or more neighboring TGFs with similar time intervals. Indeed, while examining the lightning dataset at times other than this special TEB-like event, we find some lightning processes consisting of four lightning strokes with waiting times of ∼160 ms to ∼180 ms. These lightning strokes either originate from the same location (within the location error) or from within a small region of ∼30 km. We speculate that the quad-peaked event may be produced by such a lightning process around the northern footpoint. If this TEB-like event originates from four TGFs, they should have some connection, e.g., periodic TGFs <cit.>, and the distance between these four TGFs should not be very large; otherwise, they would not be detected as a single TEB-like event by GECAM-B. Besides, the production and propagation mechanisms of this TEB require more investigation to explain the atypical light curve.
It is also possible that it represents a new, unidentified class of event. Therefore, based on our current knowledge, we classify these two events as special TEB-like events. Detailed analysis will be reported in a forthcoming work.
§ CONCLUSION
With novel designs of its detectors and electronics, GECAM is a powerful new instrument to detect and identify TGFs and TEBs, as well as to study their temporal and spectral properties. Thanks to its high time resolution (100 ns), broad detection energy range (∼15 keV to ∼5 MeV) and anti-data-saturation design, GECAM can record very bright TGFs and TEBs and reveal the fine structures in their light curves and spectra, which can help us better understand the production mechanisms of TGFs and TEBs.
In this paper, a dedicated TGF and TEB search algorithm has been implemented for GECAM, which results in 147 bright TGFs, 2 typical TEBs and 2 special TEB-like events in ∼9 months of data. The TGF detection rate for GECAM-B is ∼200 TGFs/year, which will increase if we loosen the search threshold. A very high TGF-lightning association ratio of ∼80% is obtained between GECAM and GLD360 in the east Asia region. Some interesting TGFs are found, such as a double-peaked TGF whose two pulses have very similar temporal and spectral distributions.
For most gamma-ray space telescopes, disentangling TEBs usually relies on the 511 keV line feature in the spectrum or the return peak in the light curve. With the joint observation of GRDs and CPDs, GECAM can distinguish between TGFs and TEBs according to the duration distribution and the CPD/GRD counts ratio.
Interestingly, GECAM discovered two special TEB-like events, one of which has four peaks that probably originated from a special lightning discharge process. The nature of these TEB-like events remains to be revealed, which requires a dedicated in-depth study. This kind of event may shed new light on the TGF and TEB mechanisms.
§ OPEN RESEARCH
All data that are used to produce the figures in this paper have been uploaded to Zenodo with DOI: 10.5281/zenodo.8028217 (<https://zenodo.org/record/8028217>) <cit.>, available under Creative Commons Attribution 4.0 International License. The World Wide Lightning Location Network (WWLLN) and GLD360 data used in this paper are also available from the Zenodo repository. The authors wish to thank the WWLLN, a collaboration among over 50 universities and institutions, for providing the lightning location data used in these datasets and in the paper. Additional WWLLN data are available at a nominal cost from <http://wwlln.net>. Researchers may contact Vaisala at <https://www.vaisala.com/en/lp/contact-us-lightningsolutions> to arrange research use of additional GLD360 data <cit.>.
The GECAM (HuaiRou-1) mission is supported by Strategic Priority Research Program on Space Science of Chinese Academy of Sciences, China. We thank the support from National Key R&D Program of China (2021YFA0718500), Strategic Priority Research Program on Space Science, Chinese Academy of Sciences (Grant No. XDA15360102, XDA15360300, XDA15052700), National Natural Science Foundation of China (Grant No. 12273042, 12173038, 42274205, U1938115, U2038106), National HEP Data Center (Grant No. E029S2S1) and the open fund of Hubei Luojia Laboratory (Grant No. 220100051). We thank Xi Long (Harvard University) for helpful discussions. GLD360 data used in this paper belong to Vaisala Inc who supports the ASIM project. The authors wish to thank the World Wide Lightning Location Network (<http://wwlln.net>) as a collaboration of more than 50 universities.
§ REFERENCES
[TEB_IGRF13_Alken2021] Alken, P., Thébault, E., Beggan, C. D., Amit, H., Aubert, J., Baerenzung, J., et al. (2021). International geomagnetic reference field: the thirteenth generation. Earth, Planets and Space, 73(1), 1–25.
[GEC_INS_An2022] An, Z. H., Sun, X. L., Zhang, D. L., Yang, S., Li, X. Q., Wen, X. Y., ... Zhou, X. (2022). The design and performance of GRD onboard the GECAM satellite. Radiation Detection Technology and Methods, 6(1), 43–52. doi:10.1007/s41605-021-00289-y
[TGF_GND_Belz2020] Belz, J. W., Krehbiel, P. R., Remington, J., Stanley, M. A., Abbasi, R. U., LeVon, R., ... Zundel, Z. (2020). Observations of the Origin of Downward Terrestrial Gamma-Ray Flashes. Journal of Geophysical Research: Atmospheres, 125(23), e31940. doi:10.1029/2019JD031940
[INS_EFF_Bhat2014] Bhat, P. N., Fishman, G. J., Briggs, M. S., Connaughton, V., Meegan, C. A., Paciesas, W. S., ... Xiong, S. (2014). Fermi gamma-ray burst monitor detector performance at very high counting rates. Experimental Astronomy, 38(1-2), 331–357. doi:10.1007/s10686-014-9424-z
[TGF_GBM_Briggs2010] Briggs, M. S., Fishman, G., Connaughton, V., Bhat, P., Paciesas, W., Preece, R., et al. (2010). First results on terrestrial gamma ray flashes from the Fermi Gamma-ray Burst Monitor. Journal of Geophysical Research: Space Physics, 115(A7).
[TGF_GBM_Briggs2013] Briggs, M. S., Xiong, S., Connaughton, V., Tierney, D., Fitzpatrick, G., Foley, S., et al. (2013). Terrestrial gamma-ray flashes in the Fermi era: Improved observations and analysis methods. Journal of Geophysical Research: Space Physics, 118(6), 3805–3830.
[HXM_SEA_Cai2021] Cai, C., Xiong, S. L., Li, C. K., Liu, C. Z., Zhang, S. N., Li, X. B., ... Zhou, D. K. (2021). Search for gamma-ray bursts and gravitational wave electromagnetic counterparts with High Energy X-ray Telescope of Insight-HXMT. Monthly Notices of the Royal Astronomical Society, 508(3), 3910–3920. doi:10.1093/mnras/stab2760
[HXM_SEA_Cai2023] Cai, C., Xiong, S. L., Xue, W. C., Zhao, Y., Xiao, S., Yi, Q. B., ... Zhang, F. (2022). Burst search method based on likelihood ratio in Poisson statistics. Monthly Notices of the Royal Astronomical Society, 518(2), 2005–2014. doi:10.1093/mnras/stac3075
[TGF_TEO_Celestin2011] Celestin, S., & Pasko, V. P. (2011). Energy and fluxes of thermal runaway electrons produced by exponential growth of streamers during the stepping of lightning leaders and in transient luminous events. Journal of Geophysical Research: Space Physics, 116(A3).
[TGF_LIG_Celestin2013] Celestin, S., Xu, W., & Pasko, V. (2013). Spectra of X-ray and gamma-ray bursts produced by stepping lightning leaders. EGU General Assembly Conference Abstracts (p. 13065).
[TGF_TEO_Chanrion2010] Chanrion, O., & Neubert, T. (2010). Production of runaway electrons by negative streamer discharges. Journal of Geophysical Research: Space Physics, 115(A6).
[TGF_TEO_Dwyer2003] Dwyer, J. (2003). A fundamental limit on electric fields in air. Geophysical Research Letters, 30(20).
[TGF_TEO_Dwyer2008] Dwyer, J. (2008). Source mechanisms of terrestrial gamma-ray flashes. Journal of Geophysical Research: Atmospheres, 113(D10).
[TGF_TEO_Dwyer2012] Dwyer, J. (2012). The relativistic feedback discharge model of terrestrial gamma ray flashes. Journal of Geophysical Research: Space Physics, 117(A2).
[TGF_LIG_Dwyer2010] Dwyer, J. R. (2010). Diffusion of relativistic runaway electrons and implications for lightning initiation. Journal of Geophysical Research: Space Physics, 115(A3).
[TEB_BAT_Dwyer2008] Dwyer, J. R., Grefenstette, B. W., & Smith, D. M. (2008). High-energy electron beams launched into space by thunderstorms. Geophysical Research Letters, 35(2).
[TGF_GND_Dwyer2012] Dwyer, J. R., Schaal, M. M., Cramer, E., Arabshahi, S., Liu, N., Rassoul, H., & Uman, M. A. (2012). Observation of a gamma-ray flash at ground level in association with a cloud-to-ground lightning return stroke. Journal of Geophysical Research: Space Physics, 117(A10).
[TGF_RSC_Dwyer2005] Dwyer, J. R., & Smith, D. M. (2005). A comparison between Monte Carlo simulations of runaway breakdown and terrestrial gamma-ray flash observations. Geophysical Research Letters, 32(22).
[TGF_BAT_Fishman1994] Fishman, G. J., Bhat, P. N., Mallozzi, R., Horack, J. M., Koshut, T., Kouveliotou, C., ... Christian, H. J. (1994). Discovery of Intense Gamma-Ray Flashes of Atmospheric Origin. Science, 264(5163), 1313–1316.
[TGF_RHE_Grefenstette2009] Grefenstette, B. W., Smith, D. M., Hazelton, B., & Lopez, L. (2009). First RHESSI terrestrial gamma ray flash catalog. Journal of Geophysical Research: Space Physics, 114(A2).
[GEC_SIM_Guo2020] Guo, D., Peng, W., Zhu, Y., Li, G., Liao, J., Xiong, S., et al. (2020). Energy response and in-flight background simulation for GECAM. SCIENTIA SINICA Physica, Mechanica & Astronomica, 50(12), 129509.
[TGF_TEO_Gurevich1992] Gurevich, A., Milikh, G., & Roussel-Dupre, R. (1992). Runaway electron mechanism of air breakdown and preconditioning during a thunderstorm. Physics Letters A, 165(5-6), 463–468.
[GEC_INS_Han2020] Han, X., Zhang, K., Huang, J., Yu, J., Xiong, S., Chen, Y., ... Geng, H. (2020). GECAM satellite system design and technological characteristic. SCIENTIA SINICA Physica, Mechanica & Astronomica, 50(12), 129507. doi:10.1360/SSPMA-2020-0120
[TGF_ASM_Kochkin2019] Kochkin, P., Østgaard, N., Neubert, T., Victor, R., Ullaland, K., Yang, S., ... Eyles, C. J. (2019). On multi-pulse TGFs observed by ASIM payload from the International Space Station. AGU Fall Meeting Abstracts (AE43A-08).
[GRB_REV_Zhang2015] Kumar, P., & Zhang, B. (2015). The physics of gamma-ray bursts & relativistic jets. Physics Reports, 561, 1–109. doi:10.1016/j.physrep.2014.09.008
[GEC_INS_Li2022] Li, X. Q., Wen, X. Y., An, Z. H., Cai, C., Chang, Z., Chen, G., et al. (2022). The technology for detection of gamma-ray burst with GECAM satellite. Radiation Detection Technology and Methods. doi:10.1007/s41605-021-00288-z
[TGF_AGL_Lindanger2020] Lindanger, A., Marisaldi, M., Maiorana, C., Sarria, D., Albrechtsen, K., Østgaard, N., et al. (2020). The 3rd AGILE terrestrial gamma ray flash catalog. Part I: Association to lightning sferics. Journal of Geophysical Research: Atmospheres, 125(11), e31985.
[TGF_TEO_Liu2013] Liu, N., & Dwyer, J. R. (2013). Modeling terrestrial gamma ray flashes produced by relativistic feedback discharges. Journal of Geophysical Research: Space Physics, 118(5), 2359–2376.
[TGF_TEO_Liu2022] Liu, N. Y., Scholten, O., Hare, B. M., Dwyer, J. R., Sterpka, C. F., Kolmašová, I., & Santolík, O. (2022). LOFAR Observations of Lightning Initial Breakdown Pulses. Geophysical Research Letters, 49(6), e98073. doi:10.1029/2022GL098073
[GEC_INS_Liu2021] Liu, Y. Q., Gong, K., Li, X. Q., Wen, X. Y., An, Z. H., Cai, C., et al. (2021). The SiPM Array Data Acquisition Algorithm Applied to the GECAM Satellite Payload. arXiv preprint arXiv:2112.04786.
[TGF_VLF_Lu2010] Lu, G., Blakeslee, R. J., Li, J., Smith, D. M., Shao, X. M., McCaul, E. W., ... Cummer, S. A. (2010). Lightning mapping observation of a terrestrial gamma-ray flash. Geophysical Research Letters, 37(11).
[TGF_VLF_Lu2011] Lu, G., Cummer, S. A., Li, J., Han, F., Smith, D. M., & Grefenstette, B. W. (2011). Characteristics of broadband lightning emissions associated with terrestrial gamma ray flashes. Journal of Geophysical Research: Space Physics, 116(A3).
[TGF_LED_Lyu2016] Lyu, F., Cummer, S. A., Lu, G., Zhou, X., & Weinert, J. (2016). Imaging lightning intracloud initial stepped leaders by low-frequency interferometric lightning mapping array. Geophysical Research Letters, 43(10), 5516–5523.
[TGF_GBM_Mailyan2020] Mailyan, B. G., Nag, A., Dwyer, J. R., Said, R. K., Briggs, M. S., Roberts, O. J., ... Rassoul, H. K. (2020). Gamma-Ray and Radio-Frequency Radiation from Thunderstorms Observed from Space and Ground. Scientific Reports, 10, 7286. doi:10.1038/s41598-020-63437-2
[TGF_AGL_Maiorana2020] Maiorana, C., Marisaldi, M., Lindanger, A., Østgaard, N., Ursi, A., Sarria, D., et al. (2020). The 3rd AGILE terrestrial gamma-ray flashes catalog. Part II: Optimized selection criteria and characteristics of the new sample. Journal of Geophysical Research: Atmospheres, 125(11), e31986.
[TGF_AGL_Marisaldi2010] Marisaldi, M., Fuschino, F., Labanti, C., Galli, M., Longo, F., Del Monte, E., et al. (2010). Detection of terrestrial gamma ray flashes up to 40 MeV by the AGILE satellite. Journal of Geophysical Research: Space Physics, 115(A3).
[TGF_AGL_Marisaldi2019] Marisaldi, M., Galli, M., Labanti, C., Østgaard, N., Sarria, D., Cummer, S., et al. (2019). On the high-energy spectral component and fine time structure of terrestrial gamma ray flashes. Journal of Geophysical Research: Atmospheres, 124(14), 7484–7497.
[TGF_TEO_Moss2006] Moss, G. D., Pasko, V. P., Liu, N., & Veronis, G. (2006). Monte Carlo model for analysis of thermal runaway electrons in streamer tips in transient luminous events and streamer zones of lightning leaders. Journal of Geophysical Research: Space Physics, 111(A2).
[TGF_ASM_Ostgaard2019] Østgaard, N., Neubert, T., Reglero, V., Ullaland, K., Yang, S., Genov, G., et al. (2019). First 10 months of TGF observations by ASIM. Journal of Geophysical Research: Atmospheres, 124(24), 14024–14036.
[TGF_GLD_Poelman2013] Poelman, D. R., Schulz, W., & Vergeiner, C. (2013). Performance characteristics of distinct lightning detection networks covering Belgium. Journal of Atmospheric and Oceanic Technology, 30(5), 942–951.
[TGF_GLD_Pohjola2013] Pohjola, H., & Mäkelä, A. (2013). The comparison of GLD360 and EUCLID lightning location systems in Europe. Atmospheric Research, 123, 117–128.
[TGF_GBM_Roberts2018] Roberts, O. J., Fitzpatrick, G., Stanbro, M., McBreen, S., Briggs, M. S., Holzworth, R. H., ... Mailyan, B. G. (2018). The First Fermi-GBM Terrestrial Gamma Ray Flash Catalog. Journal of Geophysical Research: Space Physics, 123(5), 4381–4401. doi:10.1029/2017JA024837
[TGF_GLD_Said2013] Said, R., Cohen, M., & Inan, U. (2013). Highly intense lightning over the oceans: Estimated peak currents from global GLD360 observations. Journal of Geophysical Research: Atmospheres, 118(13), 6905–6915.
[TEB_ASM_Sarria2019] Sarria, D., Kochkin, P., Østgaard, N., Lehtinen, N., Mezentsev, A., Marisaldi, M., et al. (2019). The first terrestrial electron beam observed by the Atmosphere-Space Interactions Monitor. Journal of Geophysical Research: Space Physics, 124(12), 10497–10511.
[STAT_BayesianBlock] Scargle, J. D., Norris, J. P., Jackson, B., & Chiang, J. (2013). Studies in Astronomical Time Series Analysis. VI. Bayesian Block Representations. The Astrophysical Journal, 764(2), 167. doi:10.1088/0004-637X/764/2/167
[TGF_TEO_Skeltved2017] Skeltved, A. B., Østgaard, N., Mezentsev, A., Lehtinen, N., & Carlson, B. (2017). Constraints to do realistic modeling of the electric field ahead of the tip of a lightning leader. Journal of Geophysical Research: Atmospheres, 122(15), 8120–8134.
[TGF_LIG_Stolzenburg2016] Stolzenburg, M., Marshall, T. C., Karunarathne, S., & Orville, R. E. (2016). Luminosity with intracloud-type lightning initial breakdown pulses and terrestrial gamma-ray flash candidates. Journal of Geophysical Research: Atmospheres, 121(18).
[TGF_SAX_Ursi2017] Ursi, A., Guidorzi, C., Marisaldi, M., Sarria, D., & Frontera, F. (2017). Terrestrial gamma-ray flashes in the BeppoSAX data archive. Journal of Atmospheric and Solar-Terrestrial Physics, 156, 50–56.
[YZ_TGF_GEC_SupDat_02] Vaisala. (2022). Global lightning detection network GLD360 data [Dataset]. Vaisala. Retrieved from <https://www.vaisala.com/en/products/systems/lightning/gld360>
[TGF_GND_Wada2019] Wada, Y., Enoto, T., Nakamura, Y., Furuta, Y., Yuasa, T., Nakazawa, K., ... Tsuchiya, H. (2019). Gamma-ray glow preceding downward terrestrial gamma-ray flash. Communications Physics, 2, 67. doi:10.1038/s42005-019-0168-y
[TGF_TEO_Wilson1925] Wilson, C. T. (1925). The acceleration of β-particles in strong electric fields such as those of thunderclouds (Vol. 22).
[TGF_TEO_Wu2015] Wu, T., Yoshida, S., Akiyama, Y., Stock, M., Ushio, T., & Kawasaki, Z. (2015). Preliminary breakdown of intracloud lightning: Initiation altitude, propagation speed, pulse train characteristics, and step length estimation. Journal of Geophysical Research: Atmospheres, 120(18), 9071–9086. doi:10.1002/2015JD023546
[GEC_CAL_Xiao2022] Xiao, S., Liu, Y., Peng, W., An, Z., Xiong, S., Tuo, Y., et al. (2022). On-ground and on-orbit time calibrations of GECAM. Monthly Notices of the Royal Astronomical Society, 511(1), 964–971.
[TEB_GBM_Xiong2012] Xiong, S., Briggs, M., Connaughton, V., Fishman, G., Tierney, D., Fitzpatrick, G., ... Hutchins, M. (2012). Location prediction of electron TGFs. Journal of Geophysical Research: Space Physics, 117(A2).
[TGF_RSC_Xu2012] Xu, W., Celestin, S., & Pasko, V. P. (2012). Source altitudes of terrestrial gamma-ray flashes produced by lightning leaders. Geophysical Research Letters, 39(8).
[TGF_RSC_Xu2015] Xu, W., Celestin, S., & Pasko, V. P. (2015). Optical emissions associated with terrestrial gamma ray flashes. Journal of Geophysical Research: Space Physics, 120(2), 1355–1370.
[TGF_RSC_Xu2019] Xu, W., Celestin, S., Pasko, V. P., & Marshall, R. A. (2019). Compton Scattering Effects on the Spectral and Temporal Properties of Terrestrial Gamma-Ray Flashes. Journal of Geophysical Research: Space Physics, 124(8), 7220–7230. doi:10.1029/2019JA026941
[GEC_INS_Xv2021] Xu, Y. B., Li, X. Q., Sun, X. L., Yang, S., Wang, H., Peng, W. X., ... Zhou, X. (2022). The design and performance of charged particle detector onboard the GECAM mission. Radiation Detection Technology and Methods, 6(1), 53–62. doi:10.1007/s41605-021-00298-x
[GEC_INS_Zhang2022a] Zhang, D., Li, X., Wen, X., Xiong, S., An, Z., Xu, Y., et al. (2022). Dedicated SiPM array for GRD of GECAM. Radiation Detection Technology and Methods, 6(1), 63–69.
[GEC_SFW_Yun2021] Zhao, X. Y., Xiong, S. L., Wen, X. Y., Li, X. Q., Cai, C., Xiao, S., ... Luo, Q. (2021). The In-Flight Realtime Trigger and Localization Software of GECAM. arXiv e-prints, arXiv:2112.05101.
[YZ_TGF_GEC_SupDat_03] Zhao, Y. (2023). Supporting Information for "The First GECAM Observation Results on Terrestrial Gamma-ray Flashes and Terrestrial Electron Beams" [Dataset]. Retrieved from <https://doi.org/10.5281/zenodo.8028217>
[YZ_LOC_MTD] Zhao, Y., Xue, W. C., Xiong, S. L., Luo, Q., Zhang, Y. Q., Yu, H., & Zhang, S. N. (2023a). On the Localization Methods of High Energy Transients for All-Sky Gamma-Ray Monitors. arXiv e-prints, arXiv:2209.13088.
[YZ_LOC_GEC] Zhao, Y., Xue, W. C., Xiong, S. L., Wang, Y. H., Liu, J. C., Luo, Q., ... Yu, H. (2023b). GECAM Localization of High-energy Transients and the Systematic Error. The Astrophysical Journal Supplement Series, 265(1), 17. doi:10.3847/1538-4365/acafeb
|
http://arxiv.org/abs/2306.11630v1
|
20230620160037
|
Giant effective magnetic moments of chiral phonons from orbit-lattice coupling
|
[
"Swati Chaudhary",
"Dominik M. Juraschek",
"Martin Rodriguez-Vega",
"Gregory A. Fiete"
] |
cond-mat.mes-hall
|
[
"cond-mat.mes-hall"
] |
[email protected]
Department of Physics, The University of Texas at Austin, Austin, Texas 78712, USA
Department of Physics, Northeastern University, Boston, Massachusetts 02115, USA
Department of Physics, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
School of Physics and Astronomy, Tel Aviv University, Tel Aviv 6997801, Israel
Department of Physics, The University of Texas at Austin, Austin, Texas 78712, USA
Department of Physics, Northeastern University, Boston, Massachusetts 02115, USA
Department of Physics, Northeastern University, Boston, Massachusetts 02115, USA
Department of Physics, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
Circularly polarized lattice vibrations carry angular momentum and lead to magnetic responses in applied magnetic fields or when resonantly driven with ultrashort laser pulses. Recent measurements have found responses that are orders of magnitude larger than those calculated in prior theoretical studies.
Here, we present a microscopic model for the effective magnetic moments of chiral phonons in magnetic materials that is able to reproduce the experimentally measured magnitudes and that allows us to make quantitative predictions for materials with giant magnetic responses using microscopic parameters. Our model is based on orbit-lattice couplings that hybridize optical phonons with orbital electronic transitions.
We apply our model to two types of materials: 4f rare-earth halide paramagnets and 3d transition-metal oxide magnets. In both cases, we find that chiral phonons can carry giant effective magnetic moments of the order of a Bohr magneton, orders of magnitude larger than previous predictions.
Giant effective magnetic moments of chiral phonons from orbit-lattice coupling
Gregory A. Fiete
July 31, 2023
==============================================================================
§ INTRODUCTION
In circularly polarized lattice vibrations (chiral phonons), the atoms move on closed orbits and can therefore carry angular momentum <cit.>. In an ionic lattice, the orbital motions of the ions create atomistic circular currents and therefore produce a collective phonon magnetic moment <cit.>. These phonon magnetic moments lead to different magnetic responses when an external magnetic field is applied or when they are resonantly excited with an ultrashort terahertz pulse. In an applied static magnetic field, the frequencies between right- and left-handed circular polarizations split up in a phonon Zeeman effect <cit.> and phonons with opposite chirality are deflected in different directions when propagating in a phonon Hall effect <cit.>. In turn, when the chiral phonons are infrared-active, they can be resonantly excited with an ultrashort terahertz pulse, which generates a macroscopic phonon magnetic moment and therefore effective magnetic field in a phonon inverse Faraday or phonon Barnett effect <cit.>.
The phonon magnetic moment produced by an ionic charge current scales with the gyromagnetic ratio of the ions, γ=Z^*/(2M), which depends on the effective charge, Z^*, and the ionic mass, M. Previous studies based on density functional theory have computed phonon magnetic moments in various nonmagnetic materials to be at most of the order of a nuclear magneton <cit.>. Intriguingly, a number of early and recent experiments have measured magnitudes of phonon Zeeman effects in paramagnets <cit.> and in materials with non-trivial quantum geometry in electronic bands <cit.> that suggest the presence of phonon magnetic moments ranging from fractions to a few Bohr magnetons, orders of magnitude larger than the nuclear magneton. Furthermore, very recent pump-probe experiments have shown that coherently driven chiral phonons produce effective magnetizations in nonmagnetic materials <cit.> and paramagnets <cit.> that are compatible with phonon magnetic moments on the Bohr magneton scale. These findings indicate an effective contribution to the phonon magnetic moment arising from electron-phonon or spin-phonon coupling.
Theories based on electron-phonon coupling have so far involved two types of explanations for materials with non-trivial electronic band topology: First, an adiabatic evolution of the electronic states alongside the circularly polarized phonon modes that induces an adiabatic electronic orbital magnetization <cit.>, and second, a coupling of the cyclotron motion of electrons close to a Dirac point to the chiral phonon mode <cit.>. Theories based on spin-phonon coupling have focused both on nonmagnetic and paramagnetic materials: In paramagnets, phonon modes can couple to the electron spin through modifications of the crystal electric field (CEF) and subsequently through the spin-orbit coupling <cit.>. For nonmagnetic materials, a recent proposal suggests that the spin channels of doping-induced conduction electrons couple to the phonon magnetic moment, resulting in phonon-induced electronic polarization <cit.>. Despite these developments, a microscopic theory that can quantitatively predict the experimentally found giant effective magnetic moments of phonons is still missing.
In this study, we develop a microscopic model for effective phonon magnetic moments in paramagnetic and magnetic materials. Our model is based on orbit-lattice coupling, where chiral phonon modes induce transitions between different orbital states, similar to the Raman mechanism of spin relaxation <cit.>. We perform a comprehensive group-theoretical analysis to identify the possible couplings between chiral phonon modes and orbital transitions and apply it to two distinct cases: First, we apply our model to the well-known case of 4f paramagnetic rare-earth trihalides, in which the spin-orbit coupling is much larger than the CEF splitting. In these materials, a CEF excitation hybridizes with circularly polarized phonons, which allows them to obtain a large phonon magnetic moment. Here, our model is, for the first time, able to quantitatively predict the giant phonon Zeeman splittings that were measured already half a century ago <cit.>, using only microscopic parameters in combination with results from first-principles calculations. Second, our model predicts that a similar phonon Zeeman effect can also occur in d-orbital magnets, where the CEF splitting of the e_g and t_2g orbits is much larger than the spin-orbit coupling. In d-orbital systems, the hybridization occurs between orbital excitations connecting multiplets split by spin-orbit coupling or lattice distortions and circularly polarized phonons. We predict that this mechanism can lead to large effective phonon magnetic moments and, therefore, phonon Zeeman splittings when the energies of the orbital transitions and the phonons become comparable.
The manuscript is organized as follows. In Sec. <ref>, we develop a microscopic model for the coupling of doubly degenerate modes to orbital transitions and show that it leads to a splitting into circularly polarized phonons with opposite chirality in an applied magnetic field. We furthermore derive an expression for the effective magnetic moment that the chiral phonons obtain through this coupling and show that it naturally leads to the phenomenological expression for the phonon Zeeman splitting in the limit of small magnetic fields. In Sec. <ref>, we apply our model to the 4f paramagnet CeCl_3 and predict the effective phonon magnetic moment and the phonon Zeeman splitting for its doubly degenerate chiral phonon modes. In Sec. <ref>, we apply our model to 3d magnets and predict the effective phonon magnetic moment and phonon Zeeman splitting for the example of the transition-metal oxide CoTiO_3 in both its paramagnetic and antiferromagnetic phases. In Sec. <ref>, we conclude with a discussion of the results.
§ PHONON ZEEMAN SPLITTING AND EFFECTIVE MAGNETIC MOMENTS
In this section, we discuss a detailed microscopic model for the Zeeman splitting of doubly degenerate zone-center phonon modes.
This splitting arises from the hybridization of doubly degenerate phonon modes with orbital excitations on a magnetic ion when the degeneracy of Kramers pairs is lifted. The new phonon modes have finite and opposite chirality, determined by the sign of the time-reversal-symmetry-breaking term. This model is based on the early work of Ref. <cit.> and can be described by the Hamiltonian:
H=H_el+H_ph+H_el-ph.
Here, H_el is the electronic Hamiltonian for the magnetic ion, H_ph is the phonon Hamiltonian for a degenerate phonon mode, and H_el-ph is the electron-phonon coupling term. We consider a magnetic ion with doubly degenerate electronic levels (Kramers doublets), which are eigenstates of the total electronic angular momentum J, and which are split in energy due to CEF or spin-orbit coupling. The electronic Hamiltonian can therefore be written as
H_el=∑_iε_i |ψ_i⟩⟨ψ_i|,
where ε_i is the energy of state i. These states are given by the ground-state and excited-state Kramers doublets, and we will look at two Kramers doublets at a time. We denote the states of the ground-state doublet by
|ψ_1⟩ =|J=J_α,m_j=m_j^α⟩,
|ψ_2⟩ =|J=J_α,m_j=-m_j^α⟩,
and those of the excited-state doublet by
|ψ_3⟩ =|J=J_β,m_j=m_j^β⟩,
|ψ_4⟩ =|J=J_β,m_j=-m_j^β⟩.
In the absence of a magnetic field, ε_1=ε_2 and ε_3=ε_4.
For the phonon part, we focus on one doubly degenerate phonon mode at a time, where the Hamiltonian containing both orthogonal components, a and b, is given by
H_ph=ω_0 (a^† a+ b^† b).
where ω_0 is the frequency of the two components of the doubly degenerate phonon mode and we set ħ=1. Here, a^†, b^† (a, b) are the bosonic creation (annihilation) operators of the two orthogonal components. These phonons can interact with the electronic states of the magnetic ion and mix different Kramers doublets due to orbit-lattice coupling. On a microscopic level, this orbit-lattice coupling originates from the modification of the crystal electric field by the lattice vibrations. It can be obtained by expanding the crystal field to first order in the atomic displacements along the eigenvectors of a phonon mode
and can be written as
H_el-ph=∑_Γ_α Q_Γ_αÔ_Γ_α
where Ô_Γ_α is an operator acting on electronic states and Q_Γ_α is the displacement associated with phonon mode α with irreducible representation Γ. For CEF excitations, where both doublets belong to the same multiplet, these operators take the form of Steven's operators <cit.> but here we keep it more general to include the possibility of spin-orbit excitations. In our microscopic model, these operators are computed from the changes of the Coulomb potential around the magnetic ion, where we treat all ions as point charges. We ignore higher-order corrections in the lattice displacement, which would lead to higher-order scattering processes that are not considered here.
Within this expansion, the electron-phonon coupling term can be written as
H_el-ph=(a^†+a)Ô_a+(b^†+b)Ô_b,
where Ô_a/b can couple different electronic states. The form of these operators is determined by time-reversal symmetry and can be expressed as
Ô_a =g_a|ψ_1⟩⟨ψ_3|-g_a^*|ψ_2⟩⟨ψ_4|+h.c.,
Ô_b =g_b|ψ_1⟩⟨ψ_3|-g_b^*|ψ_2⟩⟨ψ_4|+h.c.
where the value of g_a/b depends on the strength of the orbit-lattice coupling. These couplings are illustrated in Fig. <ref>(a).
There are further coupling terms that mix the states ψ_1 (ψ_2) with ψ_4 (ψ_3), which in most cases turn out to be zero, however. Heuristically, this can be understood on the basis of angular momentum transfer. The chiral superposition of two components of a doubly degenerate mode possesses angular momentum ± lħ and thus only mixes electronic states for which the change in angular momentum is given by |Δ m_j|=l. This restricts the number of terms in Eq. (<ref>), and it thus suffices to take into consideration only the transitions between |ψ_1⟩(|ψ_2⟩) and |ψ_3⟩(|ψ_4⟩). We will discuss the particular form of this mixing for specific examples in Secs. <ref> and <ref>.
The electron-phonon interaction, therefore, manifests as orbit-lattice coupling and hybridizes phonons and electronic excitations, which modifies the phonon frequencies. The modification of the phonon spectrum can be obtained using a Green's-function formalism. For the non-interacting case, described by H_ph, the bare phonon Green's function is given by
𝐃_0(ω)=[ D_0^aa(ω) 0; 0 D_0^bb(ω) ],
where
D_0^aa(ω)=D_0^bb(ω)=2ω_0/ω^2-ω_0^2,
and the phonon energies are trivially retrieved by solving Det(𝐃_0^-1(ω))=0. Including interactions, the full phonon Green's function is given by
𝐃^-1(ω) =𝐃_0^-1(ω)-Π(ω),
where the phonon self-energy matrix Π(ω) contains corrections from the orbit-lattice coupling that are given by
Π^aa =4π |g_a|^2( f_13ε_13/ω^2-ε_31^2+ f_24ε_24/ω^2-ε_42^2),
Π^bb =4π |g_b|^2( f_13ε_13/ω^2-ε_31^2+ f_24ε_24/ω^2-ε_42^2),
Π^ab =(Π^ba)^*=Π^ab_Re+i Π^ab_Im.
The real and imaginary parts of the mixed term, Π^ab, are given by
Π^ab_Re =4πRe(g_ag_b^*)( f_13ε_13/ω^2-ε_31^2+ f_24ε_24/ω^2-ε_42^2),
Π^ab_Im =4πIm(g_ag_b^*)( f_13ω/ω^2-ε_31^2- f_24ω/ω^2-ε_42^2),
Π^ab =2π(g_ag_b^* f_13/ω-ε_13-g_a^*g_b f_13/ω-ε_31+g_a^*g_b f_24/ω-ε_24-g_ag_b^* f_24/ω-ε_42).
Here, ε_ij=ε_i-ε_j is the energy difference between the states i and j and f_ij=f_i-f_j is the difference in their occupancies, which are determined by thermal populations of these levels.
All of the terms in the self-energy matrix introduce corrections to phonon energies. The phonon degeneracy can be lifted if the two diagonal terms are unequal. Eq. (<ref>) therefore becomes
𝐃^-1(ω)=[ ω^2-ω_0^2/2ω_0-g̃^2(f_1Δ_1/ω^2-Δ_1^2+f_2Δ_2/ω^2-Δ_2^2) ig̃^2(-f_1ω/ω^2-Δ_1^2+f_2ω/ω^2-Δ_2^2); -ig̃^2(-f_1ω/ω^2-Δ_1^2+f_2ω/ω^2-Δ_2^2) ω^2-ω_0^2/2ω_0-g̃^2(f_1Δ_1/ω^2-Δ_1^2+f_2Δ_2/ω^2-Δ_2^2) ],
where we redefine g̃^2≡4π g^2, Δ_1≡ε_31, Δ_2≡ε_42, and we assume the excited-state Kramers doublet to be unoccupied, f_3=f_4=0. Please see the Appendix for a detailed derivation of Eq. (<ref>). The off-diagonal elements arise from the orbital transitions shown in Fig. <ref>(a) and can be understood in terms of the Feynman diagrams shown in Fig. <ref>.
With no external magnetic field applied, the occupancies of states 1 and 2 are equal, f_1=f_2≡ f_0/2 (f_0 is the occupancy of the ground-state manifold), as are the energies of the two transitions, 1→ 3 and 2→ 4, Δ_1=Δ_2≡Δ.
Under these conditions, the two contributions shown in the two panels of Fig. <ref>(a) are equal in magnitude and opposite in sign, and thus the off-diagonal terms of the self-energy matrix vanish. As a result, phonons and electronic excitations hybridize to form doubly degenerate states with primarily electronic character and lower energies and states with primarily phononic character and higher energies, as illustrated in Fig. <ref>(b) for B=0. We solve Det(𝐃^-1(B=0))=0 to obtain the frequencies of the hybridized states with primarily phononic and electronic character, Ω_ph and Ω_el,
Ω_ph ≡ω_ph(B=0) = ( (ω_0^2+Δ^2)/2 + √( ((ω_0^2-Δ^2)/2)^2 + 2g̃^2 f_0 ω_0 Δ ) )^1/2 ,
Ω_el ≡ω_el(B=0) = ( (ω_0^2+Δ^2)/2 - √( ((ω_0^2-Δ^2)/2)^2 + 2g̃^2 f_0 ω_0 Δ ) )^1/2 .
The energy levels are depicted schematically in Fig. <ref>.
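As a simple numerical cross-check of the zero-field expressions above, the hybridized frequencies can also be obtained as the positive roots of Det(𝐃^-1(ω))=0, i.e. of (ω^2-ω_0^2)(ω^2-Δ^2)=2g̃^2 f_0 ω_0Δ; the Python sketch below uses purely illustrative parameters (ħ=1).

import numpy as np

# Illustrative parameters in arbitrary energy units: bare phonon frequency w0,
# electronic transition energy delta (< w0 here), coupling gt and occupancy f0.
w0, delta, gt, f0 = 1.0, 0.8, 0.05, 1.0

s = np.sqrt(((w0**2 - delta**2) / 2.0)**2 + 2.0 * gt**2 * f0 * w0 * delta)
omega_ph = np.sqrt((w0**2 + delta**2) / 2.0 + s)   # upper, mainly phonon-like branch
omega_el = np.sqrt((w0**2 + delta**2) / 2.0 - s)   # lower, mainly electronic branch

# Cross-check against the quartic (w^2 - w0^2)(w^2 - delta^2) - 2 gt^2 f0 w0 delta = 0.
coeffs = [1.0, 0.0, -(w0**2 + delta**2), 0.0,
          w0**2 * delta**2 - 2.0 * gt**2 * f0 * w0 * delta]
roots = np.real(np.roots(coeffs))                  # four real roots: +-omega_ph, +-omega_el
print(omega_ph, omega_el, np.sort(roots[roots > 0.0]))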
We next apply an external magnetic field, 𝐁=B ẑ, which lifts the degeneracies of the Kramers doublets, ε_12≠ 0 and ε_34≠ 0. This subsequently modifies the electronic transition energies as
Δ_1=Δ-γ B, Δ_2=Δ+γ B,
where γ=g^el_ex-g^el_gs contains the g-factors of the ground- and excited-state doublets. Lifting the degeneracy of the ground-state doublet leads to an asymmetric population of the ground-state energy levels, f_12≠ 0. This population difference is an odd function of the magnetic field B, and we will show in the following that it is directly proportional to the magnetization of the system.
The energies of phonon and electronic excitation branch can be obtained by solving Det(𝐃^-1(B≠ 0))=0, which yields
(ω^2-Ω_ph^2)(ω^2-Ω_el^2)
± 2 ω(γ B(ω^2-ω^2_0)+g̃^2ω_0 f_21)
+γ B(γ B (ω^2-ω^2_0)+2g̃^2ω_0f_21)=0.
For small magnetic fields, we can assume a solution of the form
ω_ph^±=Ω_ph(1∓η),
and by substituting it into Eq. (<ref>), we get
Ω_phη=(γ B(Ω_ph^2-ω^2_0)+g̃^2ω_0 f_21)/(Ω_ph^2-Ω_el^2+γ^2B^2).
Please see the Appendix for a detailed derivation. Consequentially, we obtain an expression for the splitting of the phonon frequencies,
(ω_ph^+-ω_ph^-)/Ω_ph=2(γ B(Ω_ph^2-ω^2_0)/ω_0+g̃^2 f_21)/(√((ω_0^2-Δ^2)^2+8g̃^2f_0ω_0Δ)+γ^2 B^2).
The complex hybridization of energy levels leading to this splitting is depicted in Fig. <ref>(b), and it arises from a combination of two factors: 1) a Zeeman shift of the electronic energy levels that is determined by the g-factor of the Kramers states in each manifold, and 2) a population imbalance between the ground-state energy levels that is directly related to a change in spin polarization (and subsequently magnetization) of the ion.
The net spin polarization of the ground state of the system depends on magnetic field B, temperature T, and also on the exchange interactions in the system. We will derive an explicit form for the population asymmetry in Secs. <ref> and <ref>, when considering the examples of paramagnets and magnets. For example, the population difference for the paramagnetic case is simply given by f_21=tanh(g^el_gsB/(k_B T)). On the other hand, for ferromagnetic or antiferromagnetic cases, its form is more complicated and can be derived by adding the exchange mean field. In all cases, in the limit B→ 0, we can write the population difference as linear in the magnetic field,
f_21≈χ B,
where χ is directly related to the magnetic susceptibility of the system.
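As an illustration of how the splitting emerges from the secular equation above, the sketch below expands Det(𝐃^-1(ω))=0 for B≠0 into a quartic polynomial in ω for each chirality (the ± sign) and extracts the phonon-like roots numerically. The parameter values, the choice γ=0.4 μ_B, and the use of the paramagnetic form f_21=tanh(2μ_B B/k_BT) are placeholders chosen for demonstration only.

```python
# Sketch: phonon branches at B != 0 from the secular equation
# (w^2 - W_ph^2)(w^2 - W_el^2) +/- 2w(gam*B*(w^2 - w0^2) + gt2*w0*f21)
#   + gam*B*(gam*B*(w^2 - w0^2) + 2*gt2*w0*f21) = 0,
# expanded into a quartic in w for each chirality. Placeholder parameters only.
import numpy as np

w0, Delta = 22.75, 16.0                   # meV (illustrative)
gt2, f0   = 4*np.pi*0.7**2, 1.0           # g~^2 and ground-doublet occupancy
mu_B, kBT = 0.05788, 0.862                # meV/T and meV (T = 10 K)
gam       = 0.4*mu_B                      # assumed g-factor difference (meV/T)

root = np.sqrt(((w0**2 - Delta**2)/2)**2 + 2*gt2*f0*w0*Delta)
W_ph2, W_el2 = (w0**2 + Delta**2)/2 + root, (w0**2 + Delta**2)/2 - root

def phonon_branch(B, sign):
    """Positive root of the secular equation closest to the B = 0 phonon energy."""
    f21 = np.tanh(2*mu_B*B/kBT)           # assumed paramagnetic population imbalance
    coeffs = [1.0,
              sign*2*gam*B,
              -(W_ph2 + W_el2) + (gam*B)**2,
              sign*(-2*gam*B*w0**2 + 2*gt2*w0*f21),
              W_ph2*W_el2 - (gam*B)**2*w0**2 + 2*gam*B*gt2*w0*f21]
    r = np.roots(coeffs)
    r = r[np.abs(r.imag) < 1e-9].real
    return r[np.argmin(np.abs(r - np.sqrt(W_ph2)))]

for B in (0.0, 2.0, 5.0):                 # tesla
    split = phonon_branch(B, +1) - phonon_branch(B, -1)
    print(f"B = {B:3.1f} T : |w+ - w-| = {abs(split):.3f} meV")
```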
As a result, for a small magnetic field, both terms in Eq. (<ref>) lead to a Zeeman splitting of the previously doubly degenerate phonon mode. We note that the splitting becomes more pronounced as the non-interacting phonon energy ω_0 approaches the electronic excitation energy Δ. Here, we can consider two different scenarios:
* Resonant case: Δ≈ω_0 such that |Δ-ω_0|≪g̃. In this case, the relative splitting is
(ω_ph^+-ω_ph^-)/Ω_ph≈(γ+g̃χ/√(2)) B/ω_0,
which depends linearly on the orbit-lattice coupling strength g̃.
* Off-resonant case: |Δ-ω_0|≫g̃. In this case, Ω_ph^2-ω_0^2≈g̃^2ω_0/|Δ-ω_0|, and thus the relative splitting is
(ω_ph^+-ω_ph^-)/Ω_ph(B=0)≈g̃^2(γ/|Δ-ω_0|+χ)/((ω_0-Δ)(ω_0+Δ_0)) B,
where the splitting depends quadratically on the orbit-lattice coupling strength g̃.
Due to the general mismatch between CEF transition frequencies and phonon frequencies, the off-resonant scenario is the more common one in materials. In the off-resonant case, the splitting diminishes as the energy difference between the phonon and the electronic excitation increases, which requires ω_0 and Δ to be at least of a similar order of magnitude to yield a significant effect.
So far, we discussed how the phonon energies are shifted but we have not investigated the consequences of the orbit-lattice coupling on the displacements associated with the eigenmodes. Assuming the off-resonant scenario, the phonon-displacement operators corresponding to the split phonon modes with frequencies ω_ph^+ and ω_ph^- of the interacting Green's function matrix are given by
Q_+ =1/√(2)(Q_a-iQ_b),
Q_- =1/√(2)(Q_a+iQ_b),
where Q_a∼(a+a^†) and Q_b∼(b+b^†) were the displacement operators corresponding to linearly polarized phonon modes a and b. This shows that the new modes, Q_+ and Q_-, correspond to circular superpositions of the two orthogonal components and have opposite chiralities. We would like to emphasize at this point that these zone-centered phonon modes become chiral due to time-reversal breaking. On the other hand, inversion symmetry breaking can allow chiral phonons at other high-symmetry points in the Brillouin zone, as studied in many two-dimensional hexagonal lattices <cit.>.
These chiral phonons exhibit Zeeman splitting as their energies change with an applied magnetic field. In the limit B→0, this splitting becomes linear in the magnetic field, which allows us to attribute an effective magnetic moment to the chiral phonons. We denote the effective magnetic moment of chiral phonons by μ_ph, and the splitting is accordingly given by
ω_ph^±=ω_0±μ_phB,
where the phonon magnetic moment can be expressed as
μ_ph=1/2∂(ω_ph^+-ω_ph^-)/∂ B|_B→ 0,
which can be evaluated by using Eq. (<ref>).
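A hedged numerical sketch of this definition is given below: the effective moment is obtained as a central finite difference of the approximate splitting formula at B→0, with the small γ^2B^2 term in the denominator neglected. The parameters are rough, CeCl_3-like placeholders, so the printed value is only indicative of the order of magnitude.

```python
# Sketch: mu_ph = (1/2) d(w+ - w-)/dB at B -> 0 via a central finite difference on the
# approximate splitting formula (gamma^2 B^2 neglected). Placeholder parameters only.
import numpy as np

w0, Delta, g = 22.75, 16.0, 0.7           # meV (illustrative)
gt2, f0      = 4*np.pi*g**2, 1.0
mu_B, kBT    = 0.05788, 0.862             # meV/T, meV (T = 10 K)
gam          = 0.4*mu_B                   # assumed g-factor difference (meV/T)

root = np.sqrt(((w0**2 - Delta**2)/2)**2 + 2*gt2*f0*w0*Delta)
W_ph = np.sqrt((w0**2 + Delta**2)/2 + root)

def splitting(B):
    """Absolute splitting w+ - w- (meV) from the small-B expression."""
    f21 = np.tanh(2*mu_B*B/kBT)
    return W_ph*2*(gam*B*(W_ph**2 - w0**2)/w0 + gt2*f21)/np.sqrt((w0**2 - Delta**2)**2 + 8*gt2*f0*w0*Delta)

dB = 1e-3                                  # tesla
mu_ph = 0.5*(splitting(dB) - splitting(-dB))/(2*dB)
print(f"mu_ph ~ {mu_ph/mu_B:.2f} mu_B  (order-of-magnitude estimate)")
```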
The magnitude of phonon Zeeman splitting obtained here relies on the coupling between orbital excitations and phonons. In order to have strong coupling between electronic and phonon degrees of freedom, the energies of these excitations should be of the same order of magnitude as the phonon energies. In the following sections, we will apply this model to rare-earth trihalide paramagnets and transition-metal oxide magnets to predict the Zeeman splitting and effective magnetic moments of chiral phonons in these materials.
§ CHIRAL PHONONS IN 4F PARAMAGNETS
The splitting of optical phonons in paramagnetic rare-earth compounds was extensively studied in the 1970s in a series of papers <cit.>. Amongst other compounds, it was shown that the rare-earth trihalide CeCl_3 exhibits a large splitting of doubly degenerate phonon modes in an external magnetic field. Recently, using the early experimental data on orbit-lattice coupling, it was predicted in Ref. <cit.>, that chiral phonons in this material can produce effective magnetic fields on the order of tens of tesla when coherently excited with ultrashort laser pulses. Subsequently, CeCl_3 has emerged as an interesting candidate to study magneto-phononic and phono-magnetic properties of chiral phonons. We will determine the microscopic origin of the orbit-lattice coupling and apply the model derived in the previous section to predict the Zeeman splittings and effective magnetic moments in this material. We stress that this is the first quantitative prediction using only microscopic parameters and ab-initio results, without the need for phenomenological theory or experimental data.
§.§ Structural and electronic properties of CeCl_3
The rare-earth trihalide CeCl_3 (Fig. <ref>(a)) belongs to space group no. 176 (point group 6/m) and its primitive unit cell contains eight atoms, two Ce^3+ ions located at the 2c Wyckoff positions (shown as Ce^3+_A and Ce^3+_B) and six Cl^- ions at the 6h Wyckoff positions (shown as Cl^-_1A,Cl^-_2A,Cl^-_3A,Cl^-_1B,Cl^-_2B,Cl^-_3B). The eight-atom unit cell leads to 21 optical phonons consisting of irreducible representations 2A_g+ 1A_u+2B_g+2B_u+1E_1g+3E_2g+2E_1u+1E_2u <cit.>.
Each Ce^3+ ion has nine nearest neighbors arranged in three different planes as shown in Fig. <ref>(a) for the Ce^3+ ion A.
The ground-state configuration of the Ce^3+ (4f^1) ion is given by a nearly free-ion configuration of a L=3,S=1/2 state in accordance with Hund's rule. The spin-orbit coupling splits this 14 dimensional space into J=5/2 and J=7/2 total angular momentum sectors and the ground-state is given by the six-dimensional J=5/2 (^2F_5/2) state. Since there is only one electron in the 4f orbitals, the wavefunctions of different states in this multiplet can be written as
|J=5/2,m_j=±5/2⟩= -√(1/7)|m_l=±2,m_s=±1/2⟩
+√(6/7)|m_l=±3,m_s=∓1/2⟩,
|J=5/2,m_j=±3/2⟩= -√(2/7)|m_l=±1,m_s=±1/2⟩
+√(5/7)|m_l=±2,m_s=∓1/2⟩,
|J=5/2,m_j=±1/2⟩= -√(3/7)|m_l=±0,m_s=±1/2⟩
+√(4/7)|m_l=±1,m_s=∓1/2⟩,
where |m_l,m_s⟩ is a 4f orbital state with orbital quantum number m_l and spin quantum number m_s. The CEF further splits it into three Kramers doublets |±5/2⟩, |±1/2⟩, and |±3/2⟩ with energies 0 meV, 5.82 meV, 14.38 meV, respectively <cit.>, as shown in Fig. <ref>(b).
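As a consistency check (not part of the original analysis), the quoted decomposition coefficients can be reproduced from standard Clebsch-Gordan coefficients for L=3 coupled to S=1/2, for example with sympy:

```python
# Consistency check of the quoted L=3 (x) S=1/2 -> J=5/2 decomposition coefficients.
from sympy import S
from sympy.physics.quantum.cg import CG

L, J = 3, S(5)/2
for mj in (S(5)/2, S(3)/2, S(1)/2):
    up   = CG(L, mj - S(1)/2, S(1)/2,  S(1)/2, J, mj).doit()  # coefficient of |m_l = m_j - 1/2, spin up>
    down = CG(L, mj + S(1)/2, S(1)/2, -S(1)/2, J, mj).doit()  # coefficient of |m_l = m_j + 1/2, spin down>
    print(f"m_j = {mj}: {up}, {down}")
# Expected pattern (Condon-Shortley convention):
# -sqrt(1/7), sqrt(6/7); -sqrt(2/7), sqrt(5/7); -sqrt(3/7), sqrt(4/7).
```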
§.§ Microscopic model for the orbit-lattice coupling
Previous Raman studies have shown that the doubly degenerate modes E_1g and E_2g split into left- and right-handed circularly polarized chiral phonon modes when a magnetic field is applied along the c-axis of the crystal, perpendicular to the plane of the components of the doubly degenerate phonon modes <cit.>. In CeCl_3, the E_1g mode shows the largest splitting in experiment, and we will therefore first focus our analysis on this mode, which involves the displacement of Cl^- ion along the c-axis. As there is only one E_1g phonon, the displacement pattern for this mode can be obtained directly from group theory and is given by
E_1g^1(a) =Q_a/2√(6)(0,0, 2 ẑ,-ẑ,-ẑ,-2 ẑ,ẑ,ẑ),
E_1g^1(b) =Q_b/2√(2)(0,0,0,ẑ,-ẑ,0,-ẑ,ẑ),
in the basis (Ce_A^3+,Ce_B^3+,Cl_1A^-,Cl_2A^-,Cl_3A^-,Cl_1B^-,Cl_2B^-,Cl_3B^-) and Q_a/b are the normal mode coordinates (amplitudes) of the two components a and b in units of Å√(amu), where amu is the atomic mass unit. We show the atomic displacements in Fig. <ref>(a). The displacements of the Cl^- ions modify the Coulomb potential around the Ce^3+ ions which perturb the electronic Hamiltonian on the magnetic ion.
We use a point-charge model to describe the crystal electric field of the system, in which the potential energy of an electron at position 𝐫 from Ce^3+ nucleus due to the n^th Cl^- ion is given by:
V(𝐫,𝐑_n)=e^2/4πϵ_01/|𝐑_n-𝐫|
where 𝐑_n=𝐑_0,n+𝐮_n is the position of the n^th ligand ion relative to the Ce^3+ nucleus, which depends on the equilibrium position 𝐑_0,n and the relative lattice displacement 𝐮_n arising from the phonon. The perturbation introduced by a given phonon mode can be obtained by a Taylor expansion in the lattice displacements 𝐮_n. After expressing these displacements in terms of the normal coordinates Q_a,b and summing over all nearest-neighbor ligands, the first-order term is given by:
V(E_1g(a)) =[-0.06 xz+0.16 yz]Q_a eV/Å^3√(amu),
V(E_1g(b)) =[0.16 xz+0.06 yz]Q_b eV/Å^3√(amu).
Now, using spherical coordinates to express xz (yz)=r^2cosθsinθcosϕ (sinϕ), and writing the states in Eqs. (<ref>)-(<ref>) in terms of the 4f basis states with wavefunctions ⟨ r|m_l,m_s⟩=R(r)Y_3^m_l(θ,ϕ), where R(r) is the radial part and Y_3^m_l(θ,ϕ)
is a spherical harmonic (see the Appendix for details), we evaluate the matrix elements of the above perturbations, which are given by:
H_1(xz) =-2/7√(5)⟨ r^2⟩[ |5/2,±5/2> |5/2,±3/2>; |5/2,±5/2> 0 ± 1; |5/2,±3/2> ± 1 0 ],
H_1(yz) =2/7√(5)⟨ r^2⟩[ |5/2,±5/2> |5/2,±3/2>; |5/2,±5/2> 0 i; |5/2,±3/2> -i 0 ],
where ⟨ r^2⟩=∫_0^∞ r^2|R(r)|^2r^2dr is the mean of the square of 4f electron radius. Now, we express the phonon displacement as
Q_a =ħ/√(ħω_0)(a+a^†)=0.06Å√(eV.amu)/√(ħω_0)(a+a^†),
Q_b =ħ/√(ħω_0)(b+b^†)=0.06Å√(eV. amu)/√(ħω_0)(b+b^†),
where we restored ħ and ħω_0 is the energy of phonon.
The orbit-lattice coupling operators connecting different electronic states from Eq. (<ref>) and Eq. (<ref>) become
Ô_a =ge^iθ|+5/2⟩⟨+3/2|-ge^-iθ|-5/2⟩⟨-3/2|+h.c.,
Ô_b =ige^iθ|+5/2⟩⟨+3/2|+ige^-iθ|-5/2⟩⟨-3/2|+h.c..
Here, we combined Eqs. (<ref>)-(<ref>) in order to obtain g = -√(0.16^2+0.06^2)2/7√(5)⟨ r^2⟩0.06/√(ω_0)eV^3/2/Å^2
and tan(θ)=0.16/0.06. Compared to the general expression of the orbit-lattice coupling in Eq. (<ref>) and Eq. (<ref>), we find that g_a=ig_b=ge^iθ. Following our previously derived model, this orbit-lattice coupling leads to a splitting of the E_1g mode into two circularly polarized phonon modes with opposite chirality. The split modes have orbital angular momenta of ±ħ, arising from the superpositions E_1g(a)± i E_1g(b) obtained from Eq. (<ref>) and Eq. (<ref>). This can also be seen by applying a C_3(z) rotation operation around each Ce^3+ site, and the mode is an eigenstate of the C_3(z) operator with eigenvalue e^i2π/3. The displacements associated with the two chiral phonon modes are depicted in Fig. <ref>(b) alongside the orbital transitions with which they hybridize. The orbital angular momentum for these modes arises from the relative phase between neighboring atoms.
§.§ Phonon Zeeman splitting and effective phonon magnetic moment
We now proceed to compute the energy splitting and effective phonon magnetic moments that can be associated with the chiral phonon modes. For the E_1g phonon mode, ω_0=22.75 meV and g= 7 meV/Å^2 ⟨r^2⟩. With a mean-square radius of ⟨r^2⟩∼ 0.1 Å^2, we get g∼ 0.7 meV. Now, the splitting of the phonon modes is given by Eq. (<ref>), where the relative contributions of the two additive terms depend on the values of the electron-phonon coupling g, the energy difference between the electronic excitation and the phonon mode, ω_0-Δ, and the occupancy difference f_21. CeCl_3 is a paramagnetic system and the population difference between the two states in the ground-state Kramers doublet is therefore given by,
f_21=tanh(5/2g^el_5/2B/k_BT),
where
g^el_5/2≈ 4/5μ_B is the electronic g-factor for the J=5/2 states. In the regime μ_B B≪ k_BT, we can approximate the population difference to linear order, f_21≈ 2μ_B B/(k_BT).
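A short sketch of this population imbalance and its linearization is given below; the only inputs are μ_B and k_B in meV units, and the factor 2μ_B follows from (5/2)×(4/5)μ_B as in the text. The field and temperature values are illustrative.

```python
# Population imbalance of the ground-state doublet and its linearization (illustrative).
import numpy as np

mu_B, k_B = 0.05788, 0.08617              # meV/T, meV/K

def f21(B, T):
    return np.tanh(2.0*mu_B*B/(k_B*T))    # (5/2)*g_{5/2} ~ 2 mu_B, as in the text

B = 1.0                                   # tesla
for T in (2.0, 10.0, 300.0):              # kelvin
    print(f"T = {T:5.1f} K : f21 = {f21(B, T):.4f}, linear approx = {2*mu_B*B/(k_B*T):.4f}")
```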
In order to calculate the splitting, we assume the off-resonant condition, as the electronic excitation energy Δ≈ 16 meV results in |ω_0-Δ|≫g̃. This allows us to approximate Eq. (<ref>) by Eq. (<ref>), which gives
γ B (Ω_ph^2-ω^2_0)+g̃^2 f_21ω_0≈g̃^2 ω_0 (γ B/ω_0-Δ_0+2μ_B B/k_BT),
where γ=(5/2-3/2)g^el_5/2=2/5 μ_B.
As a result, the relative contribution of the two terms in Eq. (<ref>) depends on the temperature. In the present case for the E_1g mode, the second term dominates in the low-temperature regime, but the two contributions become equal when k_B T≈ 5(ω_0-Δ_0)=50 meV, i.e. at 500 K.
We therefore only consider the contribution from the second term which depends on the difference in occupation of two states of the Kramers doublet |±5/2⟩.
The resulting splitting is shown in Fig. <ref>(a). We obtain a value of 1.24 meV (10 cm^-1) for the saturated phonon splitting, which is reasonably close to the value of 18 cm^-1 observed in Ref. <cit.>. The relative splitting reaches more than 5% at saturation, and the magnitude of the magnetic field required to reach saturation increases with increasing temperature according to the tanh dependence of f_21. The value of the effective phonon magnetic moment μ_ph is inversely proportional to the temperature, as shown in Fig. <ref>(c), and our model predicts μ_ph=2.9 μ_B at T=10 K, several orders of magnitude higher than the moments produced by purely ionic circular charge currents <cit.>. The values of the saturation splitting and μ_ph at three different temperatures are presented in Table <ref>; the moments range between 0.1 μ_B at room temperature and 9.3 μ_B at 2 K.
A similar analysis can be done for the E_2g^1 (12.1 meV) and E_2g^2 (21.5 meV) phonon modes, whose displacements are depicted in Fig. <ref>(c). Here, because of the existence of two phonon modes with the same symmetry, the displacements cannot be unambiguously determined from group theory, and we compute the phonon eigenvectors using density functional theory calculations published in prior work <cit.>. Using the point-charge model, we calculate the orbit-lattice coupling for these phonons, please see the Appendix for details. These phonons couple with orbital excitations between |m_j=±5/2⟩ and |m_j=±1/2⟩, as illustrated in Fig. <ref> (c) and it leads to phonon Zeeman splitting, as shown in Fig. <ref>(b).
Our model predicts effective phonon magnetic moments of μ_ph=0.4 μ_B and μ_ph=0.27 μ_B at T=10 K and saturation splittings of 0.18 meV (1.5 cm^-1) and 0.12 meV (1 cm^-1) for E_2g^1 and E_2g^2, respectively, as shown in Table <ref>. According to Ref. <cit.>, the observed saturation splitting of the E_2g^1 mode is 0.87 meV (7 cm^-1), which is about four times the value obtained from our microscopic model. On the other hand, no splitting was observed for the E_2g^2 mode in the same experiment. Here again, the observed phonon magnetic moment decreases with temperature, as shown in Fig. <ref>(c). The disagreement could be due either to the crudeness of our point-charge model or to the resolution of the experiment, which is limited to 1 cm^-1.
§ CHIRAL PHONONS IN 3D MAGNETS
In the previous section, we discussed the example of rare-earth trihalides, where the giant magnetic response of chiral phonons originates from the coupling of CEF-split electronic levels with chiral optical phonons. In this section, we show that chiral optical phonons in 3d-electron magnets with octahedral ligand configuration can yield a similarly strong response. We begin with a general analysis of orbital configurations and then perform calculations for the concrete example of CoTiO_3.
In materials with octahedral ligand configurations around the magnetic ion, the CEF is usually strong, with a splitting of the e_g and t_2g orbitals of the order of a few eV for most materials. This renders the coupling between the e_g-t_2g electronic excitations and phonons weak and thus makes a magnetic response arising directly from these transitions infeasible. In many materials, however, the t_2g and e_g manifolds are split further by either lattice distortions or spin-orbit coupling <cit.>. Both cases host electronic transitions with energies comparable to those of the optical phonon modes in the system, which allows them to couple strongly and hybridize. Consider a magnetic transition-metal ion surrounded by a trigonally distorted octahedron of ligand ions, as depicted in Fig. <ref>(a); such configurations are common in face-sharing octahedral geometries.
In this scenario, the site symmetry for a magnetic ion is reduced to
C_3 from O_h and the t_2g manifold splits according to l_z, where the z-axis is oriented along the C_3 rotation axis, as shown in Fig. <ref>(b).
The energy of the states depends on the sign of the trigonal distortion, and the t_2g orbitals split into the following two manifolds
|l_z=± 1⟩ =-1/√(3)(d_xy∓ id_x^2-y^2)±i/√(6)(d_xz∓ id_yz),
|l_z=0⟩ =d_3z^2-r^2.
On the other hand, if the spin-orbit coupling is stronger than the splitting induced by the trigonal distortion, we need to consider eigenstates characterized by the total angular momentum J, as shown in Fig. <ref>(c). In this limit, the trigonal distortion introduces a perturbation of the form H_tri=δ J_z^2 to the Hamiltonian of the magnetic ion, which splits the J=3/2 multiplet into two manifolds with m_j=± 1/2 and m_j=± 3/2, but does not affect the J=1/2 states.
In both cases, there are low-lying electronic transitions that involve a transfer of angular momentum of Δ m=± 1, namely the transition from |l_z=±1⟩ to |l_z=0⟩ in the case of a trigonal distortion, and |J=1/2,m_j=±1/2⟩ to |J=3/2,m_j=±3/2⟩ in the case of spin-orbit coupling. These transitions are similar in nature to transitions from |m_j=± 5/2⟩ to |m_j=± 3/2⟩ in the case of the rare-earth trihalides in the previous section.
Next, let us consider an optical phonon mode that can be characterized by the E_g irreducible representation of the C_3 point group. There are many basis functions (corresponding to displacement patterns) that transform according to this irreducible representation for the system shown in Fig. <ref>. One such possibility is the xy in-plane motion of a magnetic ion M located at the center of the trigonally distorted octahedra. The two components of the E_g mode in this case are simply represented by the motion of the ion, M, in the x and y directions, respectively, and result in the following form of perturbation to the CEF,
V(E_g(a)) ∝ xz Q_a eV/Å^3√(amu),
V(E_g(b)) ∝ yz Q_b eV/Å^3√(amu),
where Q_a/b
are the normal mode coordinates associated with E_g(a/b), similar to Eqs. (<ref>) and (<ref>) in the case of the rare-earth trihalides discussed in the previous section. This perturbation ultimately results in a coupling similar to the one discussed in Eqs. (<ref>) and (<ref>) of Sec. <ref> and can lead to phonon chirality and a phonon Zeeman effect.
The magnitude of this effect depends on the phonon energies and on the wavefunctions and energies of the electronic states involved in the low-energy excitations, which are material specific. However, as discussed in the previous section, the phonon magnetic moment can be significantly larger if the electronic excitation energy is close to the phonon energy. The typical energy scale associated with SOC and the trigonal distortion is usually in the range of 10-100 meV for d-electron systems, which puts these electronic excitations in close proximity to optical phonons and hence makes the above effect feasible.
Additionally, most transition-metal systems have significant superexchange interactions with neighboring spins originating in the large spatial extent of d-orbitals. As a result, one should expect a rather different temperature trend for the phonon magnetic moment μ_ph below magnetic ordering temperatures, which can be evaluated by including the exchange mean-field contributions.
We next perform calculations for the concrete example of the XY-quantum magnet CoTiO_3 which is known to have spin-orbit excitations with energies comparable to a range of optical phonons in the system <cit.>.
§.§ Structural and electronic properties of CoTiO_3
The transition-metal oxide CoTiO_3 crystallizes in an ilmenite structure with a trigonal space group R3 (point group 3). Each of the Co^2+ ions is surrounded by a trigonally distorted octahedral cage of O^2- ions, as shown in Fig. <ref>(a). The rhombohedral unit cell contains two Co^2+ ions, which we denote by A and B. The Co^2+ ions are arranged in two-dimensional slightly buckled honeycomb lattices, which are stacked in an ABC sequence along the c-axis, with neighboring planes displaced diagonally by one-third of the unit cell. Below the Néel temperature of T_N = 38 K, the magnetic moments order ferromagnetically within the ab-planes and are coupled antiferromagnetically along the c-axis <cit.>. The rhombohedral unit cell contains 10 ions, and group theory predicts ten Raman-active phonons, Γ_R=5A_g⊕5E_g and eight infrared-active modes Γ_IR=4A_u⊕4E_u where A and E are non-degenerate and doubly degenerate modes, respectively <cit.>.
The magnetic properties of CoTiO_3 are determined by the three unpaired spins on the magnetic Co^2+ (3d^7).
The spin-orbit coupling and trigonal distortion have the same energy scale in CoTiO_3 <cit.>, and the ground state of Co^2+ (3d^7), S=3/2 can be considered an effective S̃=1/2 spin state, as shown in Fig. <ref> (b). The two low-energy manifolds are predominantly composed of j_eff=1/2 and j_eff=3/2 angular momentum states, respectively, and their wavefunctions are given by:
|ψ_1/2⟩ = |J=1/2,m_j=±1/2>
= 1/√(2)|m_l=∓1̃,m_s=±3/2⟩
-1/√(3)|m_l=0̃,m_s=±1/2⟩
+1/√(6)|m_l=±1̃,m_s=∓1/2⟩
and
|ψ_3/4⟩= |J=3/2,m_j=±3/2>
= √(3/5)|m_l=0̃,m_s=±3/2⟩
-√(2/5)|m_l=±1̃,m_s=±1/2⟩
where the states |m_l=ĩ,m_s⟩ arise from the effective l_eff=1 and S=3/2 states comprised of three holes, and m_l, m_s denote the magnetic quantum numbers along the z-direction of the local coordinate system of the two Co^2+ ions (see Fig. <ref> in the Appendix).
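As a quick arithmetic check (not part of the original text), the quoted coefficients of both states are properly normalized:

```python
# Quick normalization check of the quoted coefficients (illustrative only).
from fractions import Fraction
print(Fraction(1, 2) + Fraction(1, 3) + Fraction(1, 6))   # |psi_{1/2}>: sums to 1
print(Fraction(3, 5) + Fraction(2, 5))                    # |psi_{3/4}>: sums to 1
```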
For T>T_N, both manifolds remain doubly degenerate and the two manifolds are separated in energy by 23.5 meV <cit.> as measured in neutron-diffraction experiments. These low-energy excitations between spin-orbit split states are very close in energy with two E_g optical phonons at 26 meV and 33 meV <cit.>. This close proximity in energy enables the hybridization between phonons and spin-orbit excitations, which in turn can produce phonon chirality and therefore a phonon Zeeman effect.
§.§ Microscopic model for the orbit-lattice coupling
Phonons associated with irreducible representations other than fully symmetric ones lower the site symmetry of the Co^2+ and hence can mix different electronic states. Here, we consider the two E_g modes with energies of 26 meV and 33 meV that are close to the orbital transitions.
As in Sec. <ref>, we first evaluate the strength of the coupling using a point-charge model with atomic displacements of phonons obtained from group theory and first-principles calculations.
We first find the basis functions for different phonon modes by using projection operators for the irreducible representation E_g. The phonon displacements of two E_g modes under consideration are a superposition of these basis functions and cannot be obtained from purely group theoretical tools.
However, first-principles calculations in previous works <cit.> allowed us to approximate the lattice displacements in terms of the basis functions we obtained. The first E_g mode is predominantly associated with the motion of the Co^2+ ion in the ab-plane, which, in terms of the basis functions of the two components of this E_g mode, can be approximated as the x and y motion of the Co^2+ ion, as shown in Fig. <ref>(a).
For the E_g^2 (33 meV) mode, we tried several superpositions of different basis functions and considered the one whose displacements (Fig. <ref>(b)) match the second E_g mode in previous first-principles work <cit.>.
This E_g mode primarily includes the motion of ligand O^2- ions.
For the displacements shown in Fig. <ref>(b), we find that the modification of the CEF around the Co^2+ ions on the A/B sites is given by:
V^A/B (E_g^1(a))=[-0.56 eVxz-0.51 eV (x^2-y^2)+𝒪(r^3)]Q_a,
V^A/B (E_g^1(b))=±[1.0 eV xy-0.56 eV yz+𝒪(r^3)]Q_b,
for the E_g^1 mode, and
V^A/B (E_g^2(a))=[-0.04 eV xy-0.61 eV xz]Q_a +
[0.72 eV yz-0.14 eV (x^2-y^2)+𝒪(r^3)]Q_a
V^A/B (E_g^2(b))=±[0.28 eV xy-0.72 eV xz-0.61 eV yz]Q_b
±[-0.02 eV (x^2-y^2)+𝒪(r^3)]Q_b,
for the E_g^2 mode
where Q_a/b are the normal mode coordinates associated with the two components of the E_g phonon, expressed in units of Å·√(amu) in a local coordinate system around each Co^2+ ion. The z- and y-axes for the B sites are opposite to those of the A sites (see the Appendix for more details), which explains the extra negative sign in the b-component correction for the B sites.
In the basis |ψ_1/2⟩=|J=1/2,m_j=±1/2⟩ and |ψ_3/4⟩=|J=3/2,m_j=±3/2⟩, the resulting coupling (also shown in Fig. <ref>(c)) takes the following form for the two components of the E_g mode:
Ô^A/B_E_g^a =ge^iϕ_ab|ψ_1^A⟩⟨ψ_3^A|-ge^-iϕ_ab|ψ_2^A⟩⟨ψ_4^A|+ h.c.,
Ô^A/B_E_g^b =± ige^iϕ_ab|ψ_1^A⟩⟨ψ_3^A|± ige^- iϕ_ab|ψ_2^A⟩⟨ψ_4^A|+ h.c.,
where g for two E_g modes are
g_E_g^(1)≈ 0.3 r_0^2/Å^2 meV, and g_E_g^(2)≈ 0.4 r_0^2/Å^2 meV,
where r_0^2=⟨ r^2 ⟩≈ 1 Å^2 for 3d orbitals in Co^2+ <cit.> (see Appendix <ref>). The phase ϕ_ab depends on the ratio of coefficients of xz (xy) and yz (x^2-y^2) terms but does not affect the self-energy terms.
To obtain the values of coupling strength g, we first express the phonon displacements in terms of the phononic creation and annihilation operators, a, a^†, b,b^†,
Q_a=ħ/√(ħω_0)(a+a^†)=0.06Å√(eV amu)/√(ħω_0)(a+a^†).
where ħω_0 is the phonon energy (we restored ħ for the purpose of this equation), and then using Eqs. (<ref>-<ref>) and by expressing the states in Eq. (<ref>) and Eq. (<ref>) in terms of three-particle d-orbital states, we evaluate the matrix elements for the crystal-field perturbation term between different states (see Appendix for details). This coupling term maps to Eq. (<ref>) and Eq. (<ref>) in Sec. <ref>, and accordingly, the E_g mode will split into two circularly polarized modes with opposite chirality when a magnetic field is applied along the c-axis. The split modes will have angular momentum of ±ħ along the c-axis which arises from the orbital angular momentum possessed by the superposition E_g(a)± i E_g(b) shown in Fig. <ref>(c).
§.§ Phonon Zeeman splitting and effective phonon magnetic moment
In order to evaluate the phonon Zeeman splitting for both of these E_g modes, we can apply the model discussed in Sec. <ref> with slight modifications. For T>T_N, the system is in the paramagnetic phase and the eigenstates of the two manifolds involved in the electronic excitations are identical to the basis states |ψ_1/2⟩ and |ψ_3/4⟩ described in Eq. (<ref>) and Eq. (<ref>), respectively. In this case, after accounting for the electron-phonon interaction described in Eq. (<ref>) and Eq. (<ref>), the inverse of the interacting phonon Green's function is given by:
𝐃^-1(ω)=[ ω^2-ω_0^2/2ω_0-g̃^2(f_1^AΔ_1^A/ω^2-(Δ_1^A)^2+f_2^AΔ^A_2/ω^2-(Δ_2^A)^2+A⟺ B) ig̃^2(-f_1^Aω/ω^2-(Δ_1^A)^2+f_2^Aω/ω^2-(Δ_2^A)^2-A⟺ B); -ig̃^2(-f_1^Aω/ω^2-(Δ_1^A)^2+f_2^Aω/ω^2-(Δ_2^A)^2-A⟺ B) ω^2-ω_0^2/2ω_0-g̃^2(f_1^AΔ_1^A/ω^2-(Δ_1^A)^2+f_2^AΔ_2^A/ω^2-(Δ_2^A)^2+A⟺ B) ],
where again g̃^2=4π g^2.
The correction to the non-interacting phonon Green's function here is similar to that in Eq. (<ref>) discussed in Sec. <ref>. The only difference is that the off-diagonal term has contributions from the two magnetic ions, denoted by A and B. The contributions from the two ions come with opposite signs, but it is worth noting that the electron-phonon interactions were evaluated within the local coordinate system of each ion. The local z-coordinates of the two ions point in opposite directions, and thus, for a magnetic field applied along the c-axis of the crystal, the population differences of the two states in the lower Kramers doublet (J=1/2) are opposite as well; hence the two contributions add up.
Following the same procedure as in Sec. <ref> and Sec. <ref>, we obtain the phonon Zeeman splitting shown in Fig. <ref>(a), where we have used the values of orbit-lattice coupling from Eq. (<ref>), Δ_1=Δ_2= 23.5 meV,
magnetic moment μ^gd_el=1.9 μ_B for the J=1/2 states (ground-state manifold) of Co^2+ on the basis of the range mentioned in Ref. <cit.>, and ω_ph=26 meV and ω_ph=33 meV for the E_g^1 and E_g^2 modes, respectively.
At T=50 K, this leads to a significant splitting for both modes, as shown in Fig. <ref>(b), yielding phonon magnetic moments of μ_ph=0.2 μ_B and 0.1 μ_B for E_g^1 and E_g^2, respectively. In the paramagnetic regime, we expect that μ_ph scales as 1/T with temperature according to Eq. (<ref>) and hence drops sharply when T is increased, as shown in Fig. <ref>(c) and in Table <ref>.
However, this analysis does not work below the Neel temperature. For T<T_N, magnetic order sets in, and spins develop a finite in-plane magnetic moment. The exchange field arising from spin-ordering alters the single-ion energy levels and their eigenstates. Without loss of generality, we can assume that the resulting mean-field points in the x-direction, which splits up the lower
ground-state Kramers doublet even in the absence of the external magnetic field, as illustrated in Fig. <ref>(b). The upper manifold is not affected, but new eigenstates are formed for the lower manifold that are given by
[ μ^gd_elB_z^α h_ex(T); h_ex(T) -μ^gd_elB_z^α ]|ψ_1̃/2̃^α⟩=E_1̃/2̃|ψ_1̃/2̃^α⟩
where the Hamiltonian is written in the basis (|ψ_1^α⟩,|ψ_2^α⟩) defined in Eq. (<ref>),
α=A,B denotes the Co^2+ ion site, h_ex(T) is the exchange mean-field, B_z is the external magnetic field, and E_2̃=-E_1̃=√((μ^gd_elB_z)^2+h_ex^2(T)).
We can now apply our orbit-lattice coupling model for these new eigenstates and obtain the phonon energies by solving Det(𝐃^-1(ω))=0. The diagonal components are given by
𝐃^-1|_αα=
ω^2-ω_0^2/2ω_0-2g̃^2(f_1̃E_1̃3(cosθ/2)^2/ω^2-E_1̃3^2+f_1̃E_1̃4(sinθ/2)^2/ω^2-E_1̃4^2)
-2g̃^2(f_2̃E_2̃3(sinθ/2)^2/ω^2-E_2̃3^2+f_2̃E_2̃4(cosθ/2)^2/ω^2-E_2̃4^2),
for α=a,b and off-diagonal components are given by:
𝐃^-1|_ab =-𝐃^-1|_ba=
2ig̃^2(-f_1̃ω(cosθ/2)^2/ω^2-E_1̃3^2+f_1̃ω(sinθ/2)^2/ω^2-E_1̃4^2)
+2ig̃^2(-f_2̃ω(sinθ/2)^2/ω^2-E_2̃3^2+f_2̃ω(cosθ/2)^2/ω^2-E_2̃4^2),
where cos(θ)=μ^gd_elB/√((μ^gd_elB)^2+h_ex^2(T)), E_1̃3=E_1̃4=Δ_0-E_1̃, E_2̃3=E_2̃4=Δ_0-E_2̃, and h_ex(T)=h_0√(1-T/T_N) is the exchange mean-field as a function of temperature.
Here, the structure of the eigenstates (|ψ_1̃⟩,|ψ_2̃⟩) is such that the off-diagonal term vanishes when these states are an equal superposition of |ψ_1⟩ and |ψ_2⟩, i.e., for B=0, and hence no phonon energy splitting can occur in the absence of a magnetic field, as expected. However, once we apply a magnetic field along the c-axis, the eigenstates in the ground-state manifold are no longer an equal superposition of up and down spin and develop a net magnetic moment along the c-direction. As a result, the off-diagonal term becomes proportional to B, which leads to a splitting of the previously degenerate phonon modes, as shown in Fig. <ref>(a,b). Interestingly, the splitting does not saturate even at very low temperatures, which can be understood on the basis of the exchange interactions.
In this case, the temperature dependence enters in two different ways: on the one hand, it determines the thermal population of the two states in the ground-state manifold, and on the other hand, it determines the spin polarization of each state through the temperature dependence of the exchange mean-field. This makes the problem analytically intractable, and hence we obtain the phonon frequencies by numerically evaluating the poles of the Green's function. We show the combined temperature dependence of μ_ph above and below the Néel temperature in Fig. <ref>(c), where we have considered a maximum exchange mean-field of h_0=3 meV on the basis of the values presented in Ref. <cit.>.
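The following sketch illustrates one way such a numerical pole search can be set up: it assembles the diagonal and off-diagonal entries of 𝐃^-1(ω) given above (using E_1̃3=E_1̃4 and E_2̃3=E_2̃4 to combine the cos^2(θ/2) and sin^2(θ/2) weights) and brackets the zeros of its determinant near the bare phonon energy. The parameter values are illustrative (roughly E_g^2-like for CoTiO_3), the thermal occupancies of the two ground-state levels are assumed to be Boltzmann-distributed, and the search window is chosen to exclude the electronic poles. This is a sketch of the procedure, not the calculation used for the figures.

```python
# Sketch of a numerical pole search for the phonon branches below T_N (placeholder parameters).
import numpy as np
from scipy.optimize import brentq

w0, Delta0, g = 33.0, 23.5, 0.4           # meV, meV, meV (illustrative)
gt2 = 4*np.pi*g**2                        # g~^2 as defined in the text
mu  = 1.9*0.05788                         # ground-doublet moment (meV/T)
h0, TN, kB = 3.0, 38.0, 0.08617           # meV, K, meV/K

def det_Dinv(w, B, T):
    h_ex = h0*np.sqrt(max(1.0 - T/TN, 0.0))
    E2 = np.sqrt((mu*B)**2 + h_ex**2); E1 = -E2        # assumes E2 > 0 (B or h_ex nonzero)
    cos_th = mu*B/E2
    f1, f2 = np.exp(-np.array([E1, E2])/(kB*T))        # assumed Boltzmann occupancies
    f1, f2 = f1/(f1 + f2), f2/(f1 + f2)
    E13, E23 = Delta0 - E1, Delta0 - E2                # E_13 = E_14, E_23 = E_24
    daa = (w**2 - w0**2)/(2*w0) - 2*gt2*(f1*E13/(w**2 - E13**2) + f2*E23/(w**2 - E23**2))
    dab = 2*gt2*w*cos_th*(-f1/(w**2 - E13**2) + f2/(w**2 - E23**2))   # imaginary part of D^-1_ab
    return daa**2 - dab**2                             # Det(D^-1) is real

def chiral_phonons(B, T, window=2.0, n=4000):
    ws = np.linspace(w0 - window, w0 + window, n)      # window chosen to exclude electronic poles
    vals = np.array([det_Dinv(w, B, T) for w in ws])
    return [brentq(det_Dinv, ws[i], ws[i+1], args=(B, T))
            for i in range(n - 1) if vals[i]*vals[i+1] < 0]

w_minus, w_plus = chiral_phonons(B=10.0, T=20.0)
print(f"chiral branches: {w_minus:.3f} and {w_plus:.3f} meV, splitting {w_plus - w_minus:.3f} meV")
```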
In both cases, T>T_N and T<T_N, the phonon splitting is non-zero only if the populations and eigenstates of the ground-state manifold are such that the magnetic ion carries a net magnetic moment along the c-axis. In the paramagnetic case, the magnetic moment is directly related to the population difference of the two states in the lower Kramers doublet, as the two states have a magnetic moment along the c-axis but with opposite signs. In contrast, in the antiferromagnetically ordered state, we further need to take into account the net magnetic moment of each state in addition to the population difference between the two states. The direction of the net magnetic moment of each state is determined by the combined effect of the in-plane exchange mean field and out-of-plane applied magnetic field. As a result, the net magnetization of the sample increases at a slower rate with the applied magnetic field in the T<T_N case compared to the paramagnetic region. This rate keeps on decreasing as the temperature is decreased further and leads to a maximum of the phonon magnetic moment below the Néel temperature, in contrast to the pure paramagnetic case in the rare-earth trihalides. Overall, we can expect that the temperature trend of the phonon g-factor should be similar to the temperature dependence of the magnetic susceptibility along the c-direction.
Our calculations for CoTiO_3 have shown two things: 1) applying an external magnetic field can produce chiral phonons with large effective magnetic moments on the order of 0.1μ_B, and 2) the phonon g-factor follows the same trend as the magnetic susceptibility. This intuition can be extended to ferromagnets, where we expect that the phonon Zeeman splitting would saturate very quickly near the critical temperature and the chiral phonons would remain split with a fixed energy separation below T_C. It also indicates that in some cases it is possible to have chirality-dependent phonon energy splitting even in the absence of an external magnetic field <cit.>.
§ DISCUSSION
In summary, we have developed a microscopic model that describes the hybridization of doubly degenerate phonon modes with electronic orbital transitions between doublet states. An applied magnetic field splits the degeneracy of the doublets and therefore that of the phonons coupled to it, resulting in circularly polarized phonon modes with opposite chirality. The splitting is determined by the population asymmetry between the ground-state doublets, which makes the mechanism temperature dependent. The splitting of the phonon frequencies is linear in the limit of small magnetic fields, which is consistent with the phenomenological notion of the phonon Zeeman effect <cit.>, and which allows us to assign an effective magnetic moment to the chiral phonon modes. The specific form of the orbit-lattice coupling leading to these phenomena depends on the point-group symmetry and orbital configuration of the material. Furthermore, in order for the mechanism to be significant, the phonon modes and orbital transitions need to be on similar energy scales. We have therefore applied the model and computed phonon Zeeman splittings and effective phonon magnetic moments for the specific cases of CeCl_3, a 4f-electron paramagnet, in which the orbital transition between the doublet states are determined by the crystal electric field, as well as CoTiO_3, a 3d-electron antiferromagnet, in which the orbital transitions are determined by spin-orbit coupling and a trigonal distortion.
In the case of CeCl_3, the effective phonon magnetic moment increases monotonically with decreasing temperature over the entire investigated temperature spectrum, because the spins of the Ce^3+ ions order only at very low temperatures of <0.1 K <cit.>, not considered here. We predict values of several μ_B at cryogenic temperatures that correspond to phonon frequency splittings of the order of 10 cm^-1, corroborating early experimental measurements <cit.>. Even at room temperature, the effective magnetic moments of the phonon modes range between 0.01 μ_B-0.1 μ_B, orders of magnitude larger than those generated by purely ionic charge currents <cit.>. In the case of CoTiO_3, we distinguish between the high-temperature paramagnetic and the low-temperature antiferromagnetic phases. The paramagnetic phase behaves similar to the case of CeCl_3, with a monotonically increasing effective phonon magnetic moment for decreasing temperatures down to the Néel temperature. Below the Néel temperature however, the value of the magnetic moment peaks at approximately 0.17μ_B, because the exchange mean-field in the ordered state exhibits an additional, competing temperature dependence that produces a global maximum of the effective magnetic moment of the phonon. Despite the decreasing trend of the magnetic moment at high temperatures, even at room temperature, we still obtain μ_ph=0.07μ_B, an order of magnitude larger than that predicted by ionic charge currents. Combining both temperature dependencies below and above T_N, the effective phonon magnetic moment follows roughly the trend of the magnetic susceptibility.
While we have looked at two particular examples in this manuscript, the theory developed in Sec. <ref> is general to all materials that exhibit doubly degenerate phonon modes and orbital transitions between doublet states with comparable energy scales. We therefore expect that in particular more 3d transition-metal oxide compounds may show the proposed phenomena, which host a variety of materials with trigonal point-group symmetries and octahedral coordinations of ligand ions. Beyond transition-metal oxides, 4d-electron magnets could be interesting candidates, because they possess larger spin-orbit couplings and therefore higher transition energies between doublet states, which allows hybridizations with high-frequency phonon modes above 10 THz (40 meV). RuCl_3, for example, possesses orbital excitations on the order of 100 meV <cit.> as well as doubly degenerate phonon modes with energies around 50 meV.
In magnetically ordered materials, the details of the mechanism for phonon Zeeman splitting further depend on the exchange interactions between the magnetic ions, and an upcoming challenge will be to investigate how the phenomenon unfolds when interactions such as superexchange, itinerant electrons, or ring-exchange interactions <cit.> are present. Beyond CEF and spin-orbit excitations, the mechanism can be extended to include hybridizations between doubly degenerate phonon modes and other electronic or collective excitations, such as low-energy charge-transfer excitations or magnons, which show strong magnetic-field dependence <cit.>.
Finally, we point out that we have investigated only Raman-active phonon modes with E_(i)g symmetries in this study. The same evaluation can be done for infrared-active phonon modes, E_(i)u symmetries in our investigated materials, which can be resonantly driven with ultrashort laser pulses in the terahertz and mid-infrared spectral range. A recent study proposed that infrared-active phonon modes driven in paramagnetic CeCl_3 can potentially produce giant effective magnetic fields through the effective magnetic moment of the phonons <cit.>, and this phenomenon was subsequently experimentally demonstrated in a paramagnetic oxide-ferrimagnetic garnet heterostructure <cit.>. This mechanism should be readily applicable to the E^1_u and E^2_u modes in CoTiO_3 and offers the potential for an unprecedented spin-switching protocol that could establish a new paradigm in ultrafast spintronics and data processing.
We thank Sebastian Stepanow (ETH Zurich), Xiaoqin (Elaine) Li, David Lujan, and Jeongheon Choe for useful discussions. This research was primarily supported by the National Science Foundation through the Center for Dynamics and Control of Materials: an NSF MRSEC under Cooperative Agreement No. DMR-1720595. G.A.F. acknowledges additional support from NSF DMR-2114825 and from the Alexander von Humboldt Foundation.
§ DERIVATION OF THE PHONON ZEEMAN SPLITTING AND EFFECTIVE MAGNETIC MOMENTS OF PHONONS
In this section, we provide more details on the derivations of the equations in the main text. We begin by considering a degenerate phonon mode with two components that is described by the Hamiltonian
H_ph=ω_0(a^† a+ b^† b).
We only consider zone-centered phonon modes and can accordingly drop the momentum dependence in the phonon operators and energies. In order to account for the effect of electron-phonon interactions on the phonon spectrum, we use a Green's function formalism. For the non-interacting system, the Green's function matrix is given by
𝐃_0(ω)=[ D_0^aa(ω) 0; 0 D_0^bb(ω) ],
where the components are given by
D_0^aa(ω)=D_0^bb(ω)=2ω_0/ω^2-ω_0^2.
The phonon frequency, ω_0, can be trivially retrieved by solving Det(𝐃_0^-1(ω))=0.
We obtain these Green's functions from the Fourier transform of time-dependent phonon propagators,
D^aa_0( t-t') =-iθ(t-t')⟨0|A(t)A(t')|0⟩-iθ(t'-t)⟨0|A(t')A(t)|0⟩,
D^bb_0( t-t') =-iθ(t-t')⟨0|B(t)B(t')|0⟩-iθ(t'-t)⟨0|B(t')B(t)|0⟩,
where A(t)=a(t)+a^†(t) and B(t)=b(t)+b^†(t) with a(t)=ae^-iω_0t and b(t)=be^-iω_0t.
We next consider the electronic Hamiltonian, which can be expressed in second quantization as
H_el=∑_i=1^4ε_ic_i^† c_i,
where c_i^† and c_i are the creation and annihilation operators for electrons in state i on the magnetic ion. Their Green's functions read
G_0^ii(t-t')=-i<0|𝒯[c_i(t)c_i^†(t')]|0>=-i[θ(t-t')<0|c_i(t)c_i^†(t')|0>-θ(t'-t)<0|c_i^†(t')c_i(t)|0>],
where the time-dependent operators are given by
c_i(t)=e^iε_ic_i^† c_itc_ie^-iε_ic_i^† c_it=e^-iε_itc_i.
We can therefore write the Green's function as
G_0^ii(t-t')=-i[θ(t-t')(1-f_i)-θ(t'-t)f_i]e^-iε_i(t-t'),
where f_i=<0|c_i^† c_i|0> is the occupation number for state i, given by the Fermi-Dirac distribution. A Fourier transform of this expression yields
G_0^ii(ω)=1-f_i/ω-ε_i+iη+f_i/ω-ε_i-iη.
The electron-phonon interaction of Eqs. (<ref>) and (<ref>) from the main text can therefore be expressed in second quantization as
H_el-ph =V^a+V^b,
where
V^a =(a^†+a)∑_i,jΓ^a_ij c_i^† c_j=g(a^†+a)(c_3^† c_1+c_1^† c_3)-g(a^†+a)(c_4^† c_2+c_2^† c_4),
V^b =(b^†+b)∑_i,jΓ^b_ij c_i^† c_j=ig(b^†+b)(c_3^† c_1-c_1^† c_3)+ig(b^†+b)(c_4^† c_2-c_2^† c_4).
Including the electron-phonon interaction, the interacting phonon propagator, 𝐃, and electronic propagator, 𝐆, are given by
𝐃(q,ω)=D_0(q,ω)+D_0(q,ω)Π(q,ω)𝐃(q,ω),
where Π(q,ω)=D_0^-1-𝐃^-1 is the phonon self energy (the main quantity of interest for us), as well as
𝐆(q,ω)=G_0(q,ω)+G_0(q,ω)Σ(q,ω)𝐆(q,ω).
In a perturbative treatment, Π and Σ can be calculated from non-interacting Green's functions. The full expression is given by
[ 𝐃^aa 𝐃^ab; 𝐃^ba 𝐃^bb ]=[ D_0^aa D_0^ab; D_0^ba D_0^bb ]+[ D_0^aa D_0^ab; D_0^ba D_0^bb ]Π[ 𝐃^aa 𝐃^ab; 𝐃^ba 𝐃^bb ],
where the self-energy term is given by
Π=[ Π_13^aa+Π_31^aa+ Π_24^aa+Π_42^aa Π_13^ab+Π_31^ab+Π_24^ab+Π_42^ab; Π_13^ba+Π_31^ba+Π_24^ba+Π_42^ba Π_13^bb+Π_31^bb+Π_24^bb+Π_42^bb ].
We now approximate this term with its lowest-order (non-interacting) value,
Π_ij^αχ(ω)≈ i∫ dω' G_0^ii(ω+ω')G_0^jj(ω')Γ_ij^αΓ_ji^χ.
Together with the frequency-dependent electronic Green's function,
G_0^ii(ω)=1-f_i/ω-ε_i+iη+f_i/ω-ε_i-iη,
and using Eqs. (<ref>) and (<ref>) to write down Γ_13^a=Γ_31^a=-Γ_24^a=-Γ^a_42=g and Γ_13^b=-Γ_31^b=Γ_24^b=-Γ^b_42=ig, we evaluate the frequency integral
∫ dω' G_0^ii(ω+ω')G_0^jj(ω')=∫ dω'( 1-f_i/ω+ω'-ε_i+iη+f_i/ω+ω'-ε_i-iη)( 1-f_j/ω'-ε_j+iη+f_j/ω'-ε_j-iη)
=
∫ dω'[ ((1-f_i)(1-f_j)/(ω+ω'-ε_i+iη)(ω'-ε_j+iη))+((1-f_i)f_j/(ω+ω'-ε_i+iη)(ω'-ε_j-iη))]
+ ∫ dω'[(f_i(1-f_j)/(ω+ω'-ε_i-iη)(ω'-ε_j+iη))+(f_if_j/(ω+ω'-ε_i-iη)(ω'-ε_j-iη))]
We utilize the relation lim_η→ 01/x+iη=P(1/x)+iπδ(x) and obtain
∫ dω'[ ((1-f_i)(1-f_j)/(ω+ω'-ε_i+iη)(ω'-ε_j+iη))] =(1-f_i)(1-f_j)/ω-ε_ij∫ dω'(1/ω'-ε_j+iη-1/ω+ω'-ε_i+iη)
=0,
∫ dω'[ ((1-f_i)f_j/(ω+ω'-ε_i+iη)(ω'-ε_j-iη))] =(1-f_i)f_j/ω-ε_ij+2iη∫ dω'(1/ω'-ε_j-iη-1/ω+ω'-ε_i+iη)
= -i2π(1-f_i)f_j/ω-ε_ij+2iη,
∫ dω'[ (f_i(1-f_j)/(ω+ω'-ε_i-iη)(ω'-ε_j+iη))] =f_i(1-f_j)/ω-ε_ij-2iη∫ dω'(1/ω'-ε_j+iη-1/ω+ω'-ε_i-iη),
=i2πf_i(1-f_j)/ω-ε_ij-2iη
∫ dω'[ (f_if_j/(ω+ω'-ε_i-iη)(ω'-ε_j-iη))] =f_if_j/ω-ε_ij∫ dω'(1/ω'-ε_j-iη-1/ω+ω'-ε_i-iη)
=0,
as well as
∫ dω' G_0^ii(ω+ω')G_0^jj(ω')=-i2π(1-f_i)f_j/ω-ε_ij+2iη+i2πf_i(1-f_j)/ω-ε_ij-2iη.
In the limit η→0, this expression becomes
∫ dω' G_0^ii(ω+ω')G_0^jj(ω')=i2πf_i-f_j/ω-ε_ij.
This allows us to obtain closed forms for the components of the self energy,
Π_13^aa+ Π_31^aa = Π_13^bb+ Π_31^bb=2π g^2 (f_1-f_3)(1/ω-ε_13-1/ω-ε_31)=2π2g^2 (f_1-f_3)ε_13/ω^2-ε_31^2,
Π_24^aa+ Π_42^aa =Π_24^bb+ Π_42^bb=2π g^2 (f_2-f_4)(1/ω-ε_24-1/ω-ε_42)=2π2g^2 (f_2-f_4)ε_24/ω^2-ε_42^2,
Π_13^ab+ Π_31^ab =-(Π_13^ba+ Π_31^ba)=-2π ig^2 (f_1-f_3)(1/ω-ε_13+1/ω-ε_31)=2π-2ig^2 (f_1-f_3)ω/ω^2-ε_31^2,
Π_24^ab+ Π_42^ab =-(Π_24^ba+ Π_42^ba)=2π ig^2 (f_2-f_4)(1/ω-ε_13+1/ω-ε_31)=2π2ig^2 (f_2-f_4)ω/ω^2-ε_42^2.
Inserting these expressions into Eq. (<ref>), we get
𝐃^-1=[ ω^2-ω_0^2/2ω_0-g̃^2(f_1Δ_1/ω^2-Δ_1^2+f_2Δ_2/ω^2-Δ_2^2) ig̃^2(-f_1ω/ω^2-Δ_1^2+f_2ω/ω^2-Δ_2^2); -ig̃^2(-f_1ω/ω^2-Δ_1^2+f_2ω/ω^2-Δ_2^2) ω^2-ω_0^2/2ω_0-g̃^2(f_1Δ_1/ω^2-Δ_1^2+f_2Δ_2/ω^2-Δ_2^2) ],
where g̃^2=4π g^2, Δ_1=ε_31, Δ_2=ε_42 and we assume the excited state to be unoccupied, f_3=f_4=0. The modified energies can then be obtained by solving Det(𝐃^-1)=0. The result of this equation depends on the application of a magnetic field, and we consider two cases.
Case 1: B=0. Here, f_1=f_2=f_0/2 and Δ_1=Δ_2=Δ. In this scenario, the off-diagonal term in 𝐃^-1 is zero and the evaluation of Det(𝐃^-1(ω))=0 reduces to:
(ω^2-ω_0^2)(ω^2-Δ^2)-2g̃^2f_0ω_0Δ =0,
(ω^2-ω_0^2)(ω^2-Δ^2)-2g̃^2f_0ω_0Δ =0,
which have identical solutions, indicating the doubly degenerate nature of phonon and electronic excitations. The solutions corresponding to the phonon and electronic excitation branches are respectively given by:
Ω_ph≡ω_ph(B=0)=((ω_0^2+Δ^2)/2+√(((ω_0^2-Δ^2)/2)^2+2g̃^2f_0ω_0Δ))^1/2,
Ω_el≡ω_el(B=0)=((ω_0^2+Δ^2)/2-√(((ω_0^2-Δ^2)/2)^2+2g̃^2f_0ω_0Δ))^1/2.
This coupling modifies the phonon and electronic excitation energies but does not lift the degeneracy of the two excitations. However, if the bare frequencies of the two excitations are close, then even a weak electron-phonon coupling term can introduce significant mixing, and the excitations are no longer purely phononic or electronic in nature. We focus only on the off-resonant case, where these aspects can be safely ignored.
Case 2: B≠ 0. We next apply an external magnetic field, 𝐁=B ẑ, which lifts the degeneracies of the Kramers doublets, ε_12≠ 0 and ε_34≠ 0. This subsequently modifies the electronic transition energies as
Δ_1=Δ-γ B, Δ_2=Δ+γ B,
where γ=μ^el_ex-μ^el_gs depends on the magnetic moment of the ground- and excited-state doublets. Lifting the degeneracy of the ground-state doublet leads to asymmetric populations of the ground-state energy levels, f_12≠ 0.
Accordingly, the secular equation, Det(𝐃^-1(ω))=0 for 𝐃^-1 given by Eq. (<ref>) becomes:
(ω^2-ω_0^2)(ω^2-Δ^2)-2g̃^2f_0ω_0Δ+2 ω(Bγ (ω^2-ω^2_0)+g̃^2ω_0f_21)+γ B(γ B (ω^2-ω^2_0)+2g̃^2ω_0f_21)=0
(ω^2-ω_0^2)(ω^2-Δ^2)-2g̃^2f_0ω_0Δ-2 ω(Bγ (ω^2-ω^2_0)+g̃^2ω_0f_21)+γ B(γ B (ω^2-ω^2_0)+2g̃^2ω_0f_21)=0
These two equations are not equivalent and there is a term linear in ω that indicates a frequency splitting of phonon and electronic excitations. Given that the electron-phonon coupling is weak and the electronic excitations are off-resonant from phonons, we can assume that phonon energies are modified only slightly and have the following form
ω_ph^±=Ω_ph(1∓η),
ω_el^±=Ω_el(1∓ϵ).
For the case of a paramagnetic system, the population difference is given by
f_21 = -tanh(μ^el_gs B/k_BT).
where μ^el_gs is the magnetic moment of ground state manifold.
We can plug the above equations into Eq. (<ref>) and Eq. (<ref>) in order to obtain
Ω_phη = (γ B(Ω_ph^2-ω^2_0)+g̃^2ω_0 f_21)/(Ω_ph^2-Ω_el^2+γ^2B^2) = (γ B(Ω_ph^2-ω^2_0)+g̃^2ω_0tanh(μ^el_gsB/k_BT))/(√((ω_0^2-Δ^2)^2+8g̃^2f_0ω_0Δ)+γ^2B^2).
For the off-resonant case, we can assume |Δ-ω_0|≫γ B and therefore neglect the linear B term in the numerator and the quadratic one in the denominator. The off-resonant case is a reasonable assumption, as γ B∼ 0.5 meV in strong magnetic fields of B=10 T, whereas often |Δ-ω_0|>10 meV. As a result, the splitting of the phonon frequencies can be written as
(ω_ph^+-ω_ph^-)/ω_ph(B=0)≈(2g̃^2/√((ω_0^2-Δ^2)^2+8g̃^2f_0ω_0Δ))tanh(μ^el_gsB/k_BT),
which retrieves the early result by Thalmeier and Fulde <cit.>.
§ ORBIT-LATTICE COUPLING OF THE E_2G MODES IN CECL_3
As discussed in the main text, the phonon lowers the symmetry around the magnetic ion. For the lattice distortion induced by the E_1g phonon, the first-order term for the change in the Coulomb potential is given by:
V(E_1g(a)) =[-0.06 xz+0.16 yz]Q_a eV/Å^3√(amu),
V(E_1g(b)) =[0.16 xz+0.06 yz]Q_b eV/Å^3√(amu).
Now, we can express xz=r^2sinθcosθcosϕ and yz=r^2sinθcosθsinϕ in spherical coordinates. The electronic states on the Ce^3+ ion can be written in terms of |L=3,m=m_l⟩, which have wavefunctions ⟨ r|L=3,m=m_l ⟩ =R (r) Y_3^m_l(θ,ϕ). This allows us to calculate the matrix elements between different 4f states, and the only non-zero terms are given by:
⟨ m=± 3|xz|m=± 2⟩=∓⟨ r^2⟩1/3√(6)
⟨ m=± 2|xz|m=± 1⟩=∓⟨ r^2⟩1/3√(10)
⟨ m=± 1|xz|m=0⟩=∓⟨ r^2⟩1/3√(75)
⟨ m=± 3|yz|m=± 2⟩= ⟨ r^2⟩i/3√(6)
⟨ m=± 2|yz|m=± 1⟩=⟨ r^2⟩i/3√(10)
⟨ m=± 1|yz|m=0⟩= ⟨ r^2⟩i/3√(75)
Using these values for states given in Eq. (<ref>)-(<ref>), we get:
H_1(xz) =-2/7√(5)⟨ r^2⟩[ |5/2,±5/2> |5/2,±3/2>; |5/2,±5/2> 0 ± 1; |5/2,±3/2> ± 1 0 ],
H_1(yz) =2/7√(5)⟨ r^2⟩[ |5/2,±5/2> |5/2,±3/2>; |5/2,±5/2> 0 i; |5/2,±3/2> -i 0 ].
Next, we use the same microscopic model to calculate the Zeeman effect for other phonons as well. Here, we consider E_2g^1 (12 meV) and E_2g^2 (21.5 meV) phonons of CeCl_3. The phonon eigenvectors are obtained from density functional theory <cit.>. As mentioned in the main text (Eq. <ref>), these lattice displacements perturb the CEF around the Ce^3+ ions, and the resulting modifications can be described as
V(E_2g^1(a)) =[-0.05 xy-0.007 (x^2-y^2)] Q_a eV/(Å^2√(amu)),
V(E_2g^1(b)) =[0.014 xy-0.025 (x^2-y^2)] Q_b eV/(Å^2√(amu)),
V(E_2g^2(a)) =[0.08 xy+0.01 (x^2-y^2)] Q_a eV/(Å^2√(amu)),
V(E_2g^2(b)) =[-0.02 xy+0.04 (x^2-y^2)] Q_b eV/(Å^2√(amu)),
where Q_a,b is the amplitude of the phonon mode. Using spherical harmonics, we express this perturbation in the basis of electronic states,
using:
⟨ m=± 3|xy|m=± 1⟩= ±⟨ r^2⟩i/3√(15)
⟨ m=± 2|xy|m=± 0⟩= ±⟨ r^2⟩√(2)i/3√(15)
⟨ m= 1|xy|m=-1⟩= ±⟨ r^2⟩2i/15
⟨ m=± 3|x^2-y^2|m=± 1⟩= ⟨ r^2⟩2/3√(15)
⟨ m=± 2|x^2-y^2|m=± 0⟩= ⟨ r^2⟩√(8)/3√(15)
⟨ m= 1|x^2-y^2|m=-1⟩= ⟨ r^2⟩4/15
which gives:
H_1(x^2-y^2) =-2√(2)/7√(5)⟨ r^2⟩[ |5/2,5/2> |5/2,1/2>; |5/2,5/2> 0 1; |5/2,3/2> 1 0 ],
H_1(x^2-y^2) =-2√(2)/7√(5)⟨ r^2⟩[ |5/2,-5/2> |5/2,-1/2>; |5/2,-5/2> 0 1; |5/2,-1/2> 1 0 ],
H_1(xy) =√(2)/7√(5)⟨ r^2⟩[ |5/2,5/2> |5/2,1/2>; |5/2,5/2> 0 i; |5/2,1/2> -i 0 ],
H_1(xy) =√(2)/7√(5)⟨ r^2⟩[ |5/2,-5/2> |5/2,-1/2>; |5/2,-5/2> 0 -i; |5/2,-1/2> i 0 ],
where ⟨ r^2⟩ is mean-square radius for 4f orbitals. The form of the Hamiltonian is similar to what we obtained for the E_1g mode in the main text, except for the fact that the E_2g modes couple the electronic orbitals |±5/2⟩ with |±1/2⟩. This coupling results in a significant phonon Zeeman effect and leads to chiral phonons as shown in Fig. <ref> (c).
§ ORBITAL CONFIGURATION AND ORBITAL-LATTICE COUPLING IN COTIO_3
The Co^2+ ion is a 3d^7 system with three unpaired spins (equivalently, three holes in the d shell), and for a free ion, Hund's coupling dictates that the ground-state manifold has L=3, S=3/2 and thus 28 degenerate states. These twenty-eight states can be obtained from linear combinations of the following seven states (m_S=+3/2):
|L=3, m_L=3⟩=210|0⟩
|L=3, m_L=2⟩=21-1|0⟩
|L=3, m_L=1⟩=1/√(5)(√(3)20-1+√(2)21-2)|0⟩
|L=3, m_L=0⟩=1/√(5)(10-1+220-2)|0⟩
|L=3, m_L=-1⟩=-1/√(5)(√(3)-201+√(2)-2-12)|0⟩
|L=3, m_L=-2⟩=--2-11|0⟩
|L=3, m_L=-3⟩=--2-10|0⟩
where j creates a d-orbital state:
j|0⟩≡|L=2,m_l=j,S=1/2,m_s=+1/2>
These seven states are split by the octahedral CEF of the ligand ions. This CEF effect can be incorporated by introducing the following term to the Hamiltonian:
H_Oh=Δ_0 (𝒫_T_2g-3/2𝒫_E_g)
where 𝒫_T_2g/E_g is the projection operator on the T_2g and E_g d orbitals. In terms of creation and annihilation operators for d orbitals, this term can be expressed as follows,
H_Oh=Δ_0∑_σ=↑,↓(c^†_xy,σc_xy,σ+c^†_yz,σc_yz,σ+c^†_xz,σc_xz,σ-3/2 c^†_x^2-y^2,σc_x^2-y^2,σ-3/2 c^†_z^2,σc_z^2,σ),
which splits the seven-dimensional Hilbert space into four different manifolds. The ground-state sector is spanned by the following three states:
|1̃⟩=√(3/8)|3,1⟩+√(5/8)|3,-3⟩
|0̃⟩=-|3,0⟩
|-1̃⟩=√(3/8)|3,-1⟩+√(5/8)|3,3⟩,
which we denote by T_1g in Fig. <ref> (b),
and the other three sectors (irrelevant for our calculations) are given by:
√(5/8)|3,1⟩-√(3/8)|3,-3⟩
√(5/8)|3,-1⟩-√(3/8)|3,3⟩,
√(1/2)|3,2⟩+√(1/2)|3,-2⟩,
and
√(1/2)|3,2⟩-√(1/2)|3,-2⟩.
As the crystal field splitting arising from the octahedral field in this case is of the order of 1 eV, we are going to focus only on the ground-state sector which can be represented as an effective angular momentum, l_eff=1, sector with l⃗=-2/3L⃗.
Next, we take into account the effect of spin-orbit coupling,
H_SO=3/2λl⃗·S⃗,
which splits this l_eff=1 manifold further into
* j_eff=5/2 manifold:
|5/2,5/2>=|m_l=1̃,m_s=3/2⟩
|5/2,3/2>=√(2/5)|m_l=0̃,m_s=3/2⟩+√(3/5)|m_l=1̃,m_s=1/2⟩
|5/2,1/2>=1/√(10)(|m_l=-1̃,m_s=3/2⟩+√(6)|m_l=0̃,m_s=1/2⟩+√(3)|m_l=1̃,m_s=-1/2⟩)
|5/2,-1/2>=1/√(10)(|m_l=1̃,m_s=-3/2⟩+√(6)|m_l=0̃,m_s=-1/2⟩+√(3)|m_l=-1̃,m_s=1/2⟩)
|5/2,-3/2>=√(2/5)|m_l=0̃,m_s=-3/2⟩+√(3/5)|m_l=-1̃,m_s=-1/2⟩
|5/2,-5/2>=|m_l=-1̃,m_s=-3/2⟩
* j_eff=3/2 manifold:
|3/2,3/2>=√(3/5)|m_l=0̃,m_s=3/2⟩-√(2/5)|m_l=1̃,m_s=1/2⟩
|3/2,1/2>=1/√(15)(√(6)|m_l=-1̃,m_s=3/2⟩+|m_l=0̃,m_s=1/2⟩-√(8)|m_l=1̃,m_s=-1/2⟩)
|3/2,-1/2>=1/√(15)(√(6)|m_l=1̃,m_s=-3/2⟩+|m_l=0̃,m_s=-1/2⟩-√(8)|m_l=-1̃,m_s=1/2⟩)
|3/2,-3/2>=√(3/5)|m_l=0̃,m_s=-3/2⟩-√(2/5)|m_l=-1̃,m_s=-1/2⟩
* j_eff=1/2 manifold:
|1/2,1/2>=1/√(6)(√(3)|m_l=-1̃,m_s=3/2⟩-√(2)|m_l=0̃,m_s=1/2⟩+|m_l=1̃,m_s=-1/2⟩)
|1/2,-1/2>=1/√(6)(√(3)|m_l=1̃,m_s=-3/2⟩-√(2)|m_l=0̃,m_s=-1/2⟩+|m_l=-1̃,m_s=1/2⟩)
Trigonal distortion :
In CoTiO_3, the octahedral cage is trigonally distorted which reduces the Co^2+ site symmetry from O_h to C_3.
This distortion is significant, and it introduces a perturbation of the form δ (L_z^tr)^2 along the z direction of the trigonal coordinate system, which splits the j_eff=3/2 and j_eff=5/2 manifolds further according to the expectation value of j_z, where the z-axis is parallel (antiparallel) to the c-axis for site A (B) of the Co^2+ ions, as shown in Fig. <ref>. The two lower manifolds can still be characterized by m_j=±1/2 and m_j=±3/2 and are predominantly composed of j_eff=1/2 and j_eff=3/2 states, respectively.
Orbital-lattice coupling: In order to calculate the orbital-lattice coupling for Eqs. (<ref>)-(<ref>), we first express the electronic states |1/2,±1/2⟩ and |3/2,±3/2⟩ in terms of the constituent d-orbital states in Eqs. (<ref>)-(<ref>), which in turn are expressed in terms of spherical harmonics, and this gives
H_1(xz)=r_0^21/70√(60)[ |1/2,±1/2> |3/2,±3/2>; |1/2,±1/2> 0 ±1; |3/2,±3/2> ±1 0 ]
H_1(yz)=r_0^21/70√(60)[ |1/2,±1/2> |3/2,±3/2>; |1/2,±1/2> 0 i; |3/2,±3/2> -i 0 ]
for different perturbations in the Coulomb potential.
|
http://arxiv.org/abs/2306.04071v1
|
20230607000656
|
Blockchain Technology in Higher Education Ecosystem: Unraveling the Good, Bad, and Ugly
|
[
"Sharaban Tahora",
"Bilash Saha",
"Nazmus Sakib",
"Hossain Shahriar",
"Hisham Haddad"
] |
cs.CY
|
[
"cs.CY",
"cs.CR",
"cs.DB"
] |
[1]Sharaban Tahora
[1]Bilash Saha
[1,*]Nazmus Sakib
[1]Hossain Shahriar
[2]Hisham Haddad
[1]Department of Information Technology, Kennesaw State University, Georgia, United States
[2]Department of Computer Science, Kennesaw State University, Georgia, United States
[*][email protected]
Blockchain Technology in Higher Education Ecosystem: Unraveling the Good, Bad, and Ugly
July 31, 2023
==========================================================================================
Higher education management systems first recognized the trap of pitting innovation against privacy while addressing the social-isolation challenges of COVID-19 in 2020. In the age of data sprawl, we observe that the situation has only worsened since then. Integrating blockchain technology has the potential to address these recent and emerging challenges in the higher education management system. This paper unravels the Good (scope and benefits), Bad (limitations), and Ugly (challenges and trade-offs) of blockchain technology integration in the higher education management paradigm within the existing landscape. Our study adopts both qualitative and quantitative approaches to explore the experiences of educators, researchers, students, and other stakeholders and to fully understand blockchain's potential and contextual challenges. Our findings envision an efficient, secure, and transparent higher education management system and help shape the debate (and trade-offs) pertaining to the recent shift in the relevant business and management climate and regulatory sentiment.
Blockchain, Education, Decentralization, Digital Credentials, Data Privacy and Security.
§ INTRODUCTION
Blockchain technology is increasingly seen as a disruptive force in the e-learning market, such as areas of digital credentials, student records management, and online payment systems. The e-learning market is expected to reach $376 billion by 2026, with a growing demand for Massive Open Online Courses (MOOCs) which are expected to be worth $25.33 billion by 2025 <cit.>. MarketsandMarkets estimates that the global blockchain in education market was valued at $59.7 million in 2019 and is expected to reach $1,381.9 million by 2023 with a Compound Annual Growth Rate (CAGR) of 84.3% <cit.>. EdTechXGlobal and HolonIQ found that most universities are exploring the use of blockchain in education management system, with a focus on digital credentialing and student data management <cit.>. Massachusetts Institute of Technology (MIT) established its Blockchain Education Alliance to explore educational applications of blockchain in 2018 <cit.>, and Southern New Hampshire University launched a blockchain-based platform for verifying and sharing student credentials in 2019 <cit.>. The Open University in the UK partnered with blockchain company Learning Machine to launch a pilot program for secure digital credentialing in 2020 <cit.>, and the Chinese city of Hangzhou launched a blockchain-based "digital education passport" to store and verify students' academic records and achievements <cit.>. The University of Nicosia in Cyprus became the first accredited university to offer a Master's degree in Digital Currency based on blockchain technology in 2021 <cit.>, and the University of Bahrain began using blockchain to issue digital degrees the same year <cit.>. The blockchain-based platform "Credly" is being used by many educational institutions to issue digital credentials such as certificates, badges, and micro-credentials <cit.>. These statistics demonstrate the proliferating implementation of blockchain technology in higher education as a mechanism to amplify the transferability, validity, and credibility of academic achievements and credentials, thereby emphasizing the exigency for additional empirical investigations to explicate its boundaries and streamline its assimilation.
The higher education management system has been criticized for its inefficiencies and security issues, including an inadequate response to the COVID-19 pandemic, limited resources and outdated infrastructure, limited storage, and privacy concerns. These challenges have led to difficulties in managing and accessing educational data, as well as concerns about data protection and cyber attacks. Blockchain technology can enhance the higher education management system by providing a secure, tamper-proof record of student achievements and reducing fraud <cit.>, <cit.>. The technology's immutability and smart contracts automate processes, creating an efficient and permanent record without intermediaries <cit.>, <cit.>. Decentralized architecture eliminates the need for a central authority, creating a secure and tamper-resistant system <cit.>, <cit.>, <cit.>. Popular frameworks, for instance the blockchain manifesto framework <cit.>, the Student-Centered iLearning Blockchain framework (SCi-B) <cit.>, and the use of root Merkle methods <cit.>, can improve learning and activity tracking in higher education management. Despite recognizing the benefits of blockchain technology, educational institutions face challenges implementing it due to limited expertise and technology limitations <cit.>, <cit.>.
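To make the tamper-evidence idea behind the immutability and root-Merkle methods cited above concrete, the short Python sketch below computes a Merkle root over a batch of credential records. It is a minimal illustration rather than an implementation of any of the cited frameworks, and the record fields are hypothetical.

import hashlib
import json

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(records) -> str:
    """Compute a Merkle root over serialized credential records.

    Changing any single record changes the root, which is what makes an
    on-chain root usable as a tamper-evidence anchor for off-chain data.
    """
    if not records:
        return sha256(b"").hex()
    # Leaf hashes: hash each record's canonical JSON serialization.
    level = [sha256(json.dumps(r, sort_keys=True).encode()) for r in records]
    # Hash pairs upwards until a single root remains.
    while len(level) > 1:
        if len(level) % 2 == 1:      # duplicate the last node on odd levels
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0].hex()

# Hypothetical student records; field names are illustrative only.
records = [
    {"student_id": "S001", "course": "CS101", "grade": "A"},
    {"student_id": "S002", "course": "CS101", "grade": "B+"},
]
print(merkle_root(records))

Publishing only the root on a ledger would let a verifier detect any later alteration of the underlying records without storing the records themselves on-chain.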
The aim of this research is to evaluate the potential of blockchain technology in enhancing the efficiency and security of the higher education management system through the use of its decentralized architecture, immutability, and smart contract functionality. In order to achieve this objective, we addressed the following research questions:
* RQ1: What are the scopes of blockchain technology in improving the efficiency, security, and transparency of the higher education management system?
* RQ2: What are the potential benefits of implementing blockchain technology in the higher education management sector and how can they be measured and evaluated?
* RQ3: What are the limitations and challenges of implementing blockchain technology in the higher education management sector and how can they be overcome?
To answer these questions, we reviewed 115 articles and conducted a comprehensive examination of the current higher education management system, identifying numerous challenges such as issues with data integrity, access, and student identification due to centralized databases. We explored the potential of blockchain technology as a transformative tool to elevate the higher education management system, utilizing decentralized storage of student data, improving data access and integrity, and verifying student identities and achievements through blockchain-based credentials to combat fraud and promote transparency. Additionally, we identified several future research directions in blockchain technology in higher education management, including technical and legal implications, best practices for implementation, and the optimal technology to achieve desired impact and advancements.
Overall, this article aims to provide a comprehensive overview of the impact of blockchain technology on the higher education management system. It focuses on the opportunities and challenges posed by blockchain, and the implications of its implementation. The paper will examine the different aspects of blockchain and its application in higher education management system and address how it can be used to enhance efficiency and security.
§ METHODS
We bring to the forefront the scoping criteria, systematic literature search, and data analysis process utilized in our study to underscore their significance and emphasize these aspects of our research.
§.§ Scoping criteria
In the context of the higher education management system, the scoping criteria for implementing blockchain technology refer to the specific parameters and considerations that need to be taken into account when developing a blockchain-based solution for educational purposes. These criteria include the type of data used and stored on the blockchain, security and privacy measures, scalability and performance, regulatory and compliance requirements, and user needs <cit.>, <cit.>, <cit.>. In our research, we considered the challenges in documentation within the higher education management system and investigated the potential of blockchain technology to provide a solution. For instance, when developing a blockchain-based algorithm for secure and transparent record-keeping of student grades and credentials <cit.>, we emphasized the importance of considering scoping criteria such as the type of data used and stored on the blockchain, including student names, grades, and other qualifications, as well as information related to the classes and schools they have attended <cit.>. Security and privacy were also crucial considerations, and we ensured that the algorithm could protect student data from unauthorized access and share it only with authorized parties, such as educational institutions and potential employers <cit.>. Additionally, scalability and performance were taken into account to ensure that the algorithm could handle a large number of transactions and users, making the system fast and efficient <cit.>, <cit.>. Regulatory and compliance requirements were also considered to ensure that the algorithm met any relevant data privacy regulations <cit.>.
§.§ Systematic Literature Search
A systematic literature review in the realm of blockchain in higher education management system was executed through various techniques. We commenced by identifying pertinent databases, namely JSTOR, Google Scholar, and IEEE Xplore, and then formulated a series of specific search terms that included "blockchain in education", "blockchain and e-learning", "blockchain and education", "blockchain and online learning", "blockchain and degree verification", "Blockchain and credential verification", "Blockchain and education supply chain management", "Blockchain and academic integrity", "Blockchain and educational data management", "Blockchain and smart contract in education", "Blockchain and educational administration", "Blockchain and student data privacy", "Blockchain and e-portfolio", "Blockchain and digital identity management in education", "Blockchain and educational credentialing", "Blockchain in online education", "Blockchain-based learning management systems". The literature search started by searching databases and eliminating irrelevant papers through abstract and title reviews. Key data such as authors, titles, publication date, and findings were then extracted from relevant papers and analyzed to synthesize common themes, trends, and gaps in the literature. The PRISMA flowchart utilized for study selection is shown in Figure <ref>.
§.§ Data Analysis Process
To interpret our findings, we employed a systematic qualitative examination to address our overarching research questions. Specifically, for RQ1 we employed a case study methodology <cit.> to investigate specific instances of blockchain-based student information management systems and their implementation in real-world scenarios. For RQ2, we utilized discourse analysis <cit.>, <cit.> to examine the perspectives and views of diverse stakeholders such as educators, students, and employers on the use of blockchain-based credentials. And for RQ3, we adopted an ethnographic approach <cit.> to understand the usage of decentralized learning platforms by educators and students and the advantages and drawbacks of this method.
§ RESULTS
This section presents our literature review's key findings, including an analysis of the current higher education management system's inadequacies, an exploration of blockchain technology's potential benefits, and an examination of the challenges and strategies for implementing it in the sector.
§.§ Scoping Blockchain in Higher Education Management System (RQ1)
The implementation of blockchain technology in the higher education management system has the potential to transform the way information is stored, managed, and transmitted. According to the authors of the paper <cit.>, this technology provides a secure, decentralized ledger that is resistant to tampering and hacking, making it an attractive solution for the higher education management sector. Blockchain technology has the potential to bring numerous benefits to the higher education management system, including improved efficiency, security, and transparency. We employed a case study methodology to investigate the use of blockchain-based student information management systems. Table <ref> presents an overview of the scope, complexities, benefits, and implementation challenges of this technology in real-world scenarios. These findings allowed us to provide a comprehensive and detailed analysis of the topic, and to better understand the prospects of blockchain technology in the higher education management sector <cit.>.
Blockchain technology has the potential to address many of the concerns related to privacy and security in the higher education management system. Al Harthy et al. <cit.> explained that blockchain technology can provide a secure and transparent way to store data through encryption techniques, which can ensure that personal data remains private and only accessible to authorized parties. The authors in paper <cit.> discussed the use of smart contracts to automate various administrative processes in the higher education management systems, such as the issuance of certificates, academic transcripts, and degree verification. Blockchain can also provide a secure and decentralized identity management system <cit.> that is tamper-proof and can prevent identity theft. Furthermore, blockchain can eliminate the need for a centralized system by allowing multiple parties to have access to the same data at the same time without a central authority <cit.>. This can help to reduce the risk of a single point of failure and increase transparency. Additionally, blockchain technology provides a transparent and immutable record of all transactions, which can address the issue of lack of data regulation <cit.>. Figure <ref> effectively demonstrates how blockchain can address existing challenges related to privacy and security.
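As a rough, platform-agnostic illustration of the issuance-and-verification logic that such smart contracts would encode, the Python sketch below mimics an on-chain credential registry with an in-memory dictionary; the class and method names are hypothetical, and the example is not tied to any of the systems cited above.

import hashlib

class CredentialRegistry:
    """Toy, in-memory stand-in for an issuance smart contract.

    A real deployment would keep `issued` in blockchain state so the record
    is append-only and auditable; a dict is used here only to show the logic.
    """

    def __init__(self, issuer: str):
        self.issuer = issuer
        self.issued = {}  # credential fingerprint -> issuer

    @staticmethod
    def fingerprint(document: bytes) -> str:
        return hashlib.sha256(document).hexdigest()

    def issue(self, document: bytes) -> str:
        h = self.fingerprint(document)
        self.issued[h] = self.issuer      # analogous to a contract state write
        return h

    def verify(self, document: bytes) -> bool:
        # A verifier needs only the document and read access to the registry.
        return self.fingerprint(document) in self.issued

registry = CredentialRegistry(issuer="Example University")
diploma = b"Jane Doe, B.Sc. Computer Science, 2023"
registry.issue(diploma)
print(registry.verify(diploma))                   # True
print(registry.verify(b"Jane Doe, Ph.D., 2023"))  # False: never issued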
Blockchain technology can significantly enhance the standardization of the higher education management system by providing a secure, decentralized, and transparent platform for managing educational records and seamless communication between different educational institutions in a common standard. Interoperability and compatibility of different systems, as well as accessibility and regulation, are critical factors to be addressed to make the technology more widespread among educational institutions <cit.>. For example, in the healthcare sector, the Health Insurance Portability and Accountability Act (HIPAA) sets standards for the privacy and security of health information <cit.>, <cit.>. Similarly, in the higher education management sector, the use of blockchain's decentralized features can help in the standardization of processes in Learning Content Management Systems (LCMS) <cit.>, Learning Record Stores (LRS) <cit.>, and Learning Resource Metadata <cit.>.
The integration of blockchain technology with the higher education management system can provide a transparent, secure, and compliant platform, overcoming the regulatory challenges faced by the current system <cit.>. It can provide a secure and transparent platform for data management, overcoming the issues of data disregard and data mismanagement. With the use of smart contracts, blockchain can also automate and enforce regulations, providing adequate smart contract management and ensuring compliance. Additionally, blockchain can provide a decentralized and immutable record of all transactions, making auditing <cit.> more efficient and transparent, and overcoming the issue of inadequate auditing. Moreover, blockchain technology can enhance the cybersecurity <cit.> of the higher education management system by providing an immutable and secure platform for data storage and communication. By eliminating the need for intermediaries, blockchain can reduce legal vagueness and provide a standardized platform for integration.
Mohammad et al. <cit.> shed light on the significance of various use cases for the real-world implementation of blockchain technology in higher education management systems, such as learning analytics, adaptive learning, personalized learning, and collaborative learning. These use cases highlight the potential of blockchain to provide a secure, decentralized, and transparent platform for facilitating collaborations, automating administrative processes and improving interoperability, thereby enhancing the trust and adoption of the system. Integrating blockchain technology in higher education management can potentially reduce costs and provide a clear ROI <cit.>. Decentralization reduces infrastructure costs <cit.>, while automation reduces development, training, and maintenance costs. The transparent nature of blockchain reduces auditing and compliance costs. Streamlined processes and reduced costs provide a clear ROI over time.
Blockchain integration can improve scalability in the higher education management system by providing a decentralized network that can handle large volumes of data, reducing latency and improving throughput. It increases data storage capacity and reduces complexity, making it easier to manage and process data. Blockchain's consensus mechanism eliminates intermediaries and reduces transactional overhead, potentially enhancing system efficiency. Blockchain technology can bring many benefits to the higher education management system, including improved security, transparency, and efficiency. It can also address concerns related to privacy, standardization, regulatory compliance, and cost while enhancing scalability.
§.§ Assessing Potential Benefits and Measuring Impact on Efficiency, Security, and Transparency (RQ2)
Blockchain technology exhibits its capacity to improve various aspects of the higher education management system, including records management, authentication, security, privacy, and access management. Table <ref> shows the feature-wise advantages of the implementation of blockchain in the higher education management system and available supported blockchain platforms. One of the key benefits is the provision of immutable records. The utilization of different types of blockchain-based technology for instance Ethereum, ConsenSys Quorum <cit.>, EOSIO <cit.>, Avalanche <cit.>, Cardano <cit.>, Hyperledger Fabric <cit.>, R3 Corda <cit.>, Solana <cit.>, Tezos <cit.>, Polkadot <cit.> guarantees that records cannot be altered, thus providing a secure and dependable method of storing and validating student information <cit.>. The coverage of different higher education system features by various blockchain technologies can be visualized in Figure <ref>. The use of blockchain technology can improve accreditation and certification processes by guaranteeing the genuineness and accuracy of issued credentials. The elimination of a central authority due to the decentralized nature of blockchain technology increases transparency and trustworthiness. By enabling interoperable solutions such as Ethereum-based student record management <cit.>, blockchain technology can enhance record sharing across various platforms, improving the efficiency and effectiveness of record management. This is of utmost importance for recognizing prior learning and lifelong learning.
Blockchain technology can provide cost-effective solutions in the management of higher education, by reducing administrative costs. The utilization of Hyperledger blockchain for learning management systems has been demonstrated to ensure secure storage of educational content while being cost-effective, as evidenced by studies conducted by the authors of papers <cit.> and <cit.>. This can be particularly beneficial for educational institutions operating with limited resources and facing budget constraints.
Skiba et al. <cit.> have demonstrated the potential of blockchain technology to offer secure and transparent transactions in higher education management systems through the utilization of Ripple blockchain for payment systems. This can enhance the trustworthiness and efficiency of financial transactions in the sector, especially for tuition fees, scholarships, and student loan management <cit.>. Further benefits of blockchain technology in this domain include security and transparency, interoperability, and cost-effectiveness of the solutions, as well as the overall improvement in the learning experience. Another significant aspect of higher education management that Palmisano et al. <cit.> explored is the utilization of blockchain technology for implementing anti-plagiarism measures. This can promote original thought and creativity while ensuring tamper-proof tracking and verification of written work <cit.>.
Furthermore, blockchain enables the issuance and verification of micro-credentials, useful for skills-based learning and informal recognition <cit.>. Alam et al. <cit.> emphasized that by utilizing Ethereum, the credentials can be secured and made transparent, thereby improving the credibility of the system. Blockchain also enables secure, decentralized tracking and recording of student progress in online education systems <cit.>.
Overall, blockchain technology offers a range of advantages for the higher education management sector, including secure record-keeping, improved authentication and access management, and cost-effective solutions, as evidenced by various studies. Additionally, the technology provides transparent and trustworthy transactions, enhanced accreditation processes, and anti-plagiarism measures, thereby improving the overall learning experience.
§.§ Challenges and Solutions of Blockchain in Higher Education Management system (RQ3)
The implementation of blockchain technology in the educational sector is still in its early stages, but it has the potential to fundamentally revolutionize how higher education is managed and delivered <cit.>. Despite its advantages, implementing blockchain technology in the higher education management sector faces a number of challenges that must be surmounted in order to realize all of its potential <cit.>. Figure <ref> illustrates the research's emphasis on the difficulties and restrictions of integrating blockchain technology into higher education management systems.
The inherent technological complexity of blockchain technology, according to Yumna et al. <cit.>, is one of the main barriers to its effective integration in the higher education management sector. For institutions and educators, it may be challenging to comprehend and use blockchain technology effectively due to its complexity. For the successful implementation of blockchain systems in education, Anwar et al. <cit.> highlighted the potential challenge of technical knowledge gaps and challenges in navigating the rapidly changing blockchain technology landscape.
A cost-benefit analysis and feasibility study of integrating blockchain in educational institutions are required to get around this problem. The most effective and efficient ways to implement blockchain technology in the higher education management sector should be determined by these studies, which should also look at the technical expertise and training requirements of educators and institutions <cit.>. Additionally, by giving educators and institutions the abilities and knowledge necessary to successfully integrate blockchain technology into their operations, pedagogical training and professional development programs for institutions and educators can help to alleviate this challenge <cit.>.
Fedorova et al. <cit.>, <cit.>, Raimundo et al. <cit.>, and Ghaffar et al. <cit.> have identified limited interoperability as a significant barrier to the adoption of blockchain technology in higher education management systems. This limitation makes it difficult for blockchain systems to be integrated with current infrastructure, which results in a lack of standardization and compliance. The authors suggest a number of approaches to resolve this problem, including technical compatibility evaluations, standardization initiatives, regulatory compliance analysis, application of standardized protocols, cooperation with governmental organizations, and integration of interoperability solutions with current systems. By creating open source technologies that work with a variety of platforms and systems, the Hyperledger project of the Linux Foundation seeks to address the issue of the blockchain sector's poor interoperability. This collaboration between multiple organizations is aimed at establishing standardized protocols and interfaces, leading to greater compatibility in the industry <cit.>, <cit.>, <cit.>.
Scalability is another issue with blockchain technology in the higher education management field. Although blockchain technology has the capacity to support numerous users and transactions, scaling it to meet the needs of a user base that is expanding quickly can be challenging <cit.>. Due to this, educational institutions may find it challenging to take full advantage of the potential advantages of blockchain technology, such as improved efficiency and lower costs.
Institutions must carefully consider the scalability of the blockchain technology they choose to implement in order to overcome this difficulty. This entails carrying out technical analyses of the technology's scalability and assessing its capacity to handle the anticipated volume of transactions and data <cit.>. Additionally, organizations ought to think about implementing alternative blockchain solutions that can help the technology scale, like sharding or off-chain transactions <cit.>.
The final challenge for blockchain technology in higher education management systems is security. There are still dangers associated with using blockchain technology, such as hacking, data breaches, and other security threats <cit.>, despite its many benefits, such as secure data management and decentralized systems. Sensitive data may be lost as a result, and educational institutions' reputations may suffer. The adoption of blockchain technology in student financial aid poses a security challenge because it increases the likelihood of data breaches. To mitigate this, robust security measures such as encryption and multi-factor authentication must be implemented to protect sensitive information and ensure secure operations <cit.>, <cit.>. Table <ref> provides a comprehensive overview of the multifaceted challenges that arise in the implementation of blockchain technology in higher education management systems, and also offers a range of potential solutions to overcome these limitations.
§.§ Our Findings and Shaping Relevant Debates
The results of the analysis suggest that blockchain technology has significant potential to address privacy and security concerns in higher education management systems. According to the visualization in Figure <ref>, privacy and security emerges as the foremost domain for implementing blockchain technology, underscoring its potential to address critical challenges in safeguarding sensitive data and preventing unauthorized access. Based on Figure <ref>, the use of Ethereum is recommended for implementing a comprehensive higher education management system, given its potential applicability across all areas of such a system. Figure <ref> points out the scarcity of research on regulatory and scalability considerations in implementing blockchain in higher education management systems, emphasizing the need for future research to prioritize these critical areas. As the technology continues to evolve and mature, it is important to continue exploring its potential to improve standardization and enhance the overall quality of higher education management.
§ CONCLUSION
Blockchain technology offers a secure and immutable way to store and access educational credentials and records, with the potential to revolutionize the higher education management sector. By reducing the risk of fraud and providing a tamper-proof record, blockchain technology can improve the efficiency of the higher education management system. Despite challenges such as lack of expertise and slow processes, the research shows the benefits of blockchain in higher education management system and the need to address these challenges.
§ ACKNOWLEDGMENT
This work is partially supported by the National Science Foundation (NSF) under award #2100115.
IEEEtranN
|
http://arxiv.org/abs/2306.03168v1
|
20230605182223
|
Composition and Deformance: Measuring Imageability with a Text-to-Image Model
|
[
"Si Wu",
"David A. Smith"
] |
cs.CL
|
[
"cs.CL",
"cs.CV"
] |
Composition and Deformance: Measuring Imageability with a Text-to-Image Model
Si Wu and David A. Smith
July 31, 2023
=============================================================================
Although psycholinguists and psychologists have long studied the tendency of linguistic strings to evoke mental images in hearers or readers, most computational studies have applied this concept of imageability only to isolated words. Using recent developments in text-to-image generation models, such as DALL•E mini, we propose computational methods that use generated images to measure the imageability of both single English words and connected text. We sample text prompts for image generation from three corpora: human-generated image captions, news article sentences, and poem lines. We subject these prompts to different deformances to examine the model’s ability to detect changes in imageability caused by compositional change. We find high correlation between the proposed computational measures of imageability and human judgments of individual words. We also find the proposed measures more consistently respond to changes in compositionality than baseline approaches. We discuss possible effects of model training and implications for the study of compositionality in text-to-image models.[Our scripts are available at <https://github.com/swsiwu/composition_and_deformance>]
§ INTRODUCTION
Did you ever read one of her Poems backward, because the plunge from the front overturned you? — Emily Dickinson <cit.>
Imageability is the capacity of a linguistic string to elicit imagery. Humans can identify highly imageable words, such as “banana”, “beach”, “sunset”, and words with low imageability, such as “criterion”, “actuality”, “gratitude”; however, it’s difficult to measure imageability computationally. Psycholinguists and psychologists have conducted interviews with humans and released databases of the human imageability ratings, such as the Medical Research Council (MRC) Psycholinguistic Database, to help researchers in their fields, as well as other fields such as linguistics and computer science, to measure these intangible attributes of verbal content <cit.>. However, conducting these interviews is costly and laborious. Volunteers had to rate hundreds and thousands of words, thus expanding these psycholinguistics databases to the size of modern Natural Language Processing (NLP) corpora such as Corpus of Contemporary American English (COCA), which has more than 60k lemmas with word frequency and part of speech tags, is unrealistic.
Furthermore, these ratings are only on isolated words. To calculate a sentence’s imageability, many applications have simply added the scores of its component words. Other work uses each word's concreteness level, which research has found highly correlated with imageability <cit.>. These methods, while they are able to roughly measure imageability, dismiss a fundamental property of a sentence: compositionality. Compositionality depends on word order as well as word choice. Sentences with the same component words but with different word order vary not only in their syntax and semantics, but also in their intensity and construction of the imagery, e.g. the famous example: “the dog bit the man” vs. “the man bit the dog”. Previous bag-of-words approaches such as <cit.> would consider a sentence and its backward version as having the same imageability, but to human readers, the level of imageability is significantly altered.
In this paper, we investigate a new computational approach to measure imageability using text-to-image models such as DALL•E mini. We propose two methods to measure the imageability level of both individual words and connected text by generating images with a text-to-image model. We test our methods with both isolated words from the MRC database and connected text from poems, image captions, and news articles, and compare our result with previous bag-of-words methods such as <cit.> and <cit.>.
We propose, firstly, measuring the average CLIP score provided by DALL•E mini and, secondly, calculating the average pairwise cosine similarity between embeddings computed by a pretrained ResNet-18 model. We find that these methods are more highly correlated to human imageability judgments of individual words than other automatic techniques proposed by <cit.>.
We further demonstrate the robustness of our proposed methods by subjecting connected text to various deformances. As suggested by the epigraph from Emily Dickinson, literary scholars design transformations of the original text to elicit more or less intense reactions from human readers <cit.> and help them calibrate their interpretations of literary works. This approach is similar to how contrastive training might be used for models such as word2vec or BERT.
We compare our computational imageability measurements with human judgment collected from Amazon Mechanical Turk (AMT) and find that, on MRC isolated words, our methods have reasonable correlations with human judgment, but on connected text, the correlations vary among different types of connected text.
§ RELATED WORK
Paivio introduced the idea of imageability and defined imageability as “the ease/difficulty with which words arose a sensory experience” <cit.>. Although imageability is associated with many modalities, some researchers have found that visual modality is its most prominent modality <cit.>. Imageability is also highly correlated with concreteness <cit.>, and concreteness has also been found to be most related to visual modality <cit.>. However, some researchers have found their relation to be more complex: words with high imageability and concreteness “evoke sensations connected to the perception of the objects they denote”, words with high imageability and low concreteness “evoke sensations connected to affective arousal” <cit.>. An example for the latter is “anger”, which is highly imageable since most of us have the experience of being angry, but “anger” itself is an abstract word. Since imageability and concreteness are highly correlated, in this paper, we will compare some works using concreteness to measure word imageability, but we agree that these two attributes should ideally be disentangled <cit.>.
The imageability rating of an isolated word in psycholinguistics research is usually derived from interviewing human subjects: asking them how imageable a word or a concept is on a 7-point Likert scale <cit.>. Due to the cost of this procedure, the most popular dataset, MRC Psycholinguistics Database, combines three different sources and, even so, has a limited vocabulary size of 9240.
Others have attempted to expand MRC imageability ratings using synonyms and hyponyms identified in WordNet <cit.>; <cit.> made a small expansion of 3000 words but only on disyllabic words.
Concreteness ratings in the MRC database were obtained in the same way as imageability ratings, although recently <cit.>, using Amazon Mechanical Turk, was able to expand the vocabulary to 37,058 words.
A concrete idea is assumed to have more shared representation than an abstract idea. <cit.> estimate the concreteness of a word in a database by measuring how clustered a word’s associated images according to the image embeddings provided by ResNet-18. <cit.> estimate imageability using more explicit visual features, such as color distributions, local and global gradient descriptions, and high-level features, such as image theme, content, and composition. However, this approach is supervised and requires a large amount of data to train.
The above works are all on isolated words. Our paper aims to measure a sentence’s imageability beyond bag-of-words methods, where the latter is insensitive to compositional change and imagery loss/gain. We will compare both to the work of <cit.>, who measure imageability with bag-of-words models, and of <cit.>, who estimate word concreteness in an unsupervised manner. We will demonstrate our methods’ advantage via correlation with single-word human judgments from MRC (Table <ref>). For connected text, we will inspect the measurement change with respect to human expectation (Table <ref>, Fig <ref>).
§ DATASETS
§.§ Connected text datasets
The Poetry dataset consists of 355 English poems written by different types of poets: imagists, contemporary poets, contemporary amateur poets, and 19th-century poets. They were collected by <cit.> from different poetry websites and publications: Des Imagistes (1914), Some Imagist Poets (1915), Contemporary American Poetry (Poulin and Waters, 2006), Amateur Writing (website), and Famous Poets and Poems (website). In their paper, they use various linguistic and psycholinguistic attributes as features for identifying different poets and poem types. In this paper, however, we do not focus on poem-level classification. The dataset was provided by the authors of <cit.> for research purposes.
Conceptual 12M (CC12M)[Available to download at <https://github.com/google-research-datasets/conceptual-12m>] <cit.> is a dataset of 12 million image-caption pairs, designed for vision-and-language pre-training. We randomly sampled 5000 captions from the dataset. In the 12M captions, real names are replaced with a placeholder token, and some captions contain hashtags. We only use captions containing neither the placeholder token nor #.
Cornell Newsroom Dataset[Available to download after accepting the data licensing terms <https://lil.nlp.cornell.edu/newsroom/download/index.html>] <cit.> is a summarization dataset of 1.3 million articles from 38 major English-language news publications. We extract sentences using nltk "sent_tokenize", then randomly sample 5000 sentences of 10–30 words from the training set original news articles.
§.§ Psycholinguistics databases
MRC Psycholinguistics Database contains 150,837 words and their linguistic and psycholinguistic attributes including imageability, concreteness, familiarity, age of acquisition, and Brown word frequency <cit.>. It was originally published by <cit.> and made machine-usable by Wilson. The later version also added new entries and made corrections to the previous one. Out of 150,837 words, only 9240 entries have imageability ratings, and there are only 4828 unique words with imageability ratings. The imageability ratings range between 100 and 700. Duplicated imageability word entries are all agreeing on the imageability rating but vary in other attributes, such as different word types (noun, adjective, verb, etc.) and having "N/A" or empty entries. This is possibly because the database was a concatenation of 3 different databases. We will denote this imageability rating as imageability in tables and figures.
Brysbaert et al. Concreteness Human Ratings contains 37,058 English words and 2896 two-word expressions that were crowd-sourced from over 4000 participants on AMT. All lemmas in the dataset were known by at least 85% of the participants. Concreteness is defined as the ability to have immediate experience through senses or actions and is more experience-based, as opposed to abstractness, which can't be experienced through senses or actions. It's also more language-based. Raters were asked to rate a word on a 5-point scale, where 5 is the most concrete and 1 is more abstract. The Brysbaert ratings are also highly correlated with the MRC Psycholinguistics Database's concreteness ratings, with r=0.919. In the following experiments and analysis, we will denote this concreteness rating as concreteness.
§ METHODS
§.§ Model
We use DALL•E mini <cit.>[<https://github.com/borisdayma/dalle-mini>] as our text-to-image model. DALL•E mini was developed by a community of developers and researchers as an open-source alternative to the original DALL•E from OpenAI. It is trained on 15 million webcrawled images and has 0.4 billion parameters, compared to the original DALL•E, which is a 12-billion-parameter autoregressive transformer trained on 250 million image-text pairs. The image outputs of the DALL•E model are ranked by their Contrastive Language-Image Pre-training (CLIP) scores, produced by a neural network that learns to correlate images and text <cit.>. Like similarity scores, the CLIP score has the range [0,1]; DALL•E mini adjusts this to a percentage in [0,100].
Specifically, we are using DALL•E mini version "mini-1:v0". One of the hyperparameters for generation is temperature. Temperature acts as a threshold for the quality of the sampled images. We use a temperature of 0.85 to ensure that the sampled images are highly correlated (high CLIP score) while allowing mild visual diversity. When we did a grid search over this parameter on a small set of poems, it did not have a noticeable effect on the average CLIP scores. Lastly, a higher conditioning scale (condscale) will result in a better match to prompt but low diversity, and we decided to use a condscale of 3 (out of 10) informed by a report written by a DALL•E mini developer <cit.>.
We use 4 Tesla V100 SXM2 GPUs for this paper. For each connected text corpus and each deformance, it takes about 24 hours to generate images. For MRC vocabulary, it takes about 24 hours as well. We will release the code we use for this paper in this GitHub repository[<https://github.com/swsiwu/composition_and_deformance>].
§.§ Measurements
A human can evaluate and “feel” how imageable a text is. For example, “mom is angry at me” is not as imageable as “mom’s eyes are throwing knives at me”. A good computational measurement should be able to quantify and estimate the magnitude of imageability, and when the original text is subject to a compositional change (deformance), it should manifest the direction of change in imageability.
To first examine the text-to-image model’s ability to measure the magnitude of imageability, we will first test on isolated words from MRC and benchmark our methods against the MRC human imageability ratings as well as comparing to other bag-of-words measurements in section <ref>. Then in section <ref>, we will test on different connected text. We will alter the original text’s composition and imageability with deformances, and by doing so, we’d like to observe both the magnitude and direction of change using our methods and previous bag-of-words methods. In some deformances, bag-of-words methods will fall short since they don’t consider word order and word choice, while our methods will demonstrate both magnitude and direction of imageability change.
We will also briefly mention how word frequency is unrelated to imageability in section <ref>.
§.§ Measuring isolated word's imageability
For the isolated word experiments, our vocabulary is all the words in MRC Psycholinguistics Database that have imageability human ratings. For each word, we will have the MRC imageability rating (imageability) and the concreteness rating (concreteness) from <cit.>. Then we use DALL•E mini to generate a maximum of 16 images for each word to obtain 3 other measurements:
* The concreteness score introduced by <cit.>, where each image will only have one label which is the word we used to generate that image, and each word will have a maximum of 16 images associated with that word. We will say Hessel et al. when we refer to this score.
* Average CLIP score: our first proposed method. Each image has a CLIP score provided by DALL•E mini when it was generated. We average all generated images' CLIP scores for the target word to produce the average CLIP score. We will denote it as aveCLIP in tables and figures.
* Average pairwise image embedding cosine similarity: our second proposed method. For each generated image, we obtain its image embedding with ResNet-18, then compute the average pairwise cosine similarity score between all images for the target word. We will denote this score as imgSim in tables and figures (a code sketch of both proposed scores follows below). Mathematically, let M be the set of image embeddings, M = {𝐦_1, 𝐦_2, 𝐦_3, ...}, n = |M|, N be the set of unique pairs in M, and k = |N| = n(n-1)/2.
imgSim = 1/k ∑_(𝐦_x, 𝐦_y) ∈ N 𝐦_x · 𝐦_y / (||𝐦_x|| ||𝐦_y||)
This is to be distinguished from Hessel et al.'s method, which calculates the average size of the mutually neighboring images associated with a word and then normalizes it by a random distribution of the image data <cit.>.
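The two proposed scores can be sketched as follows, assuming the generated images for a word have already been saved to disk and the per-image CLIP scores reported by DALL•E mini have been collected separately; the ImageNet-pretrained ResNet-18 weights and the standard preprocessing shown here are our assumptions and may differ in detail from the authors' setup.

import itertools
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# ResNet-18 without its classification head, used as a fixed feature extractor
# (requires torchvision >= 0.13 for the weights API).
resnet = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
encoder = torch.nn.Sequential(*list(resnet.children())[:-1]).eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def ave_clip(clip_scores):
    """aveCLIP: mean of the CLIP scores reported for a word's generated images."""
    return sum(clip_scores) / len(clip_scores)

def img_sim(image_paths):
    """imgSim: mean pairwise cosine similarity of ResNet-18 embeddings."""
    with torch.no_grad():
        embeddings = [
            encoder(preprocess(Image.open(p).convert("RGB")).unsqueeze(0)).flatten()
            for p in image_paths
        ]
    pairs = list(itertools.combinations(embeddings, 2))  # the k = n(n-1)/2 unique pairs
    sims = [torch.nn.functional.cosine_similarity(a, b, dim=0).item() for a, b in pairs]
    return sum(sims) / len(sims)

For a word with 16 generated images, img_sim averages 120 pairwise similarities.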
We visualize these MRC word imageability ratings and their corresponding aveCLIP and imgSim in Figure <ref>, where they are colored by aveCLIP. The figure shows that words with very high average CLIP scores tend to have high imageability human ratings and high average image embedding similarity. In Figure <ref>, we plot aveCLIP vs. imgSim on the MRC words, and it shows a positive linear correlation between them.
§.§ The case of familiarity of MRC vocabulary
We use word frequency to measure familiarity. Word frequency counts are from the Brown Corpus for 3979 out of 4828 MRC words. In Table <ref>, we show the Pearson correlation coefficients between all other measurements and MRC imageability ratings. The <cit.> concreteness ratings and MRC imageability ratings are highly correlated with r=0.780, followed by aveCLIP (r=0.537) and imgSim (r=0.429). Word frequency is essentially unrelated to imageability ratings, showing a negative and minuscule linear correlation.
§ CONNECTED TEXT AND COMPOSITIONALITY
§.§ Preprocessing
For connected text, the prompt input is each individual caption or news sentence, with the exception that for poems we use every 2 lines (no overlaps) as a single prompt. We use 2 poem lines to ensure enough visual and semantic content for DALL•E mini to generate meaningful images. These two lines are joined with a space character since the majority of the poem lines end with a punctuation mark.
§.§ Measuring imageability
We use the same imageability measurements as the single-word experiments, with these specifications:
* Imageability rating: the imageability score for a connected text is the sum of all words' imageability human ratings found in the MRC database divided by the number of words found in the database, as done in <cit.> (a short sketch of this bag-of-words computation follows after this list).
* Concreteness rating: the sum of all words' concreteness ratings found in the <cit.> database divided by the total number of words in the prompt.
* Concreteness score by <cit.>: their method was designed to estimate single word concreteness scores. To get sentence-level concreteness scores, we use the sum of the concreteness scores of all words in a sentence divided by the total number of words. Notice that the same word will have a different concreteness score under a different deformance because a word concreteness score is estimated from all its associated images, and the images are generated using DALL•E mini with deformed sentences as prompt. Also notice that a word concreteness score is not only estimated from one sentence, but all sentences that contain this word under the same type of deformance. We modify their tokenizer so that all punctuation is omitted for a cleaner output.
* Average CLIP score: for each prompt, we sum the CLIP scores of its generated images and divide by the number of images.
* Average pairwise image embedding cosine similarity: we calculate the average pairwise image embedding cosine similarity among the images given a prompt.
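For the bag-of-words baselines in the first two items, the computation reduces to a dictionary lookup and an average. The toy Python sketch below implements the imageability variant (dividing by the number of words found in MRC); the ratings in the example dictionary are illustrative values, not actual MRC entries.

def bow_imageability(sentence, mrc_ratings):
    """Mean MRC imageability rating over the sentence words found in MRC.

    Mirrors the bag-of-words baseline: the score ignores word order entirely.
    """
    hits = [mrc_ratings[w] for w in sentence.lower().split() if w in mrc_ratings]
    return sum(hits) / len(hits) if hits else None

toy_ratings = {"dog": 640.0, "man": 590.0, "bit": 500.0}  # illustrative, not real MRC values
print(bow_imageability("The dog bit the man", toy_ratings))
print(bow_imageability("The man bit the dog", toy_ratings))  # identical score: order-blind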
§.§ Deformances
The above measurements are repeated for each deformance, and we evaluate the percent change for each measurement. We use percent change instead of difference since these scores are on different scales. A good measurement should show the change in imagery caused by the change of composition. The traditional bag-of-words methods displayed in Table <ref> cannot reflect this change if the component words remain the same; methods using DALL•E-generated images such as aveCLIP, imgSim, and Hessel et al. are able to detect changes in both word order and word choice. Hessel et al.'s method, however, does not always correctly show the direction of imagery change.
As defined by <cit.>, a deformance is designed to change text composition by altering its word order and/or word choice. A deformance disturbs the linguistic structure of a sentence, hence it changes not only the surface of the sentence: syntax, word order, and composition of the sentence, but also the underlying information of the sentence: semantics and pragmatics. We perform 4 different types of deformances on each connected text to examine the model’s ability to measure compositional change compared to the bag-of-words methods. The deformances are backward, permuted, just nouns, and replaced nouns, and we provide an example and the elaborated description for each deformance in Table <ref>.
The backward and just nouns deformances appear in <cit.> Deformance and Interpretation, in which they analyze different poetry reading practices. The backward deformance alters the word order: even though having the same set of words, it becomes less intelligible. Permuted is similar to backward: the dependency structure is disturbed, and it becomes chaotic nonsense. Just nouns strips off everything but nouns that are more likely to be imageable, but since there’s no linguistic structure between them, the sentence is less specific in its imagery. Unlike backward and permuted, replaced nouns preserves the sentence structure and bag-of-words imageability ratings but alters the imagery via syntax. Backward, permuted, and replaced nouns are experiments where imageability scores remain the same in the bag-of-words approach, but other methods will manifest the change in imagery.
For replaced nouns, we ignore plural nouns not in the MRC vocabulary. Nouns in both just nouns and replaced nouns deformances are identified by the NLTK tagger.
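A possible implementation of the four deformances, using the NLTK tokenizer and part-of-speech tagger mentioned above, is sketched below. The paper's exact replacement rule for replaced nouns (and the MRC-vocabulary filtering of plural nouns) is simplified here to a random draw from a supplied noun pool, so this is an approximation rather than the authors' code.

import random
import nltk  # requires the 'punkt' and 'averaged_perceptron_tagger' resources

def deform(sentence, kind, noun_pool=None):
    """Apply one of the deformances: backward, permuted, just_nouns, replaced_nouns."""
    tokens = nltk.word_tokenize(sentence)
    if kind == "backward":
        return " ".join(reversed(tokens))
    if kind == "permuted":
        shuffled = tokens[:]
        random.shuffle(shuffled)
        return " ".join(shuffled)
    tagged = nltk.pos_tag(tokens)  # Penn Treebank tags; noun tags start with "NN"
    if kind == "just_nouns":
        return " ".join(w for w, t in tagged if t.startswith("NN"))
    if kind == "replaced_nouns":
        # Simplification: replace every noun with a random noun from noun_pool.
        return " ".join(random.choice(noun_pool) if t.startswith("NN") else w
                        for w, t in tagged)
    raise ValueError(f"unknown deformance: {kind}")

print(deform("the dog bit the man", "just_nouns"))  # "dog man"
print(deform("the dog bit the man", "backward"))    # "man the bit dog the"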
By construction, all these deformances except just nouns cause no change in bag-of-words imageability and concreteness measures. We would expect that applying backward and permuted deformances to a text would make them less imageable, since the word order becomes less comprehensible, and that is precisely what we see with the aveCLIP and (with one exception) the imgSim measures. In comparison, the <cit.> metric mostly rates the output of those deformances as far more imageable.
§ HUMAN JUDGMENT
We recruit workers on Amazon Mechanical Turk (AMT) to rate the imageability of randomly sampled MRC words, poem lines, captions, and news sentences. Workers were informed that they would be participating in a psycholinguistics and natural language processing research study before they accepted the task. For MRC vocabulary, we sample 400 words in total, and for each connected text corpus, we sample 120 sentences for each deformance. We recruit 300 workers in total: the first 100 workers rated 4 MRC words and 6 poem lines each, the second 100 workers rated 6 captions each, and the remaining 100 workers rated 6 news sentences each. Each worker is only allowed to participate in the study once and is paid $0.50 for answering 6 or 10 questions. Every participating worker has HIT approval rates for all Requesters' HITs greater than 95% and number of HITs approved greater than 100, and we require their location to be in the US or Canada. For poems, we mistakenly use two lines of deformed text as one single prompt for workers to rate, and we counter that mistake by averaging the aveCLIP and imgSim scores of the two lines. Table <ref> shows the linear correlations between our measurements and the AMT human judgment. We find that while MRC words and captions have relatively high, positive correlations, the linear correlations for poem lines and news sentences are insignificant. We suspect the AMT rating is noisy given that each instance is only judged by one rater. The distribution of human judgments in the appendix also shows interesting variations in rating behavior across corpora.
§ DISCUSSION
Acquiring human imageability judgments is costly and laborious, which makes expanding existing imageability databases difficult. We propose two computational methods that utilize an open-source text-to-image model to estimate the imageability of isolated words and connected text. Both of our methods require only the input text, and the estimated imageability is calculated based on the properties of the generated images: average CLIP scores and average pairwise image embedding cosine similarity. On isolated words, our proposed methods aveCLIP and imgSim outperform the previous unsupervised method proposed by <cit.>: aveCLIP has a linear correlation of 0.537 with MRC human judgment, followed by imgSim 0.429, and Hessel et al. 0.415. Our proposed methods aveCLIP and imgSim also achieve relatively high linear correlations of 0.350 and 0.316 respectively with AMT human judgment, despite the noisiness of collecting that data.
For connected text, we test our methods on three different corpora: poem lines, captions, and news sentences. Unlike isolated words, sentences' meaning is compositional and depends on word choice and word order. A good sentence imageability method, therefore, should detect the change in imageability caused by compositional change. The biggest downfall of previous bag-of-words methods is that when a sentence is subject to a deformance such as permutation, imageability is unchanged, which is contradictory to human expectation. With a text-to-image model, our methods are able to take the entire sentence as one entity, preserving its composition. We test our methods against a noisy AMT human judgment (Table <ref>) and obtained vastly different performances on different styles of connected text. We further inspect the performance of these methods by examining the percent change between different deformances and the original text (Table <ref>). Our methods overall follow human expectation: the imageability level goes down when the original sentence is under a deformance, although we expect our methods to manifest more significant change. In comparison, the direction of change with the method of Hessel et al. doesn't follow our expectation. Although their method can take DALL•E mini generated images as input images, which allows it to learn compositionality from images generated from deformed prompts, it ultimately calculates each sentence's score as a sum of all words in that sentence. Each word’s concreteness score is estimated from multiple sentences of the same deformance that contain that word. We know that a word’s meaning varies in different sentences, thus this method loses a word’s contextual meaning and can not precisely understand compositionality of a sentence.
The language of these three different corpora is very different. Overall, image captions have the highest average imageability rating as well as Brysbaert et al. concreteness rating, with poems being the second most imageable, and news sentences being the second most concrete. Since image captions' language is usually concise, and it possibly has higher noun density, it’s reasonable to see overall the highest impact from deformances under aveCLIP and imgSim. All three corpora experience negative impact from permutation under both aveCLIP and imgSim, which we assume to be the strongest deformance since it completely randomizes the word order of a sentence. When aveCLIP and imgSim have opposite signs, we notice that a higher absolute value from one measurement also tends to result in a lower absolute value of the other measurement if their signs are different, thus we use Fig <ref> to explore the percent change distribution between two measurements. In Fig <ref>, we look at the original lines with the highest and lowest 10% aveCLIP and imgSim and inspect the percent change between them and their different deformances (Fig <ref>): the lines with the highest scores consistently decrease their scores after deformances, and vice versa. While the lines with top scores follow our intuition, the increase in lines with low scores reverses the mean: given the lowest score is 0, there isn't much room for the imageability score to fall further.
The performance difference between the different connected text corpora also makes us wonder whether the training data of the text-to-image model has an effect on the performance. The performance on captions is more contrastive in Table <ref>, and notably Conceptual 12M is also one of DALL•E mini's training datasets.
Future work should consider further ways of measuring imageability computationally. As more text-to-image models become available and hopefully more transparent with their training process, we hope researchers will be able to compare different models' performance.
§ LIMITATIONS
Since DALL•E mini is trained on English-language material, and since our input text is English only, our proposed methods will only be able to measure the imageability of English isolated words and connected text.
The text-to-image model we use, DALL•E mini, requires GPUs or TPUs to generate images. While we used 4 GPUs (see section 4 for more details) to obtain the results in this paper, we were able to use a single GPU to successfully run the same experiments with longer runtime.
§.§ AMT experiments
We didn't ask the AMT workers what device they were on. Some workers provided feedback via email saying that on mobile phones, the AMT interface didn't show the complete description of the task before they accepted it. Although during the task, detailed instruction was provided, and workers had access to both the brief and long versions of the instruction at any time during the task. It's unclear how the interface will affect the workers' performance and if it would significantly bias their judgment of text imageability.
We were only collecting a single human judgment for each text input. In retrospect, collecting several human ratings per text input and using the average would have reduced noise.
§.§ Other text-to-image models
Stable diffusion: using HuggingFace Stable Diffusion release, we generated images using every 2 poem lines as described in section 5.1. The number of generated images per prompt was significantly less than 16, and most prompts generated images that were labeled as harmful even when the prompt didn't have suggestive content. Given this behavior, we decided not to use Stable Diffusion, but we'd like to see future development of Stable Diffusion that allows it to generate abundant and safe images given a prompt.
§ ETHICS CONCERNS
Potential risks: DALL•E mini carries the risk of generating offensive images and is vulnerable to other misuses. The poetry corpus we use contains language that might cause DALL•E mini to generate suggestive images. We are concerned about the ethical issues raised by DALL•E mini and similar models and hope that further study of such models will lead to guidelines for responsible use.
§ ACKNOWLEDGEMENTS
Si Wu was supported by a grant from the Andrew W. Mellon Foundation's Scholarly Communications and Information Technology program. Any views, findings, conclusions, or recommendations expressed do
not necessarily reflect those of the Mellon Foundation. We would like to thank Justine Kao and Dan Jurafsky for providing us with their dataset, and we appreciate all the feedback from anonymous reviewers.
§ A SAMPLE OF THE ORIGINAL POEM AND ITS DEFORMED TEXT'S GENERATED IMAGES (FIGURE <REF>)
§ AMT INSTRUCTION DETAILS
The header: "Please rate the ease or difficulty with which the word/sentence arouses imagery. If an image quickly forms in your mind when reading, give the text a high rating. Only 1 HIT allowed per user."
The short instruction: "Please rate each item from one (low) to seven (high) according to the ease or difficulty with which the item arouses imagery. Any item which, in your estimation, arouses a mental image (i.e., a mental picture, or sound, or other sensory experience) very quickly and easily should be given a high imagery rating; any word/sentence that arouses a mental image with difficulty or not at all should be given a low imagery rating. Please do not go back to refer to your previous ratings."
§ A SAMPLE SCREENSHOT OF THE AMT INTERFACE (FIGURE <REF>)
§ THE AMT HUMAN RATING DISTRIBUTIONS (FIGURE <REF>)
|
http://arxiv.org/abs/2306.02138v2
|
20230603153708
|
Isospin violating decays of vector charmonia
|
[
"Chao-Qiang Geng",
"Chia-Wei Liu",
"Jiabao Zhang"
] |
hep-ph
|
[
"hep-ph",
"hep-ex"
] |
School of Fundamental Physics and Mathematical Sciences, Hangzhou Institute for Advanced Study, UCAS, Hangzhou 310024, China
University of Chinese Academy of Sciences, 100190 Beijing, China
We study the isospin violating decays of vector charmonia to ΛΣ^0 and its charge conjugate.
They are dominated by the single photon annihilation and can be evaluated reliably with timelike form factors.
We utilize the quark-pair creation model, which is valid for the OZI suppressed decays, to evaluate the form factors.
We obtain the branching fractions of B(J/ψ→ΛΣ^0+c.c.)=(2.4±0.4)×10^-5 and B(ψ(2S)→ΛΣ^0+c.c.)=(3.0±0.5)×10^-6, which are compatible with the corresponding measurements by the BESIII collaboration.
The decay asymmetries are found to be α_J/ψ=0.314 and α_ψ(2S)=0.461, which can be examined at BESIII in the foreseeable future.
Isospin violating decays of vector charmonia
Chao-Qiang Geng, Chia-Wei Liu and Jiabao Zhang[[email protected]]
July 31, 2023
==================================================================================
§ INTRODUCTION
The decays of vector charmonia (ψ) into baryon and antibaryon have recently been thoroughly studied at BESIII.
On the one hand, the branching fractions and decay asymmetries
have been precisely measured
<cit.>.
On the other hand, since the produced baryon-antibaryon pairs are entangled, their sequential decays are utilized as sensitive probes to CP asymmetries <cit.> generated by new physics (NP) <cit.>.
In Table <ref>, we list the branching fractions ( B) and decay asymmetries (α) for ψ decaying to a pair of octet baryon-antibaryons. It is interesting to point out that the measured α of ΣΣ and those of the other channels differ in sign, suggesting large breaking effects of the SU(3) flavor symmetry <cit.>, which might be attributed to NP.
In this work, we focus on the isospin-violating effects.
One way to examine them is to compare the differences among the isospin multiplets. Explicitly, the experimental data on B(J/ψ→Ξ^- Ξ^+ /Ξ^0 Ξ^0 ) show a 10% deviation from the isospin symmetry prediction.
To further study the isospin violation in baryonic decays, the most direct way is to investigate the decays of vector charmonia ψ to ΛΣ^0 and the corresponding charge conjugates, which explicitly violate the isospin symmetry.
In particular, the BESIII collaboration has measured the branching fractions as <cit.>
B(J/ψ→ΛΣ^0+c.c)=(2.83±0.23)×10^-5, B(ψ(2S)→ΛΣ^0+c.c)=(1.6±0.7)×10^-6,
whereas the CLEO-c collaboration found <cit.>
B(ψ(2S)→ΛΣ^0+c.c)=(1.23±0.24)×10^-5,
which is almost an order of magnitude larger than the BESIII's measurement.
Several theoretical studies have been dedicated to these decay modes <cit.>, where the electromagnetic amplitudes are fitted from the experimental data.
As we will see later,
the amplitudes of J/ψ→γ^*→ hh' with h^(') an arbitrary hadron can be calculated by the timelike form factors.
In this work, we adopt the quark pair creation model (QPC), also known as the ^3P_0 model, to describe the quark-antiquark pair creation from the vacuum <cit.>, which may originate from the gluon condensation <cit.>.
The model has been widely used in the OZI-allowed hadronic decays <cit.>.
By exploiting it in these OZI-suppressed modes, we provide a direct evaluation of the branching fractions and decay asymmetries of these isospin-violating channels.
All of the decay modes considered in this work can be tested at BESIII.
This paper is organized as follows.
In Sec. <ref>, we present the formalism, which combines the homogeneous bag and quark pair creation models.
The numerical results are given in Sec. <ref>.
Sec. <ref> is the conclusion.
§ FORMALISM
The leading amplitudes of A(ψ→ hh') are classified into three categories: A^ggg, A^γ and A^ggγ, where A^X represents A( ψ→ X → hh').
Note that A^gg is forbidden by the parity conservation and A^γ / A^ggg∝α_em/ α_s^3≈ 1/2 with α_em(s) being the fine structure constant of QED (QCD)[
This naïve estimation is compatible with the experimental branching fraction of B(J/ψ→γ^* →hadrons)/ B(J/ψ→ ggg) = 0.211± 0.006 <cit.>.
].
In the case of the isospin violating decays, the hierarchy is inverted as A^γ≫ A^ggg since the latter is further suppressed by the smallness of the down quark mass, and
it suffices to consider A^γ solely, depicted in FIG. <ref>.
In the following, we consider the isospin violating decays exclusively and set the quark masses m_u and m_d to be equal. Accordingly, we focus on A^γ and drop its superscript, since no confusion is possible. A^γ is decomposed according to the helicities as [Without loss of generality, we take the velocities of hh' and the polarization of ψ toward ẑ. For numerical evaluations, we use λ_ψ = λ_h -λ_h', where λ_ψ is the angular momentum of ψ toward ẑ. In the Breit frame, the dependence of ϵ^μ(λ_ψ) on λ_ψ is given by ϵ_μ(+)=1/√(2)(0,1,i,0), ϵ_μ(-)=1/√(2)(0,-1,i,0) and ϵ_μ(0)=(M_-/M_ψγ v,0,0,-M_+/M_ψγ), where M_±=M_h± M_h', γ=1/√(1-v^2) and v is the magnitude of the final state velocity.]
A_λ_hλ_h^'=4πα Q_cf_ψ/M_ψ∑_q=u,d,s Q_qM^q_λ_hλ_h^', M^q_λ_hλ_h^'=ϵ_μ⟨λ_h; λ_h^'|q̅γ^μ q|0⟩,
where Q_q is the electric charge of q, λ_h^(') is the helicity of h^('), and f_ψ, M_ψ and ϵ_μ are the decay constant, mass and polarization vector of ψ, respectively.
As the isospin has to be violated, the conserved parts of the amplitudes vanish, given by
M^u_λ_hλ_h^'+M^d_λ_hλ_h^'=M^s_λ_hλ_h^'=0,
leading to ∑_q Q_qM^q_λ_hλ_h^'=M^u_λ_hλ_h^'.
The branching fraction of ψ→ hh^' is given as
B(ψ→ hh^')=1/3|p⃗_h|/8π M_ψ^2Γ_ψ∑_λ_h,λ_h^'| A_λ_hλ_h^'|^2,
where Γ_ψ is the total decay width of ψ, p⃗ is the 3-momentum of h in the rest frame of ψ.
For e^+ e^-→ψ→ hh', there is an additional parameter in distributions
dΓ/dcosθ∝1+αcos^2θ, α=| A_T|^2-2| A_L|^2/| A_T|^2+2| A_L|^2,
where ( A_T, A_L) correspond to ( A_+-, A_++) for ψ→ΛΣ^0, and θ is the angle between the 3-momenta of e^+e^- and hh'.
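As a purely illustrative numerical sketch of the two formulas above, the following Python snippet evaluates B and α from given helicity amplitudes, using parity to relate |A_-+|=|A_+-|=|A_T| and |A_--|=|A_++|=|A_L| when summing over helicities; the masses, width, and amplitude values are placeholders, not results of this work.

import math

def two_body_momentum(M, m1, m2):
    # |p| of the daughters in the rest frame of the decaying particle.
    lam = (M**2 - (m1 + m2)**2) * (M**2 - (m1 - m2)**2)
    return math.sqrt(lam) / (2.0 * M)

def branching_and_alpha(M_psi, Gamma_psi, m_h, m_hp, A_T, A_L):
    # Sum over the four helicity combinations, with |A_-+| = |A_+-| = |A_T|
    # and |A_--| = |A_++| = |A_L| by parity conservation.
    amp2 = 2.0 * (abs(A_T) ** 2 + abs(A_L) ** 2)
    p = two_body_momentum(M_psi, m_h, m_hp)
    B = (1.0 / 3.0) * p / (8.0 * math.pi * M_psi ** 2 * Gamma_psi) * amp2
    alpha = (abs(A_T) ** 2 - 2.0 * abs(A_L) ** 2) / (abs(A_T) ** 2 + 2.0 * abs(A_L) ** 2)
    return B, alpha

# Placeholder inputs in GeV; the amplitudes are arbitrary illustrative numbers.
B, alpha = branching_and_alpha(M_psi=3.0969, Gamma_psi=92.6e-6,
                               m_h=1.1157, m_hp=1.1926, A_T=1e-3, A_L=5e-4)
print(f"B = {B:.3e}, alpha = {alpha:.3f}")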
In this work, we focus on the baryonic final states ΛΣ^0 and its charge conjugate.
The matrix element in Eq. (<ref>) can be further parametrized by the timelike form factors (G_E, G_M) and (f_1,f_2) as
M^q_λ_Λλ_Σ^0 =ϵ_μu̅[G_M(q^2)γ^μ+M_+/q^2(G_M(q^2)-G_E(q^2))q^μ]v,
=ϵ_μu̅[f_1(q^2)γ^μ+f_2(q^2)iσ_μνq^ν_+/M_+]v,
where q^μ=p^μ± p^'μ and p^μ (p^'μ) and u (v) are the 4-momentum and Dirac spinor of Λ (Σ^0).
The form factors are related to the helicity amplitudes as
A_T=√(2(M_ψ^2-M_-^2))G_M(s), A_L =M_+/M_ψ√(M_ψ^2-M_-^2)G_E(s).
In this work, we adopt the homogeneous bag model and ^3P_0 model to evaluate the electromagnetic baryonic form factors in the timelike region.
Such form factors can be measured with high precision at the BESIII experiment <cit.>, which could provide valuable information for theoretical study.
§.§ Homogeneous bag model
In the homogeneous bag model, a baryon state is constructed by acting quark field operators on the vacuum state,
which effectively couples a baryon to quarks by wave functions.
We take Λ baryon as an example, given as
|Λ,p⃗=0,↑⟩=∫[d^3x⃗]1/√(6)ϵ^αβγu^†_aα(x⃗_u)d^†_bβ(x⃗_d)s^†_cγ(x⃗_s)Ψ^abc_↑[ud]s([x⃗])|0⟩,
where q^†_aα(x⃗) is the field operator which creates quark q at position x⃗, a and α are the spinor and color indices, respectively.
The three quarks are combined into a baryon by the color and spin-flavor-spatial wave function ϵ^αβγ and Ψ^abc_[ud]s.
We have used the shorthand notation [x⃗]=(x⃗_u,x⃗_d,x⃗_s) and [d^3x⃗]=d^3x⃗_ud^3x⃗_dd^3x⃗_s.
The wave function Ψ^abc_↑[ud]s for Λ baryon is given as
Ψ^abc_↑[ud]s([x⃗])= N_Λ/√(2)∫ d^3x⃗( ϕ_u↑^a(x⃗_u^ ')ϕ_d↓^b(x⃗_d^ ')-ϕ_u↓^a(x⃗_u^ ')ϕ_d↑^b(x⃗_d^ ')) ϕ_s↑^c(x⃗_s^ '),
where ϕ is the static bag wave function described in Appendix A,
the subscript [ud] indicates the wave function is antisymmetric in swapping u and d quarks, and
x⃗'_q = x⃗_q - x⃗ is the position of quark q with respect to the bag center x⃗.
In the original bag model, the hadron state is described by a single bag with its center located at x⃗=0. However, this configuration is not invariant under Poincaré transformations and thus cannot be considered as an eigenstate of four-momentum.
To reconcile this inconsistency, the homogeneous bag model is introduced by duplicating the bag and distributing the copies homogeneously over the three-dimensional position space (x⃗) <cit.>.
By construction, 3-dimensional space points are treated equally in Eq. (<ref>).
Such a hadron state is more suitable for describing the decays of hadrons, and has been extensively used in various baryon decays <cit.>.
As shown in Eq. (<ref>), the decay of ψ→ΛΣ^0 are described by the electromagnetic form factors of ΛΣ^0.
However, the form factors cannot be evaluated even if the hadron wave functions are known. Besides the quark-antiquark pair produced by the photon, two additional quark-antiquark pairs are needed in order to form a baryon and an antibaryon.
A possible way to calculate the creation matrix element of the baryon-antibaryon pair is to adopt the crossing symmetry on the hadron level <cit.>.
Nevertheless, it is done by assuming the absence of a singularity in form factors.
In this paper, we adopt the ^3P_0 model to describe the creation of quark-antiquark pairs.
By inserting the ^3P_0 transition operator, the timelike form factors are directly evaluated.
Details about this model and our approach can be found in the rest of this section.
§.§ ^3P_0 model
In the ^3P_0 model, the quark-antiquark pairs are created by the transition operator <cit.>
T_q=√(3) γ_q ∫ d^3x⃗ :q(x⃗) q(x⃗): ,
where γ_q is a dimensionless parameter that describes the strength of the creation, and √(3) is a color factor.
The ^3P_0 operator is the simplest effective operator that creates the quark-antiquark pair, which may originate from the fundamental quantum chromodynamics (QCD) interaction between quarks and gluons.
The gluon-quark couplings in QCD as well as the condensations are all effectively absorbed into γ_q <cit.>.
Therefore, it is reasonable to expect a universal, model-independent strength parameter γ_q running with the energy scale as
γ_q(μ)=γ_q0/log(μ/μ_0).
In the phenomenological practice, it suffices to fit the parameters γ_q0 and μ_0 from various decay experiments, which are
adopted from Ref. <cit.> in this work. We emphasize that γ_q shall not depend on hadron wave functions as it essentially describes the creations of (anti)quarks at the quark level.
In the previous literature, the ^3P_0 model is mostly used in cooperation with nonrelativistic (NR) hadron wave functions. At first glance, this may seem to conflict with the bag model, which is essentially a relativistic quark model. However,
the relativistic corrections in ψ→ΛΣ^0 are rather small and the homogeneous bag model (HBM) has a well-defined NR limit. We also present the results in the NR limit in Sec. <ref>.
With these phenomenological models, the matrix element is now given as
M^u_λ_Λλ_Σ^0 =ϵ_μ⟨λ_Λ; λ_Σ^0|u̅γ^μ u T_dT_s |0⟩
=3γ_q^2
∑_[λ] N^λ_Λλ_Σ^0 ([λ]) ∫ d^3x⃗_ΔΓ^λ_ ψ_λ_uλ_u̅(x⃗_Δ)
E_λ_dλ_d̅(x⃗_Δ)E_λ_sλ_s̅(x⃗_Δ),
where [λ] collects all the quark spins and
N^λ_Λλ_Σ^0([λ]) is the spin-flavor overlapping.
The integration over x⃗_Δ is directly related to the integration over all the bag centers x⃗ in Eq. (<ref>), which distinguishes the HBM from the original bag model.
The vertex functions of Γ^ψ_λ_uλ_u̅ and E_λ_qλ_q̅ correspond to the productions of the quark-antiquark pairs due to the QED vertex and T_q, respectively, given as
Γ^λ_ ψ_λ_uλ_u̅(x⃗_Δ)
= ∫ d^3 x⃗_u G
^λ_ ψ_λ_uλ_u̅
=∫ d^3x⃗_uϕ^†_uλ_u(x⃗_u+1/2x⃗_Δ)Υϕ̃^*_u̅λ_u̅(x⃗_u-1/2x⃗_Δ),
E_λ_qλ_q̅(x⃗_Δ)= ∫ d^3 x⃗_q E
_λ_qλ_q̅
=
1/γ∫ d^3x⃗_qϕ^†_qλ_q(x⃗_q+1/2x⃗_Δ) Sϕ̃^*_q̅λ_q̅(x⃗_q-1/2x⃗_Δ),
where Υ= S_vγ^0ϵ_μγ^μ S_-v, S≡ S_vγ^0S_-v and S_± v=(√(γ+1)±√(γ-1)γ^0γ^3)/√(2) boost the wave function towards ± z direction[
At the risk of abuse of notation, we adopt the conventional definition of γ = 1/ √(1-v^2) in S_± v, which shall not be confused with the couplings of γ_q in T_q.
], and
ϕ̃ is the charge conjugation of the wave function ϕ.
Note that there is no spectator quark, in clear contrast to the form factors in the spacelike region.
Besides, without introducing ^3P_0 operators, we have S=1 in Eq. (<ref>), leading to vanishing E_λ_qλ_q̅(x⃗_Δ) for arbitrary spin configurations.
As we can see from Eqs. (<ref>) and (<ref>), one has to perform a twelve-fold integral to obtain the final result.
After choosing the appropriate coordinates for different integrals, we manage to reduce the complexity of the calculation significantly, described in Appendix A.
By plugging in Eq. (<ref>), we find that
A_T= C_B ⟨ 2Γ^+_–E_+-^dE_++^s-Γ^+_-+E_+-^dE_+-^s+Γ^+_+-E_-+^dE_+-^s-2Γ^+_+-E_–^dE_++^s⟩,
A_L= C_B⟨-2Γ^0_-+E_++^dE_+-^s+2Γ^0_++E_-+^dE_+-^s⟩,
C_B=4πα Q_cf_ψ/M_ψ3γ_q^2 N_Λ N_Σ^01/2√(3) ,
where N is the normalization constant and ⟨⋯⟩ stand for ∫ d^3 x⃗_Δ.
§ NUMERICAL RESULTS
We extract f_ψ from the measured B( ψ→ e^+e^-) and find that f_J/ψ=416 MeV and f_ψ(2S)=294 MeV.
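As a cross-check, one can invert the standard leptonic-width relation; the sketch below assumes the common convention Γ(ψ→ e^+e^-)=4πα^2 Q_c^2 f_ψ^2/(3 M_ψ), together with PDG-like inputs, and reproduces decay constants close to the values quoted above.

import math

ALPHA = 1.0 / 137.036   # fine-structure constant
Q_C = 2.0 / 3.0         # charm-quark electric charge

def f_psi_from_Gamma_ee(M_psi, Gamma_ee):
    # Invert Gamma_ee = 4*pi*alpha^2*Q_c^2*f_psi^2 / (3*M_psi)  (assumed convention).
    return math.sqrt(3.0 * M_psi * Gamma_ee / (4.0 * math.pi * ALPHA ** 2 * Q_C ** 2))

# PDG-like inputs in GeV: M(J/psi)=3.0969, Gamma_ee(J/psi)~5.55e-6;
#                         M(psi(2S))=3.6861, Gamma_ee(psi(2S))~2.33e-6.
print(f_psi_from_Gamma_ee(3.0969, 5.55e-6))   # ~0.416 GeV
print(f_psi_from_Gamma_ee(3.6861, 2.33e-6))   # ~0.294 GeV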
The bag radii of Λ and Σ^0 are taken to be 5 GeV^-1.
The running of γ_q(μ) is taken from Ref. <cit.>, fitted from the decay widths of heavy mesons, where μ is the energy scale.
To be conservative, we consider 10% variations of μ, leading to γ_q=0.295(14) for J/ψ and 0.278(13) for ψ(2S).
Remarkably, the numerical results depend little on the bag radius.
The numerical results of the timelike form factors (G_E, G_M) and (f_1,f_2) are listed in Table <ref>.
There are only two dimensionful parameters in the model, which correspond to the bag radius R and the strange quark mass m_s.
The form factors are dimensionless and thus depend only on m_sR, which vanishes in the SU(3)_F limit.
We plot the form factors versus m_sR
in FIG. <ref>, which shows only a slight dependence.
In Table <ref>, we present the branching fractions and decay asymmetries.
The dependence on γ_q is canceled in α_ψ, leading to negligible uncertainties on α_ψ.
The predicted B of J/ψ→ΛΣ^0+c.c. is consistent with the BESIII measurement, whereas the one for ψ(2S) sits between the experimental measurements of the BESIII <cit.> and CLEO <cit.> collaborations.
From Table <ref>, we can see that |G_M| for ψ(2S) is larger than that for J/ψ, which contradicts the common belief that form factors should decrease as q^2 increases.
In this work, we consider two scenarios.
In Table <ref>,
the upper row results of ψ(2S)
are evaluated by taking
G_E,M( M_ψ(2S)^2 ) = G_E,M( M_J/ψ^2 ), while the lower row results by calculating directly within the ^3P_0 model.
The first scenario favors B measured at BESIII, while the second at CLEO-c.
We note that the branching ratio between J/ψ and ψ(2S) in the first scenario is compatible with the naïve expectation of B_ψ(2S)^ee/ B_J/ψ^ee≈ 13% with B_ψ^ee the branching fraction of ψ→ e^+e^-.
To examine the results, we consider the nonrelativistic (NR) limit by taking m_q→∞ and the results are also listed in Table <ref>.
One can see that they are in good accordance, which indicates that the relativistic corrections to the branching fractions are indeed quite small, as we expected.
On the contrary, α_ψ(m_q→∞) =1 differs significantly, because the spin-flipping terms E_±± vanish in the NR limit, resulting in A_L=0.
§ CONCLUSIONS
In this work, we have studied the isospin violating decays of the vector charmonia ψ to ΛΣ^0 and its charge conjugate.
Such decays are attributed to the single photon annihilation and suppressed by the OZI rule.
We utilize the ^3P_0 model to calculate the timelike form factors.
The branching fractions of ψ→ΛΣ^0+c.c. are predicted as 2.4(4)×10^-5 for J/ψ and 0.30(5)×10^-5 for ψ(2S), which are both consistent with the experimental measurements at BESIII.
For the decay asymmetries, we predict α_J/ψ=0.314 and α_ψ(2S)=0.461 for ψ→ΛΣ^0, which can be tested at BESIII in the foreseeable future.
§ ACKNOWLEDGMENTS
We would like to express our sincere appreciation to Prof. Xiaorong Zhou and Zekun Jia for their valuable insights during the development of this work.
This work is supported in part by the National Key Research and Development Program of China under Grant No. 2020YFC2201501 and the National Natural Science Foundation of China (NSFC) under Grant No. 12147103 and 12205063.
§ WAVE FUNCTIONS AND BAG INTEGRALS IN HBM
The quark and antiquark bag wave functions are given as
ϕ_q=[ u χ; iv r̂·σ⃗χ ],
ϕ̃_q̅=iγ^2ϕ_q,
respectively,
where u=√(E_q+m_q)j_0(p_qr),v=√(E_q-m_q)j_1(p_qr), χ is the usual Pauli spinor, with E_q=√(m_q^2+p_q^2), χ_↑=(1,0)^T and χ_↓=(0,1)^T.
The spatial distributions are governed by
the zeroth and first spherical Bessel functions of j_0,1(p_qr), where p_q is the quantized 3-momentum
tan(p_q R)=p_q R/1-m_q R-E_q R ,
and R is the bag radius of the hadron, fitted from the mass spectrum.
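For a given quark mass and bag radius, the lowest quantized momentum can be obtained from the transcendental condition above by a bracketed root search; the following sketch uses SciPy, with an illustrative strange-quark mass, and rewrites the condition to avoid the tangent singularities.

import math
from scipy.optimize import brentq

def lowest_bag_momentum(m_q, R):
    # Solve tan(p R) = p R / (1 - m_q R - E_q R) with E_q = sqrt(m_q^2 + p^2),
    # rewritten as sin(x)(1 - m_q R - E R) - x cos(x) = 0 for x = p R.
    def f(x):
        E_R = math.sqrt((m_q * R) ** 2 + x ** 2)
        return math.sin(x) * (1.0 - m_q * R - E_R) - x * math.cos(x)
    x = brentq(f, 0.1, 3.0)   # the lowest mode lies in this bracket
    return x / R

R = 5.0                                     # bag radius in GeV^-1, as in the text
print(lowest_bag_momentum(0.0, R) * R)      # ~2.04 for a massless quark
print(lowest_bag_momentum(0.28, R) * R)     # illustrative strange-quark mass of 0.28 GeV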
We adopt the following normalization condition for the baryon states
⟨Λ⃗, λ_Λ|Λ⃗^', λ_Λ^'⟩=u_Λ^† u_Λ^'(2 π)^3 δ^3(p⃗_Λ-p⃗ ^'_Λ),
where u_Λ,λ_Λ are the spinor and the spin of the Λ baryon, its normalization factor is found to be
N_Λ=(1/u̅_Λ u_Λ∫ d^3 x⃗_Δ∏_q=u,d,s D^q(x⃗_Δ))^-1/2,
with
D^q(x⃗_Δ)=∫ d^3 x⃗_q D^q=∫ d^3x⃗_qϕ^†_q(x⃗_q+1/2x⃗_Δ)ϕ_q(x⃗_q-1/2x⃗_Δ).
Note that D^q(x⃗_Δ) is independent of the velocity and spin of the baryon.
To evaluate Eqs. (<ref>) and (<ref>),
we use the coordinate shown in Fig. <ref>, where
ẑ and ẑ' are chosen to be parallel to v⃗ and x⃗_Δ, respectively.
By changing the integration variables from d^3x⃗_q to dρ dz' d ϕ and integrating over d ϕ, we arrive at
∫ D^qdϕ=2π( E_2+ E_3+2 E_4),
∫ E_±∓dϕ=∓2π e^∓ iϕ̃sinθ(i E_1+2vcosθ E_3-2vcosθ E_4),
∫ E_±±dϕ=2π(icosθ E_1+v E_2+vcos2θ E_3-2vcos^2θ E_4),
∫ G^±_±∓dϕ
=2√(2)πγ[ivcosθ E_1+ E_2+cos^2θ E_3+sin^2θ E_4],
∫ G^±_∓±dϕ
=2√(2)πγ e^±2iϕ̃sin^2θ( E_3- E_4),
∫ G^±_±±dϕ=∫ G^±_∓±dϕ=√(2)πγ e^± iϕ̃sinθ[± iv E_1+2cosθ( E_3- E_4)],
∫ G^3_±∓dϕ
=∓2π e^∓ iϕ̃sin2θ( E_3- E_4),
∫ G^3_±±dϕ
=2π(- E_2+cos2θ E_3-2cos^2θ E_4),
where
E_1 =z_-/r_-u^+v^--z_+/r_+u^-v^+, E_2=u^+u^-,
E_3 =z_+z_-v^+v^-/r_+r_-, E_4=1/2ρ^2v^+v^-/r_+r_-,
u^±=u(x⃗±x⃗_Δ/2), v^±=v(x⃗±x⃗_Δ/2), z_±=z±|x⃗_Δ|/2 and r_±=√(z_±^2+ρ^2).
To obtain N^λ_Λλ_Σ^0,
the spin-flavor parts of Λ and Σ^0 wave functions are
|Σ^0,↑⟩=1/√(6)(-s_↑d_↓u_↑-s_↑d_↑u_↓+2s_↓d_↑u_↑)|0⟩,
|Σ^0,↓⟩=1/√(6)(s_↓d_↑u_↓+s_↓d_↓u_↑-2s_↑d_↓u_↓)|0⟩,
|Λ,↕⟩=1/√(2)(
d^†_↓u^†_↑-d^†_↑u^†_↓)s^†_↕|0⟩.
Plugging them into Eq. (<ref>), we find
A_T∝1/2√(3) ⟨-Γ^+_–E_++^dE_+-^s+2Γ^+_–E_+-^dE_++^s-Γ^+_-+E_+-^dE_+-^s
+Γ^+_+-E_-+^dE_+-^s-2Γ^+_+-E_–^dE_++^s+Γ^+_++E_–^dE_+-^s⟩,
A_L∝1/2√(3) ⟨+Γ^0_-+E_+-^dE_++^s+Γ^0_–E_++^dE_++^s-2Γ^0_-+E_++^dE_+-^s
-Γ^0_++E_–^dE_++^s-Γ^0_+-E_-+^dE_++^s+2Γ^0_++E_-+^dE_+-^s⟩.
Due to the parity conservation, the amplitudes are invariant under the transformation (λ_ψ,λ_q, λ_q) → (- λ_ψ, - λ_q, -λ_q), leading to
⟨-Γ^+_–E^u_++E^s_+-+Γ^+_++E^u_–E^s_+-⟩=0,
⟨-Γ^0_++E^u_–E^s_+++Γ^0_–E^u_++E^s_++⟩=0,
⟨-Γ^0_+-E^u_-+E^s_+++Γ^0_-+E^u_+-E^s_++⟩=0.
9
BESIII:2016nix
M. Ablikim et al. [BESIII],
Phys. Lett. B 770, 217-225 (2017).
BESIII:2020fqg
M. Ablikim et al. [BESIII],
Phys. Rev. Lett. 125, 052004 (2020).
BESIII:2017kqw
M. Ablikim et al. [BESIII],
Phys. Rev. D 95, 052003 (2017);106, L091101 (2022)
BESIII:2023lkg
M. Ablikim et al. [BESIII],
arXiv:2302.09767;2302.13568;2304.14655.
BESIII:2018cnd
M. Ablikim et al. [BESIII],
Nature Phys. 15, 631-634 (2019).
BESIII:2022qax
M. Ablikim et al. [BESIII],
Phys. Rev. Lett. 129, 131801 (2022).
He:2022jjc
X. G. He and J. P. Ma,
Phys. Lett. B 839, 137834 (2023).
BESIII:2022rzz
M. Ablikim et al. [BESIII],
Phys. Lett. B 838, 137698 (2023);839, 137785 (2023)
BESIII:2021ges
M. Ablikim et al. [BESIII],
Phys. Rev. D 105, 012008 (2022);106, 072008 (2022)
BESIII:2022exh
M. Ablikim et al. [BESIII],
Sci. China Phys. Mech. Astron. 66, 221011 (2023).
Alekseev:2018qjg
M. Alekseev, A. Amoroso, R. B. Ferroli, I. Balossino, M. Bertani, D. Bettoni, F. Bianchi, J. Chai, G. Cibinetto and F. Cossio, et al.
Chin. Phys. C 43, 023103 (2019).
Chen:2006yn
H. Chen and R. G. Ping,
Phys. Lett. B 644, 54-58 (2007).
BESIII:2012xdg
M. Ablikim et al. [BESIII],
Phys. Rev. D 86, 032008 (2012).
BESIII:2021mus
M. Ablikim et al. [BESIII],
Phys. Rev. D 103, 112004 (2021).
Dobbs:2017hyd
S. Dobbs, K. K. Seth, A. Tomaradze, T. Xiao and G. Bonvicini,
Phys. Rev. D 96, 092004 (2017).
Claudson:1981fj
M. Claudson, S. L. Glashow and M. B. Wise,
Phys. Rev. D 25, 1345 (1982).
Zhu:2015bha
K. Zhu, X. H. Mo and C. Z. Yuan,
Int. J. Mod. Phys. A 30, 1550148 (2015).
Ferroli:2020xnv
R. B. Ferroli, A. Mangoni and S. Pacetti,
Eur. Phys. J. C 80, 903 (2020).
Wei:2009zzh
D. H. Wei,
J. Phys. G 36, 115006 (2009).
Jiao:2016syk
J. Jiao [BESIII],
PoS CHARM2016, 046 (2016).
Kivel:2022fzk
N. Kivel,
Eur. Phys. J. A 58, 138 (2022).
Mangoni:2022yqq
A. Mangoni,
arXiv:2202.08542.
BaldiniFerroli:2019abd
R. Baldini Ferroli, A. Mangoni, S. Pacetti and K. Zhu,
Phys. Lett. B 799, 135041 (2019);
Micu:1968mk
L. Micu,
Nucl. Phys. B 10, (1969).
LeYaouanc:1972vsx
A. Le Yaouanc, L. Oliver, O. Pene and J. C. Raynal,
Phys. Rev. D 8, (1973).
Ackleh:1996yt
E. S. Ackleh, T. Barnes and E. S. Swanson,
Phys. Rev. D 54, 6811-6829 (1996).
Simonov:2011cm
Y. A. Simonov,
Phys. Rev. D 84, 065013 (2011).
Weber:1988bt
H. J. Weber,
Phys. Lett. B 218, 267-271 (1989).
Chen:2007xf
C. Chen, X. L. Chen, X. Liu, W. Z. Deng and S. L. Zhu,
Phys. Rev. D 75, 094017 (2007).
Ke:2011wd
H. W. Ke, Y. Z. Chen and X. Q. Li,
Chin. Phys. Lett. 28, 071301 (2011).
Wang:2013lpa
T. Wang, G. L. Wang, H. F. Fu and W. L. Ju,
JHEP 07, 120 (2013).
Gong:2021jkb
K. Gong, H. Y. Jing and A. Zhang,
Eur. Phys. J. C 81, 467 (2021).
Garcia-Tecocoatzi:2022zrf
H. Garcia-Tecocoatzi, A. Giachino, J. Li, A. Ramirez-Morales and E. Santopinto,
arXiv:2205.07049.
ParticleDataGroup:2022pth
R. L. Workman et al. [Particle Data Group],
PTEP 2022, 083C01 (2022).
Xia:2021agf
L. Xia, C. Rosner, Y. D. Wang, X. Zhou, F. E. Maas, R. B. Ferroli, H. Hu and G. Huang,
Symmetry 14, no.2, 231 (2022).
Liu:2022pdk
C. W. Liu and C. Q. Geng,
arXiv:2205.08158.
Geng:2020ofy
C. Q. Geng, C. W. Liu and T. H. Tsai,
Phys. Rev. D 102, 034033 (2020);
J. Zhang, X. N. Jin, C. W. Liu and C. Q. Geng,
Phys. Rev. D 107, 033004 (2023);
C. Q. Geng, X. N. Jin, C. W. Liu, X. Yu and A. W. Zhou,
Phys. Lett. B 839, 137831 (2023).
Jin:2021onb
X. N. Jin, C. W. Liu and C. Q. Geng,
Phys. Rev. D 105, no.5, 053005 (2022).
Segovia:2012cd
J. Segovia, D. R. Entem and F. Fernández,
Phys. Lett. B 715, 322-327 (2012).
BaBar:2011btv
J. P. Lees et al. [BaBar],
Phys. Rev. D 86, 012008 (2012);
M. Ablikim, J. Z. Bai, Y. Bai, Y. Ban, X. Cai, H. F. Chen, H. S. Chen, H. X. Chen, J. C. Chen and J. Chen, et al.
Phys. Lett. B 693, 88-94 (2010).
|
http://arxiv.org/abs/2306.02060v1
|
20230603091833
|
A unified Bayesian inversion approach for a class of tumor growth models with different pressure laws
|
[
"Yu Feng",
"Liu Liu",
"Zhennan Zhou"
] |
math.NA
|
[
"math.NA",
"cs.NA",
"62F15 92-10"
] |
A unified Bayesian inversion approach for a class of tumor growth models with different pressure laws
Yu Feng: Beijing International Center for Mathematical Research, Peking University, No. 5 Yiheyuan Road Haidian District, Beijing, P.R.China 100871
[email protected]
Liu Liu: Department of Mathematics, The Chinese University of Hong Kong, Lady Shaw Building, Ma Liu Shui, Hong Kong, China
[email protected]
Zhennan Zhou: Beijing International Center for Mathematical Research, Peking University, No. 5 Yiheyuan Road Haidian District, Beijing, P.R.China 100871
[email protected]
Yu Feng, Liu Liu, Zhennan Zhou
July 31, 2023
==================================
In this paper, we use the Bayesian inversion approach to study the data assimilation problem for a family of tumor growth models described by porous-medium type equations. The models contain uncertain parameters and are indexed by a physical parameter m, which characterizes the constitutive relation between density and pressure. Based on these models, we employ the Bayesian inversion framework to infer parametric and nonparametric unknowns that affect tumor growth from noisy observations of tumor cell density. We establish the well-posedness and the stability theories for the Bayesian inversion problem and further prove the convergence of the posterior distribution in the so-called incompressible limit, m →∞. Since the posterior distribution across the index regime m∈[2,∞) can thus be treated in a unified manner, such theoretical results also guide the design of the numerical inference for the unknown. We propose a generic computational framework for such inverse problems, which consists of a typical sampling algorithm and an asymptotic preserving solver for the forward problem. With extensive numerical tests, we demonstrate that the proposed method achieves satisfactory accuracy in the Bayesian inference of the tumor growth models, which is uniform with respect to the constitutive relation.
§ INTRODUCTION
In recent years, mathematical modeling has become an increasingly important tool in tumor research. By using mathematical models to simulate tumor growth and evolution, one can better understand the underlying mechanisms that drive tumor progression. However, most existing work on mathematical models in tumor research is limited to formulation and analysis, which means that they are designed to predict how a tumor will develop given certain initial conditions and parameters. And it needs to be emphasized that due to the limitations in understanding the tumor growth mechanism, various models exist in the current literature, such as stochastic models based on reaction-diffusion equations <cit.>, phase field models based on Cahn-Hilliard equations <cit.>, and mechanical models based on porous media equations <cit.>. We suggest the following textbooks <cit.> and review articles <cit.> as references for interested readers.
As tumor growth is a rather complex biological process, it develops in distinguishable phases and is affected by various factors. Many mathematicians are devoted to incorporating these elements into the modeling and analyzing their individual and synergistic effects, such as nutrient concentration <cit.>, degree of vascularization <cit.>, cell reproduction and apoptosis <cit.>, chemotaxis <cit.>. However, the development of the model library also raises an alarming issue: model identification and parameter calibration in the equations are becoming significantly more challenging as well.
The presence of unknown parameters and the difficulty of validating models against experimental data are major obstacles in the practical application of these tumor models. Therefore, studying the inverse problem in tumor growth has both theoretical and practical values. For example, by conducting model selection and parameter inferences, researchers can gain insights into the underlying mechanisms driving tumor growth and progression <cit.>. Also, the inverse problem can be used to optimize treatment strategies for individual patients by predicting the efficacy of different treatments <cit.>.
The study on the inverse problem for tumor growth has a shorter history compared to the forward modeling but has received significant attention in recent years. In the context of tumor growth modeling, the inverse problem aims to estimate the unknown parameters in the model (e.g., proliferation rates, diffusion coefficients, etc.) that govern the growth of tumors via the observed data such as tumor images or size measurements <cit.>. Moreover, various methodologies have also been developed for concerning the inverse problem in tumor growth models, such as Tikhonov regularization method <cit.>, Bayesian inference <cit.>, Machine learning algorithms <cit.> and so on.
In particular, among the methodologies above, Bayesian inference has emerged as a promising approach for solving the inverse problem in tumor growth modeling <cit.>. This approach involves combining prior knowledge about the unknown model parameters with likelihood functions that capture the probability of observing the available data. Bayesian methods have been used to estimate parameters in various tumor growth models, such as reaction-diffusion model <cit.>, phase field model <cit.>, and mechanical model (degenerate diffusion model) <cit.>. Additionally, Bayesian approaches can be combined with Uncertainty Quantification (UQ) methods to generate probabilistic predictions of tumor growth dynamics, providing insight into the uncertainty associated with the estimated model parameters and guiding us in assessing the reliability and robustness of the estimated parameters and their predictions.
Despite the progress made in inverse problems and UQ studies for tumor growth, many challenges remain. In particular, due to the diversity and hierarchy in the model library, it becomes inefficient to design tailored treatments for specific models.
In this paper, we consider the inverse problem of a family of mechanical models for tumor growth described by porous-medium type equations. The tumor cell density evolves as follows
∂/∂ tρ - ∇·(ρ∇ p )=g(x,t,ρ), p=m/m-1ρ^m-1, m≥ 2.
Here, ρ denotes the cell density, p denotes the pressure and g is the growth factor. For simplicity, we take g=h(x) ρ, where h is the growth rate function manifesting the local condition of the growing environment.
We can index these models according to the physical parameter m, which specifies the constitutive relation between density and pressure.
Such models share the same physical laws but obey different constitutive relations, a phenomenon that is reminiscent of kinetic models containing different collision kernels or fluid mechanical models with different pressure relations <cit.>. It is worth mentioning that the physical parameter is also similar to the scaling parameter ε in multiscale models <cit.>, but they also differ significantly, since as m varies, the nonlinearity structure changes as well, which cannot be recovered by rescaling.
Without loss of generality, we consider two types of unknowns in the inverse problem: the non-parametric and the parametric ones. The former refers to unknown functions without additional assumptions on their functional forms, such as the growth rate function h, and the latter refers to finite-dimensional parameters associated with unknowns in some prescribed forms, such as shape parameters specifying the initial profile.
In this work, we study the Bayesian inversion problem for model (<ref>) indexed by m ∈ I= [2,∞), and aim to provide a unified computational framework for such inverse problems. To be more precise, the numerical method should not only produce stable and reliable parameter inference for each model with fixed m, but also exhibit uniform accuracy across the index regime m∈ I. In particular, it is necessary to rule out the possibility that the numerical performance degenerates as m→∞.
From the Bayesian point of view, we seek a probabilistic solution to the inverse problem in the form of a posterior distribution μ_m^y, where y denotes the observed data (which will be omitted in this section) and m is the physical index. However, since the posterior distribution is often formidably high dimensional (or even possibly infinite-dimensional), sampling tools are applied to obtain a statistical presentation of the distributions. In this sense, proposing a unified computational framework for these inverse problems boils down to designing a numerical method that can efficiently sample the collection of posterior distributions {μ_m }_m ∈ I.
Our analysis of the Bayesian problems investigates the properties of the posterior distributions and thus provides theoretical foundations and insights for constructing the numerical scheme. On one hand, we establish the well-posedness theory for the Bayesian inversion problem with a given index m; on the other hand, we show that the posterior distributions converge in the limit m→∞. These results strongly yield a key observation: the probability measures in the set {μ_m }_m∈ I do not differ much besides being absolute continuous with respect to the prior distribution.
In light of this, most prevailing numerical sampling strategies, such as Markov Chain Monte Carlo (MCMC) methods, can be adopted here. Notice that when generating each sample a typical numerical scheme involves computing the likelihood function, which requires efficiently computing the forward problem. Thus a reliable numerical solver for the tumor growth models, which achieves correct approximations for m∈ I, is desired. Thanks to the previous works <cit.>, an asymptotic preserving numerical scheme has been constructed, which can accurately capture the boundary moving speed in the limit m→∞. Hence, such numerical schemes can readily be integrated into our numerical method for the inverse problem.
To sum up, the unified computational method for the Bayesian inversion problems to a family of tumor growth models consists of a plain MCMC method and an asymptotic preserving numerical solver for the forward problem. We highlight that our theoretical analysis only indicates the minimal requirements for treating the collection of posterior distributions, and it is certain that more advanced sampling techniques can be applied to further improve the numerical performance.
We note that, compared with other prevailing inverse problem approaches, the Bayesian approach avoids finding an estimator of the inverse or solving an optimization problem with a regularized functional; thus, it offers plenty of flexibility in dealing with different models within the same approach. In a recent paper <cit.>, the authors also adopt the Bayesian inversion method to compare different tumor growth models and confirm that the pme-based models (<ref>) are more reasonable in the presence of tissue collision.
This paper is organized as follows: in Section <ref>, we introduce a family of tumor growth models described by porous medium type equations, set up the Bayesian inverse problem for these models, and present the unified numerical method. In Section <ref>, we establish the well-posedness and stability theory for the Bayesian inversion problems and characterize the convergence behavior of the posterior distributions in the incompressible limit, which serve as the theoretical foundation for the numerical scheme. The numerical experiments are presented in Section <ref> to verify our theoretical results. Lastly, the conclusion and future work are addressed.
§ PRELIMINARY
In this section, we begin with introducing a family of tumor growth models indexed by a physical parameter m, which are porous medium type equations and possess a Hele-Shaw-type asymptote as the index m tends to infinity. Then, we formulate the inverse problems with respect to the above models and employ a Bayesian framework to quantify parametric and nonparametric unknowns in the models based on some noisy observation data. In the last part of this section, we establish the algorithm for the inverse problem, which works for an extensive range of index m and can capture the asymptotic limit of the solutions.
§.§ A family of deterministic tumor growth model
In the first part, we adopt and introduce a family of well-studied mechanical tumor growth models that are porous medium type equations and are indexed by a physical parameter m specifying the constitutive relation between the pressure and the density (see <cit.>, section 3 ). In each mechanistic model, i.e., fixing a value of the index m, we consider the evolution of the tumor cell density over a specified domain. Moreover, as the physical index m tends to infinity, such equations have natural Hele-Shaw type asymptotes. For a complete introduction to the model, we begin with introducing the notation and physical parameters.
Let Ω be a bounded open set in ℝ^2, and we consider the growth of the tumor in this region. For T>0, define Q_TΩ×(0,T), and Σ_T∂Ω×(0,T). Let ρ(x,t) denote the cell population density, with the cells transported by a velocity field v and the cell production governed by the growth function g(x,t,ρ). Then the continuity of mass yields
∂/∂ tρ+∇·(ρ v)=g(x,t,ρ).
We further assume the velocity v is governed by Darcy's law v=-∇ p, where the pressure p further satisfies the power law: p=m/m-1ρ^m-1, with m (≥ 2) meanwhile acts as the index for the family of problems. Then the continuity of mass equation (<ref>) can be further written into:
∂/∂ tρ-∇·(ρ∇ p)=g(x,t,ρ).
Moreover, we employ the set
D(t)={ρ(x,t)>0}
to denote the support of ρ. Physically, it represents the tumoral region at time t. Then the tumor boundary expands with a finite normal speed s=-∇ p· n|_∂ D, where n stands for the outer normal vector on the tumor boundary.
Observe that the expression for p allows the flux ∇·(ρ∇ p) to be written equivalently as Δρ^m. On the other hand, for the boundary condition, we assume that ρ, and hence p, vanishes on Σ_T. Besides, let f(x) be the initial data; in general it can be an arbitrary function taking values in [0,1]. However, in practice we focus on a specific class of initial data, which simplifies the setting. We leave the detailed explanation for later.
With the above assumptions, for any m≥ 2, the evolution of the tumor cell density satisfies the following system:
[left=(P_m) ]align*
ρ_t=Δρ^m+g(x,t,ρ) on Q_T,
ρ= p = 0 on Σ_T,
ρ(x,0)=f(x) on Ω.
For each fixed m≥ 2, the system (P_m) possesses a unique solution (see Theorem <ref>) under proper assumptions. In this work, we consider the growth function in the following form
g(x,t,ρ)= h(x)ρ, h(x)∈ L^∞(Ω).
The expression in (<ref>) can be understood as the cell production is determined by the cell density and a growth rate function h(x), which reflects the tumor micro-environment that may affect cell growth, such as the distribution of nutrients.
Many research (e.g. <cit.>) indicate that the porous medium type functions have a Hele-Shaw type asymptote as the power m tends to infinity. In particular, the solution of (P_m) tends to the solution of (see Theorem <ref> for precise description):
[left=(P_∞) ]align*
ρ_t=Δp_∞+g(x,t,ρ) on Q_T,
0≤ρ≤1, p_∞≥0, (ρ-1)p_∞=0 on Q_T,
p_∞= 0 on Σ_T,
ρ(x,0)=f(x) on Ω,
if the initial data f is given as a characteristic function f=χ_D_0, where D_0 is a bounded subset of Ω. That means the initial density is saturated in the set D_0 and vanishes outside. In fact, (P_m) converges to (P_∞) for more general initial data (see Theorem <ref>); however, the prescribed form simplifies the setting and is enough for our purpose. It is worth mentioning that in the Hele-Shaw model (P_∞), if the initial data is a characteristic function, then the solution remains a characteristic function for all times, i.e., ρ(x,t)=χ_D(t). We refer to these solutions as patch solutions.
Furthermore, for patch solutions and g(x,t,ρ) given by (<ref>) with h(x)>0, the limit pressure p_∞(x,t) solves the following elliptic problem in the tumoral region D(t) for each time t:
-Δ p_∞ =h(x) in D(t),
p_∞ ≥ 0 in D(t),
p_∞ =0 on ∂ D(t).
And the tumor boundary propagates with a finite normal speed s=-∇ p_∞· n|_∂ D.
§.§ Set up for the inverse problem
In this section, we set up the inverse problem based on the models established in the previous section.
For each m≥ 2, consider the model (P_m) with g(x,t,ρ) given by (<ref>). And the initial data in the form of f=f_0^z(x), where f_0 is a given characteristic function χ_D_0, with D_0⊂𝔹_1(0), i.e., a subset of the unit disk centered at the origin. And z can generally be any parameters for the initial data with a prescribed form, such as the center and the scaling (or size). Then the problem (P_m) can be further written as:
[left=(P_m') ]align*
ρ_t=Δρ^m+h(x)ρ on Q_T,
ρ=p= 0 on Σ_T,
ρ(x,0)=f_0^z(x) on Ω.
Our primary interest is identifying two types of unknowns in the problem (P_m') from some noisy observations that will be specified later. The first type collects the unknowns in the parametric form of the initial data; these constitute a simple finite-dimensional vector. The second kind of unknown, such as the growth rate function h(x), is treated in a non-parametric way. For concision, we collect them in a single variable u as
u=(z,h(x)).
Given û=(ẑ,ĥ(x)), (P_m') has a unique solution (see Theorem <ref>), which we denote by ρ^(m):=ρ^(m)(û). For the observations, we consider data obtained from snapshots of the tumor at several time instances, slightly polluted by noise. We assume that the noise cannot be measured directly but that its statistical properties are known. In this work, the noise is modeled by Gaussian random variables that are independent of the unknown parameters. Mathematically, we generate the noisy observations with respect to ρ^(m) as follows:
* Fix a sequence of smooth test function {ξ_k}_k=1^K with supp(ξ_k)⊆Ω for any 1≤ k≤ K.
* Fix T>0, and let {t_j}_j=1^J (with some fixed J∈ℕ) be an increasing sequence in the time interval [0,T].
* We model the noisy observations using a set of linear functional {l_j,k}_j=1,k=1^j=J,k=K of the solution ρ^(m). Specifically, we assume that l_j,k:f↦ l_j,k(f)∈ℝ is given by
l_j,k(f)=∫_Ωξ_k(x)f(x,t_j)dx.
Then the noisy observations, denote by {y_j,k}_j=1,k=1^j=J,k=K, y_j,k∈ℝ, are expressed as
y_j,k^m=l_j,k(ρ^(m))+η_j,k, 1≤ j≤ J, 1≤ k≤ K,
where η_j,k∼ N(0,σ^2_j,k), i.e., the standard normal distribution with mean 0 and variance σ^2_j,k>0.
For concision, let data space Yℝ^J K. Define the noise vector η(η_j,k)∈ Y and the observation vector y(y_j,k)∈ Y with 1≤ j≤ J and 1≤ k≤ K.
Then (<ref>) can be written in the vector form:
y=𝒢^m(û)+η,
where the forward operator 𝒢^m(û) is the composition of the solution operator ℱ^m:=û↦ρ^(m)(û) and the observation functionals ρ^(m)↦ l_j,k(ρ^(m)), with 1≤ j≤ J and 1≤ k≤ K. And the noise vector η∼ N(0,Γ), where the covariance matrix Γ is a JK by JK diagonal matrix with diagonal elements given by σ^2_j,k>0.
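A minimal sketch of how the noisy observation vector y can be synthesized from density snapshots is given below; the grid, snapshots, and test functions are hypothetical stand-ins for the output of a forward solver.

import numpy as np

rng = np.random.default_rng(0)

def noisy_observations(rho_snapshots, test_functions, dx, sigma):
    # rho_snapshots: shape (J, Nx, Ny), density at times t_1, ..., t_J.
    # test_functions: shape (K, Nx, Ny), smooth test functions xi_k.
    # l_{j,k}(rho) = \int xi_k(x) rho(x, t_j) dx, approximated by a Riemann sum,
    # then perturbed by independent N(0, sigma^2) noise.
    J, K = rho_snapshots.shape[0], test_functions.shape[0]
    y = np.empty((J, K))
    for j in range(J):
        for k in range(K):
            l_jk = np.sum(test_functions[k] * rho_snapshots[j]) * dx * dx
            y[j, k] = l_jk + sigma * rng.standard_normal()
    return y.ravel()   # stacked into a vector in R^{JK}

# Hypothetical example: a 64x64 grid, J = 3 snapshots, K = 2 test functions.
Nx = 64
dx = 1.0 / (Nx - 1)
xs = np.linspace(0.0, 1.0, Nx)
X, Y = np.meshgrid(xs, xs, indexing="ij")
rho = np.stack([np.exp(-((X - 0.5) ** 2 + (Y - 0.5) ** 2) / (0.01 * (j + 1))) for j in range(3)])
xi = np.stack([np.sin(np.pi * X) * np.sin(np.pi * Y), X * (1 - X) * Y * (1 - Y)])
print(noisy_observations(rho, xi, dx, sigma=1e-3))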
For the inverse problem, we assume that m can be measured directly from experimental data, and we consider the following task: given m and the noisy data y, infer the unknown û from (<ref>) in a probabilistic sense.
On the other hand, it is worth emphasizing that we aim to solve a family of inverse problems indexed by m, which takes values in the semi-bounded domain [2, ∞); thus, it is inevitable to discuss the solution behavior as m approaches infinity.
As explained previously, the solution to (P_m') converges to the solution of the following one
[left=(P_∞') ]align*
ρ_t=Δp_∞+h(x)ρ on Q_T,
0≤ρ≤1, p_∞≥0, (ρ-1)p_∞=0 on Q_T,
p_∞= 0 on Σ_T,
ρ(x,0)=f^z_0(x) on Ω.
Let ρ^(∞)(û) be the solution to (P_∞'), with (z,h(x)) replaced by (ẑ,ĥ(x)), then one can define (ℱ^∞, 𝒢^∞) in the same way as (ℱ^m, 𝒢^m). More precisely, each component of the observation vector y is given by
y_j,k=l_j,k∘ℱ^∞(û)+η_j,k:=𝒢_j,k^∞(û)+η_j,k.
In the forward problem, one has ρ^(m)(û)→ρ^(∞)(û) in a proper function space (see Theorem <ref>). For the inverse problem, since we aim to design a numerical method that works for a large range of the physical index m, we not only require the approach to be uniformly well-posed for m∈ [2,∞), but also expect that the numerical performance does not degenerate as m approaches infinity.
We employ a Bayesian approach for the inverse problem to identify the unknown factor û. The Bayesian inversion is a method for solving inverse problems by using Bayes' theorem to update our beliefs about the unknown parameters by leveraging the observed data. We take the identification for problem (P_m') as an example, and the identification for problem (P_∞') can be done similarly.
To begin with, we treat the unknown û as a random variable. To distinguish it from the deterministic û, we use the notation u for the random variable instead. Recall that u contains two types of components. For the parameter z, we assume it is generated from a uniform distribution, and we denote the corresponding measure by μ_0^z. For the random function h(x), we assume it can be represented as:
h(x)=h_0(x)+Σ_j=1^∞γ_jζ_jϕ_j,
where h_0(x) is a deterministic positive L^∞ function, γ={γ_i}_i=1^∞ is a deterministic sequence of scalars, ϕ={ϕ_i}_i=1^∞ is a set of basis functions for a certain function space, and ζ={ζ_i}_i=1^∞ is an i.i.d. random sequence with ζ_i∼ N(0,1); this defines the prior distribution for h, which we denote by μ_0^h. Therefore, u has prior measure μ_0:=μ_0^z×μ_0^h, since z and ζ (and hence h(x)) are independent. We leave the precise description of μ_0 to Section <ref>.
The posterior distribution obtained from Bayesian inversion represents our beliefs about the parameters and their uncertainty after data assimilation. We aim to derive the posterior distribution with respect to the noisy observation data y, which we denote as μ_m^y. The classical theory of Bayes'rule yields the following Radon-Nikodym relation <cit.> with respect to μ_m^y and μ_0:
dμ_m^y/dμ_0(u,y)=1/Z_m(y)exp(-Φ_m(u,y)),
Z_m(y)=∫_Xexp(-Φ_m(u,y))dμ_0(u),
where the potential function Φ_m(u,y) takes the form of:
Φ_m(u,y)=1/2|Γ^-1/2(𝒢^m(u)-y)|^2-1/2|Γ^-1/2y|^2.
Recall that 𝒢^m is the forward operator as in (<ref>), and Γ is the covariance matrix for the observation noise. And we can define (Φ_∞,Z_∞, μ_∞^y) analogously.
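The potential above translates directly into code; the following minimal sketch evaluates Φ for a generic forward map and a diagonal noise covariance, with a toy linear forward map standing in for 𝒢^m.

import numpy as np

def potential(u, y, forward_map, noise_std):
    # Phi(u, y) = 0.5 |Gamma^{-1/2}(G(u) - y)|^2 - 0.5 |Gamma^{-1/2} y|^2,
    # with Gamma = diag(noise_std**2).
    residual = (forward_map(u) - y) / noise_std
    return 0.5 * np.sum(residual ** 2) - 0.5 * np.sum((y / noise_std) ** 2)

# Toy usage with a linear forward map G(u) = A u standing in for the PDE-based map.
A = np.array([[1.0, 0.5], [0.0, 2.0]])
G = lambda u: A @ u
y_obs = np.array([0.9, 2.1])
print(potential(np.array([1.0, 1.0]), y_obs, G, noise_std=np.array([0.1, 0.1])))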
We devote ourselves to the following three main targets in the following:
* Show that the Bayesian inversion problem is well-posed to all m≥ 2.
* Show that the posterior distribution μ_m^y converges as m tends to infinity, in the sense of Hellinger distance (see Definition <ref>).
* With the theoretical understanding above, design a numerical method for the inverse problem that works uniformly well for m∈ [2,∞).
We close this section by presenting the numerical method in the next subsection and leaving the first two targets to the latter chapters.
§.§ Algorithm for the inverse problem
In this section, we establish the unified computational method for a family of tumor growth models in the Bayesian inversion framework. The two main ingredients include a plain MCMC method and an asymptotic-preserving (AP) numerical solver for the forward problem. We will explain our motivations below.
In Section <ref>, we give a theoretical proof that the posterior distribution μ_m^y is well-posed and stable for each m, and further show that it converges as m→∞. This guarantees that the posterior distributions behave like a Cauchy sequence (refer to Theorem <ref>), so they do not vary dramatically as m increases. Due to the similarity among the posterior distributions with different m, a standard sampling method is sufficient to accomplish the task; hence we choose the plain MCMC approach and briefly review it below. More advanced sampling techniques will be considered as future work.
MCMC method: In the Bayesian inversion approach, the complicated probabilistic models can be estimated by numerical sampling methods such as a Markov Chain Monte Carlo (MCMC), which has been widely applied in recent decades <cit.>.
In this paper, we employ a typical MCMC algorithm, Metropolis-Hastings (MH), which constructs a Markov chain by accepting or rejecting samples drawn from a proposal distribution.
Let the probability density μ, defined on X, be the target distribution. The MH algorithm starts with an initial guess θ_0 and then draws new samples according to a proposal distribution q, in our case a normal distribution. Through the acceptance test with acceptance rate α, the samples form an empirical distribution that resembles the target distribution μ. We summarize the MH algorithm below:
Algorithm (Metropolis-Hastings MCMC):
1. Generate θ^'∼ q(·|θ_k) = 𝒩(θ_k, σ_θ^2) with a given standard deviation σ_θ > 0.
2. Calculate the acceptance rate α(θ^',θ_k) = min{1, q(θ_k|θ^')μ(θ^')/ (q(θ^'|θ_k)μ(θ_k))}, which reduces to min{1, μ(θ^')/μ(θ_k)} for the symmetric Gaussian proposal.
3. Set θ_k+1 = θ^' with probability α(θ^', θ_k); otherwise set θ_k+1 = θ_k.
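A minimal runnable sketch of this sampler is given below; the log-target is a toy unnormalized Gaussian standing in for exp(-Φ_m(u,y)) times the prior density, and all names and numbers are illustrative.

import numpy as np

rng = np.random.default_rng(1)

def metropolis_hastings(log_target, theta0, sigma_theta, n_steps):
    # Random-walk Metropolis with a symmetric Gaussian proposal N(theta_k, sigma_theta^2 I),
    # so the acceptance rate reduces to min{1, mu(theta') / mu(theta_k)}.
    theta = np.asarray(theta0, dtype=float)
    log_p = log_target(theta)
    chain = [theta.copy()]
    for _ in range(n_steps):
        proposal = theta + sigma_theta * rng.standard_normal(theta.shape)
        log_p_prop = log_target(proposal)
        if np.log(rng.uniform()) < log_p_prop - log_p:
            theta, log_p = proposal, log_p_prop
        chain.append(theta.copy())
    return np.array(chain)

# Toy target: an unnormalized Gaussian posterior centered at (1, -2).
log_target = lambda th: -0.5 * np.sum((th - np.array([1.0, -2.0])) ** 2) / 0.25
chain = metropolis_hastings(log_target, theta0=[0.0, 0.0], sigma_theta=0.3, n_steps=5000)
print(chain[1000:].mean(axis=0))   # close to (1, -2) after discarding burn-in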
Note that in the MCMC method, at each iteration we need a robust deterministic solver to compute the acceptance rate. Under the constitutive law p(ρ) = m/(m-1) ρ^m-1, when m ≫ 1, the cell density ρ may evolve a support with a sharp interface along its boundary. Moreover, both the nonlinearity and the degeneracy of the diffusion bring significant challenges to numerical simulations.
Asymptotic-preserving forward solver:
In the sampling process, one needs to evaluate the likelihood of the proposal and call for a forward problem solver. To further ensure the framework is unified for the whole family of tumor models, an efficient and robust forward solver that works for all m is needed, thus an AP scheme that can accurately capture the boundary moving speed in the limit m→∞ is necessary. Thanks to the previous work <cit.>, we adopt the AP scheme developed there as our forward solver.
We briefly summarize the key idea below. In <cit.>, a numerical scheme based on a novel prediction-correction reformulation that can accurately approximate the front propagation has been developed. The authors show that the semi-discrete scheme naturally recovers the free boundary limit equation as m →∞. By using proper spatial discretization, their fully discrete scheme has been shown to improve stability, preserve positivity and can be implemented without nonlinear solvers. For convenience, we summarize the numerical scheme developed in <cit.> in the Appendix.
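For orientation only, the following is a naive explicit finite-difference sketch of the forward problem ρ_t = Δρ^m + h(x)ρ with zero Dirichlet boundary data; it is not the asymptotic-preserving scheme of the cited work (its explicit time step must shrink like dx^2/m, which is exactly why an AP solver is needed for large m), and the grid, time step, and initial data are hypothetical.

import numpy as np

def forward_solver(rho0, h, m, dx, dt, n_steps):
    # Naive explicit scheme for rho_t = Laplace(rho^m) + h * rho, zero Dirichlet BC.
    rho = rho0.copy()
    for _ in range(n_steps):
        u = rho ** m
        lap = np.zeros_like(u)
        lap[1:-1, 1:-1] = (u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:] + u[1:-1, :-2]
                           - 4.0 * u[1:-1, 1:-1]) / dx ** 2
        rho = rho + dt * (lap + h * rho)
        rho = np.clip(rho, 0.0, None)                      # keep the density nonnegative
        rho[0, :] = rho[-1, :] = rho[:, 0] = rho[:, -1] = 0.0
    return rho

# Hypothetical setup: characteristic-function initial data on a small disk,
# constant growth rate h = 1, and index m = 4.
N, L = 101, 2.0
dx = L / (N - 1)
xs = np.linspace(-1.0, 1.0, N)
X, Y = np.meshgrid(xs, xs, indexing="ij")
rho0 = (X ** 2 + Y ** 2 < 0.25 ** 2).astype(float)
rho_T = forward_solver(rho0, h=1.0, m=4, dx=dx, dt=1e-5, n_steps=2000)
print(rho_T.max(), (rho_T > 0.5).sum() * dx * dx)          # peak density and rough tumor area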
§ WELL-POSEDNESS, STABILITY, AND CONVERGENCE FOR THE POSTERIOR DISTRIBUTION
In this section, we establish the well-posedness and stability results for the Bayesian inversion problems of (P_m') and (P_∞'). We emphasize that these results are held uniformly for the physical index m∈ I. In the last part of this chapter, to further exclude the possibility that the posterior diverges in the incompressible limit, where m tends to infinity, we prove that the posterior distribution indeed converges in the sense of the Helllinger distance.
§.§ Well-posedness and L^1 contraction for the forward problem
We devote this section to establishing the well-posedness and properties of the forward problems, which also served as the cornerstone for showing the well-posedness, stability, and convergence of the posterior distribution in the inverse problems.
Consider problem (P_m); we begin by recalling the results from <cit.>. First, we make the following assumptions on the initial data f and the growth function g(x,t,ρ).
Let f∈ L^∞(Ω) with f≥ 0, and g:Q_T×ℝ_+→ℝ satisfies:
(i) g(x,t,r) is continuous in r∈ℝ_+ for a.e. (x,t)∈ Q_T,
(ii) g(·,r)∈ L_^1(Ω̅×[0,T)) for any r∈ℝ_+,
(iii) ∂ g/∂ r(x,t,·)≤ K(·) in 𝒟'(0,∞) for a.e. (x,t)∈ Q_T with K∈𝒞(ℝ_+),
(iv) g(·,0)≥ 0 a.e. on Q_T,
(v) there exists M∈ W_^1,1([0,T)) such that M'(t)≥ g(x,t,M(t)) for a.e. (x,t)∈ Q_T and M(0)≥f_L^∞(Ω).
The above assumptions implies
g(·,ρ)∈ L_^1(Ω̅×[0,T)) for any ρ∈ L_^∞(Ω×[0,T))
since
g(·,R)-K(R)R≤ g(·,r)≤ g(·,0)+K(R)R for 0≤ r≤ R,
where K(R)=max_[0,R] K.
Under above assumptions, one has the well-posedness for (P_m). We give the precise description in the following.
Under Assumption <ref>, for any m≥ 2 there exists a unique solution of (P_m) in the sense
[left= ]align*
ρ∈L^∞_([0,T)×Ω)∩𝒞([0,T);L^1(Ω)), ρ≥0, ρ(·, 0)=f(·),
ρ^m∈L^2_([0,T);H^1(Ω)) and ∂ρ/∂t=Δρ^m+g(·,ρ) in 𝒟'(Q_T).
Moreover ρ≤ M a.e. on Q_T.
Besides the well-posedness of the problems {(P_m)}_m=2^∞, the convergence of (P_m) to (P_∞) is characterized as follows.
Under Assumption <ref>, for m≥ 2, let ρ^(m) be the solution of (P_m) given in Theorem <ref>. Then,
* ρ^(m)→ρ^(∞) in 𝒞((0,T);L^1(Ω)) as m→∞.
* Assuming g(·,1)≤g in 𝒟'(Q_T) with g∈ L_^2([0,T),H^-1(Ω)), then there exists a unique (ρ,p_∞) solution of (P_∞) in the sense
[left= ]align*
ρ∈𝒞((0,T);L^1(Ω)), p_∞∈L_^2((0,T),H_0^1(Ω)),
ρ(·, 0)=f(·), 0≤ρ≤1, p_∞≥0, (ρ-1)p_∞=0,
∂ρ/∂t=Δp_∞ + g(·,ρ) in 𝒟'(Q_T),
where f̄=fχ_[p_∞=0]+χ_[p_∞>0], with p_∞ the unique solution of the 'mesa problem':
p_∞∈ H_0^1(Ω), Δp_∞∈ L^∞(Ω), p_∞≥ 0,
0≤Δp_∞+f≤ 1, p_∞(Δp_∞+f-1)=0 a.e. Ω.
And we have ρ^(∞)=ρ.
It is easy to check that the assumptions on g(x,t,r) in Assumption <ref> cover not only the form (<ref>) but also the standard FKPP form employed in <cit.>. On the other hand, if the initial data f is in the form of a characteristic function, then f̄=f. We only consider initial data in such a form. Thus, (P_m') and (P_∞') are sub-cases of (P_m) and (P_∞), respectively. Theorem <ref> and Theorem <ref> provide the existence and uniqueness of solutions to (P_m') and (P_∞'), respectively. And Theorem <ref> also characterizes the convergence of (P_m') to (P_∞').
Next, we introduce the so-called L^1-contraction property with respect to (P_m) and (P_∞). Such property is inherited from the porous medium type equations. Now, we begin with the case for (P_m).
For each m≥ 2, if ρ_1 and ρ_2 are two solutions of (P_m) associated with g_1 and g_2 satisfying Assumption <ref> respectively, then
d/dtρ_1-ρ_2_L^1≤g_1-g_2_L^1, in 𝒟'(0,T).
On the other hand, the limit problem (P_∞) possess similar property.
If (ρ_1,p_1) and (ρ_2,p_2) are two solutions of (P_∞) associated with g_1 and g_2 satisfying Assumption <ref> respectively, then
d/dtρ_1-ρ_2_L^1≤g_1-g_2_L^1, in 𝒟'(0,T).
It is important to observe that Theorem <ref> holds uniformly in m∈ I, which further allows us to control the L^1 norm for the family of problems {(P_m')}_m=2^∞ uniformly. This property brings significant convenience later when showing the well-posedness and stability of the posterior distributions for this family of Bayesian inversion problems.
§.§ Set up for the prior measure
In the Bayesian inversion, we shall focus on the models (P_m') and (P_∞'), and treat u as a random variable. In this section we formulate the prior measure of u.
Recall that u contains two different kinds of random quantities, the parametric unknown z, and the non-parametric unknown h(x). For the former, we can assign a prior measure relatively simply. We denote X_z to be the range of z, and μ_0^z be the prior measure of it.
For a concrete example, considering the case that z=(z_1, z_2) represents the center of the initial data, then we can let the uniform distribution 𝕌[0,Z_max]^2 (with some given Z_max>0) to be the prior measure μ_0^z, and take X_z=[0,Z_max]^2.
However, h(x) is no longer a simple parameter or vector like z, but an element of some function space. Therefore, we have to be more careful in selecting its prior measure. Fortunately, there is a natural way to set up a probability measure on a separable Banach space whose elements can be expressed in the form of an infinite series. That is, one can write h(x) as
h(x)=h_0(x)+∑_i=1^∞γ_iζ_iϕ_i,
where h_0(x) is a deterministic function, γ={γ_i}_i=1^∞ is a deterministic sequence of scalars, ϕ={ϕ_i}_i=1^∞ is a set of basis functions, and ζ={ζ_i}_i=1^∞ be an i.i.d. random sequence.
We demonstrate how to select these scalars and functions in the following.
To begin with, we consider the eigen-problem -Δϕ=λϕ with Dirichlet boundary condition on Ω. Let ϕ_i denote the i-th normalized (with respect to ‖·‖_L^∞) eigen-function, and λ_i be the corresponding eigen-value. Then we make the following assumptions with respect to the expression (<ref>).
* h_0(x) is a known positive deterministic function that belongs to the space L^∞(Ω),
* γ={γ_i}_i=1^∞ is a deterministic sequence with γ_i=λ_i^-s/2 for some s>1,
* ζ={ζ_i}_i=1^∞ is an i.i.d. random sequence with ζ_i∼ N(0,1); thus ζ can be viewed as a random element in the probability space (ℝ^∞,ℬ(ℝ^∞),ℙ), where ℙ denotes the infinite Cartesian product of N(0,1),
* {ϕ_i}_i=1^∞ denote the normalized eigen-functions of -Δ as prescribed.
Then we let X_h denote the closure of the linear span of the functions (h_0,{ϕ_i}_i=1^∞) with respect to the norm ·_L^∞(Ω). Thus, the Banach space (X_h,·_L^∞(Ω)) is separable (recall the fact that L^∞(Ω) is not separable itself).
Furthermore, with the above setting, h(x) becomes a sample from the Gaussian measure μ_0^h:=N(h_0(x), (-Δ)^-s). By a standard argument (see, e.g., Theorem 2.12 in <cit.>), h(x)∈ C^0,t(Ω) holds μ_0^h-a.s. for any t<1∧(s-1). Then, by embedding theory, one can further conclude that h(x)∈ L^∞(Ω) holds μ_0^h-a.s.
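To make this construction concrete, the following Python sketch draws one sample u=(z,h) from the prior just described. The truncation level, the grid size, the values of s, Z_max and h_0, and the use of the unit square with Dirichlet eigenfunctions sin(i_1 π x)sin(i_2 π y) are assumptions of this illustration, not prescriptions of the paper.

import numpy as np

def sample_prior(s=2.0, n_modes=10, Z_max=1.0, h0=2.0, grid=64, rng=None):
    # Draw one sample u = (z, h) from the product prior mu_0 = mu_0^z x mu_0^h.
    # Assumptions of this sketch: Omega = (0,1)^2, Dirichlet eigenfunctions
    # phi_{i1,i2}(x,y) = sin(i1*pi*x)*sin(i2*pi*y) (already normalized in L^inf),
    # eigenvalues lambda_{i1,i2} = pi^2*(i1^2 + i2^2), and gamma_i = lambda_i^(-s/2).
    rng = np.random.default_rng() if rng is None else rng
    z = rng.uniform(0.0, Z_max, size=2)               # parametric part: uniform prior
    x = np.linspace(0.0, 1.0, grid)
    X, Y = np.meshgrid(x, x, indexing="ij")
    h = np.full_like(X, h0)                           # deterministic mean h_0(x)
    for i1 in range(1, n_modes + 1):
        for i2 in range(1, n_modes + 1):
            lam = np.pi**2 * (i1**2 + i2**2)
            zeta = rng.standard_normal()              # zeta_i ~ N(0,1), i.i.d.
            h += lam**(-s / 2.0) * zeta * np.sin(i1 * np.pi * X) * np.sin(i2 * np.pi * Y)
    return z, h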
For convenience, we further define the Banach space for u
X:=X_z× X_h
with respect to the norm
‖ u‖_X:=max{| z|, ‖ h‖_L^∞(Ω)}
where |·| denotes the Euclidean distance on ℝ^2. And the prior measure for u, μ_0, is given by the product measure
μ_0:=μ_0^z×μ_0^h.
Then one has μ_0(X)=1.
§.§ Well-posedness and stability of the inverse problems
In this section we establish the well-posedness and stability results for the inverse problems (P_m') and (P_∞'). And we emphasize that these results hold for any m∈[2,∞], in particular m=∞ corresponds to (P_∞').
For the convenience of the reader, we recall the definition of prior measure and the noise vector here:
* Prior: u∼μ_0 measure on X, with X and μ_0 defined in (<ref>) and (<ref>) respectively.
* Noise: η∼ N(0,Γ), where Γ is a JK by JK diagonal matrix with the diagonal elements given by σ^2_j,k>0.
* Noisy observation: Consider (P_m') with any given u∈ X, then the noisy observation y∼ N(𝒢^m(u),Γ):=ℚ_0, where 𝒢^m is defined in (<ref>). Similarly, for (P_∞') one has y∼ N(𝒢^∞(u),Γ).
For later convenience, we further define the product measure ν_0 to be
ν_0(du,dy)=μ_0(du)ℚ_0(dy).
In the following, we mainly focus on the case (P_m'), but one can establish similar results for (P_∞') without any difficulty.
Our interest is the posterior distribution of u given y, denoted by μ^y_m. With the prior, noise, and noisy observation above, one can first write out the Radon-Nikodym relation between μ_0 and μ^y_m as follows:
dμ_m^y/dμ_0(u)=1/Z_m(y)exp(-Φ_m(u,y)),
Z_m(y):=∫_Xexp(-Φ_m(u,y))dμ_0(u),
where the potential function Φ_m(u,y) is given by:
Φ_m(u,y)=1/2|Γ^-1/2(𝒢^m(u)-y)|^2-1/2|Γ^-1/2y|^2.
And we can define (Φ_∞,Z_∞, μ_∞^y) analogously for the problem (P_∞').
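For illustration, a minimal sketch of this potential for a diagonal Γ is given below; the callable G_m (standing for the forward map 𝒢^m) and the argument names are assumptions of the sketch, not part of the original formulation.

import numpy as np

def potential(G_m, u, y, sigma2):
    # Phi_m(u, y) = 1/2 |Gamma^{-1/2}(G^m(u) - y)|^2 - 1/2 |Gamma^{-1/2} y|^2
    # for a diagonal Gamma = diag(sigma2) of size JK.  G_m is assumed to be a
    # callable wrapping the forward solver of (P_m') that returns the JK
    # observation values l_{j,k}(rho^(m)(u)) as a vector.
    y = np.asarray(y, dtype=float)
    r = np.asarray(G_m(u), dtype=float) - y
    return 0.5 * np.sum(r**2 / sigma2) - 0.5 * np.sum(y**2 / sigma2)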
Justifying the well-posedness and stability of the posterior distribution μ_m^y then reduces to justifying the well-posedness and stability of the Radon-Nikodym relation (<ref>). To do this, following the framework in <cit.>, it suffices to check the following properties of the potential function Φ_m (and, in parallel, of Φ_∞ for μ_∞^y).
Consider (P_m') for any m≥ 2 and let u∼μ_0. Then the potential Φ_m satisfies
* Φ_m(u,y) is ν_0 measurable (defined in (<ref>));
* there exist functions M_i:ℝ^+×ℝ^+→ℝ^+, i=1,2, monotonically non-decreasing, with M_2 strictly positive, such that for all u∈ X and y,y_1,y_2∈𝔹_r(0)⊆ Y:
Φ_m(u,y) ≥ -M_1(r,u_X),
|Φ_m(u,y_1)-Φ_m(u,y_2)| ≤ M_2(r,u_X)| y_1-y_2|;
* if further
exp(M_1(r,u_X))∈ L_μ_0^1(X;ℝ),
holds for any r>0, then the normalization constant Z_m given by (<ref>) is positive ℚ_0-a.s.
The above proposition holds for Φ_∞ as well. In particular, the second property for Φ_∞ holds with the same M_1 and M_2 as for Φ_m. This can be seen from the proof of Proposition <ref> and Lemma <ref>.
Before showing the above properties, we first establish the following auxiliary lemmas.
Let Z be a topological space with Borel σ-algebra B, and assume that G∈𝒞(Z;ℝ) and that π(Z)=1 for some probability measure π on (Z,B). Then G is a π-measurable function.
Let u=(z,h(x)) with h(x) satisfying Assumption <ref>, and let ρ be either ρ^(m) (for any m≥ 2) or ρ^(∞) with initial condition f_0(x+z). Then for any 0≤ t≤ T, we have
ρ(t)_L^1≤π e^u_XT.
According to Theorem <ref> and Theorem <ref> (set ρ_1=ρ, ρ_2=0, and g given by (<ref>)), in either case we have
ρ_L^1 ≤f_0(x+z)_L^1+∫_0^T h(x)ρ_L^1 dt
≤π+u_X∫_0^Tρ_L^1dt.
Finally, we complete the proof by applying Grönwall's inequality.
With the support of the above lemmas, we can easily verify the properties in Proposition <ref>.
For concision, we omit the superscript m and simply use ρ to denote the density.
For (1), according to Lemma <ref>, it suffices to check that Φ_m(u,y) is bounded in each variable. Note that for each component of 𝒢^m(u) we have
l_j,k(ρ)
=∫_Ωξ_k(x)ρ(x,t_j)dx
≤ξ_k_L^∞(Ω)ρ(t_j)_L^1
≤ξ_k_L^∞(Ω)π e^u_XT,
where we used Lemma <ref>. Thus,
|Φ_m(u,y)| =1/2||Γ^-1/2(𝒢^m(u)-y)|^2-|Γ^-1/2y|^2|
≤ C(|𝒢^m(u)|^2+| y|^2)
≤ C(e^2u_XT+| y|^2),
Therefore, Φ_m(u,y) is bounded in each variable and we complete the proof.
For (2), the first inequality holds obviously with
Φ_m(u,y)≥ -1/2|Γ^-1/2y|^2≥ -C_Γ· r^2=:-M_1(r,u_X),
where C_Γ is a constant depending on the covariance matrix Γ. For the second inequality, using the bounds from part (1), we have
|Φ_m(u,y_1)-Φ_m(u,y_2)|
=1/2||Γ^-1/2(𝒢^m(u)-y_1)|^2-|Γ^-1/2y_1|^2-|Γ^-1/2(𝒢^m(u)-y_2)|^2+|Γ^-1/2y_2|^2|
≤ C(| y_1+y_2-2𝒢^m(u)|+| y_1+y_2|)| y_1-y_2|
≤ C(r+π e^u_X T)| y_1-y_2|.
Thus, M_2(r,u_X) can be chosen as
M_2(r,u_X)= C(r+π e^u_X T).
For (3), utilizing (<ref>), one can show that, for ℚ_0-a.s. y, Φ_m(·,y) is bounded on
X_0=[0,Z_max]^2×𝔹_1,
where 𝔹_1 stands for the unit ball in X_h. We denote the resulting bound by M=M(y), then
Z_m≥∫_X_0exp(-M)μ_0(du)>0,
where we used the fact that all balls have positive measure for Gaussian measure on a separable Banach space.
Before we establish the formal well-posedness and stability results, we introduce the Hellinger distance.
Assume that μ_1 and μ_2 are two probability measures that are both absolutely continuous with respect to μ_0, i.e. μ_i≪μ_0 for i=1,2. Then the Hellinger distance d_H(μ_1,μ_2) between μ_1 and μ_2 is defined as
d_H(μ_1,μ_2)=(1/2∫_X(√(dμ_1/dμ_0)-√(dμ_2/dμ_0))^2 dμ_0)^1/2.
Proposition <ref> further yields the following two items.
Consider the inverse problem of finding u=(z,h(x)) from noisy observations of the form (<ref>) subject to ρ^(m) solving (P_m') (m≥ 2), with observational noise η∼ N(0,Γ). Let μ_0 be the prior measure defined in (<ref>) such that μ_0(X)=1, where X is the Banach space defined in (<ref>). Then the posterior distribution μ_m^y given by the relation (<ref>) is a well-defined probability measure.
The well-posedness of the posterior distribution is equivalent to the well-posedness of the Radon-Nikodym relation in (<ref>), which has already been checked in Proposition <ref>.
With the same setup as in Theorem <ref>, if we additionally assume that, for every fixed r>0,
exp(M_1(r,u_X))(1+M_2(r,u_X)^2)∈ L_μ_0^1(X;ℝ).
Then there exists a positive constant C(r) such that for all y_1,y_2∈𝔹_r(0)⊆ Y
d_H(μ_m^y_1,μ_m^y_2)≤ C| y_1-y_2|.
Regarding the integrability condition (<ref>), it is worth noting that the function M_1 can be chosen independent of ‖ u‖_X as specified in (<ref>). Thus, one can apply the Fernique theorem (see Theorem 7.25 in <cit.>) to obtain (<ref>).
The proof of Theorem <ref> is standard (see Section 4 of <cit.>), so we only describe the main idea without providing a detailed proof. By a direct calculation, d_H(μ_m^y_1,μ_m^y_2) can be presented as an integral in terms of |Φ_m(u,y_1)-Φ_m(u,y_2)| with respect to the prior measure. One can then complete the proof by applying the estimate (<ref>) and the integrability condition (<ref>).
Finally, we remark that according to Remark <ref>, the above two theorems (well-posedness and stability) hold for problem (P_∞') similarly.
§.§ Convergence of the posterior distribution
In this section, to further exclude the possibility that the posterior distribution for (P_m') diverges as m tends to infinity, we show that μ_m^y indeed converges to μ_∞^y in the sense of the Hellinger distance. The formal statement is presented in Theorem <ref> below. And we emphasize that the incompressible limit of the forward problems yields pointwise convergence of the potential function Φ_m(u,y), which plays a crucial role in the proof of Theorem <ref>.
For any y∈ Y and u∼μ_0, let μ_m^y and μ_∞^y be the posterior distributions with respect to (P_m') and (P_∞'), respectively. Then
d_H(μ_m^y,μ_∞^y)→ 0 as m→∞.
And for any ϵ>0, there exists M>0 such that
d_H(μ_m_1^y,μ_m_2^y)<ϵ, for any m_1,m_2>M.
Given y_1,y_2∈ Y, there exists M>0 such that
d_H(μ_m_1^y_1,μ_m_2^y_2)<C| y_1-y_2| , for any m_1,m_2>M.
The above corollary follows directly from Theorem <ref> and Theorem <ref> together with the triangle inequality.
We now turn to the proof of Theorem <ref>. We first show that the convergence of the forward problem yields pointwise convergence of the potential function Φ_m(u,y).
For any u∈ X and y∈ Y, let Φ_m and Φ_∞ be the potential functions for (P_m') and (P_∞') defined in (<ref>), then
lim_m→∞|Φ_m(u,y)-Φ_∞(u,y)| = 0.
Directly computing the difference between Φ_m(u,y) and Φ_∞(u,y), we get
|Φ_m(u,y)-Φ_∞(u,y)| =|1/2|Γ^-1/2(y-𝒢^m(u))|^2-1/2|Γ^-1/2(y-𝒢^∞(u))|^2|
≤ C| 2y-𝒢^m(u)-𝒢^∞(u)|·|𝒢^m(u)-𝒢^∞(u)|
≤ C(| y|+π e^u_X T)|𝒢^m(u)-𝒢^∞(u)|.
Observe that for each component of |𝒢^m(u)-𝒢^∞(u)| we have
| l_j,k(ρ^(m)(u))-l_j,k(ρ^(∞)(u))| ≤∫_Ω|ξ_k(x)(ρ^(m)(x,t_j)-ρ^(∞)(x,t_j))| dx
≤ξ_k_L^∞(Ω)ρ^(m)(·,t_j)-ρ^(∞)(·,t_j)_L^1(Ω)
Thus by Theorem <ref> part (1), we can conclude
lim_m→∞|Φ_m(u,y)-Φ_∞(u,y)|=0.
We now proceed to the proof of Theorem <ref>. The proof is similar to that of the stability result (see Theorem 4.5 in <cit.>), so we emphasize the differences here. In the proof of stability, one estimates the difference |Φ_m(u,y_1)-Φ_m(u,y_2)| and checks the integrability condition (<ref>) to complete the proof. In the proof of Theorem <ref>, however, one obtains a sequence of probability integrals involving |Φ_m(u,y)-Φ_∞(u,y)|, which possess a uniform upper bound with respect to m. Therefore, one can complete the proof directly by applying Lemma <ref> and the dominated convergence theorem.
Let Z_m(y) and Z_∞(y) denote the normalization constants for μ_m^y and μ_∞^y so that
Z_m =∫_Xexp(-Φ_m(u,y))μ_0(du)>0,
Z_∞ =∫_Xexp(-Φ_∞(u,y))μ_0(du)>0.
We checked Z_m>0 in Proposition <ref>, and the strict positivity of Z_∞ can be shown in a similar way. Let Φ_m^+(u,y) denote the positive part of Φ_m(u,y) in (<ref>), that is
Φ_m^+(u,y)=1/2|Γ^-1/2(𝒢^m(u)-y)|^2≥0,
and define Φ_∞^+(u,y) similarly. Let 1_E denote the indicator function of the event E. Then by a direct calculation we get
| Z_m-Z_∞| ≤exp(1/2|Γ^-1/2y|^2)∫_X|exp(-Φ_m^+)-exp(-Φ_∞^+)|μ_0(du)
≤ C∫_X(1_|Φ_m^+-Φ_∞^+|≤ 1+1_|Φ_m^+-Φ_∞^+|>1)|exp(-Φ_m^+)-exp(-Φ_∞^+)|μ_0(du)
≤ C∫_X 1_|Φ_m^+-Φ_∞^+|≤ 1·exp(-Φ_∞^+)·|exp(-(Φ_m^+-Φ_∞^+))-1|μ_0(du)
+C∫_X 1_|Φ_m^+-Φ_∞^+|>1·|exp(-Φ_m^+)-exp(-Φ_∞^+)|μ_0(du)
≤ C∫_X 1_|Φ_m^+-Φ_∞^+|≤ 1·exp(-Φ_∞^+)(|Φ_m^+-Φ_∞^+|+O(|Φ_m^+-Φ_∞^+|^2))μ_0(du)
+C∫_X 1_|Φ_m^+-Φ_∞^+|>1·(exp(-Φ_m^+)+exp(-Φ_∞^+))μ_0(du)
=:𝒫_1+𝒫_2.
Note that, by using the fact that Φ_∞^+ and Φ_m^+ are both non-negative, one can easily check that 𝒫_1 and 𝒫_2 are both integrable and possess uniform upper bounds with respect to m. Thus, by the dominated convergence theorem (DCT) and Lemma <ref>, we get
lim_m→∞| Z_m-Z_∞| = 0.
Since both μ_m^y and μ_∞^y are absolutely continuous with respect to μ_0, by the definition of Hellinger distance we have
( d_H(μ_m^y,μ_∞^y))^2≤ I_m^1+I_m^2,
where
I_m^1 =1/Z_m∫_X(exp(-1/2Φ_m(u,y))-exp(-1/2Φ_∞(u,y)))^2 μ_0(du),
I_m^2 =| Z_m^-1/2-Z_∞^-1/2|^2 ∫_Xexp(-Φ_∞(u,y))μ_0(du).
By using an argument similar to the one used to show (<ref>), one can also split the integral I_m^1 into the sets where |Φ_m-Φ_∞|≤ 1 and |Φ_m-Φ_∞|>1. Then, using the fact that Φ_∞^+ and Φ_m^+ are both non-negative, one can apply the DCT to show lim_m→∞I_m^1=0 similarly. And for I_m^2, we have
lim_m→∞I_m^2≤lim_m→∞(Z_m^-3∨ Z_∞^-3)| Z_m-Z_∞|^2=0.
By now, we have completed the proof of the first part of Theorem <ref>, and the second part directly follows from the triangle inequality.
§ NUMERICAL EXPERIMENTS
In this section, we carry out systematic numerical experiments to illustrate the properties of the unified numerical method that we have constructed for the Bayesian inversion problems. In particular, we aim to show that the method produces uniformly accurate parameter inferences with respect to the physical index m and the noise level σ, and to provide a quantitative study of the numerical error for various sample sizes.
§.§ Numerical tests setting
In our numerical experiments, we consider the tumor growth model in 2D:
{[ ∂_t ρ + ∇· (ρ𝐯) = h( 𝐱)ρ, 𝐱∈Ω=[a,b]× [a,b],; ρ(𝐱,0) = ρ_0(𝐱), 𝐱∈Ω, ].
with no-flux boundary condition ρ𝐯 = 0 for 𝐱∈∂Ω.
Here 𝐯 is determined by the gradient of the pressure
p(𝐱,t) which is related to a power of density ρ(𝐱,t), precisely
𝐯 = - ∇ p, p = m/m-1ρ^m-1, m>1.
We first introduce how we measure the accuracy of our numerical algorithm. As an illustrative example, let u be the parameter of interest and let the posterior samples generated by the Metropolis-Hastings MCMC method be denoted by {u_i}_i=1^N, with N the sample size after a 25% burn-in phase (we denote by M the sample size before the burn-in phase). Since the MCMC approach is a sampling method, we repeatedly run the simulation and take the average in order to improve the accuracy of the algorithm. Setting the number of simulation runs to K (K=15 in our tests), we estimate the posterior mean of u by
𝔼(u̅) ≈1/K∑_k=1^K u̅^(k) = 1/K1/N∑_k=1^K ∑_i=1^N u_i^(k),
where {u_i^(k)}_i=1^N are the posterior samples obtained by k-th simulation run for the MCMC algorithm, and
u̅^(k) = 1/N∑_i=1^N u_i^(k)
is the corresponding estimator for the mean value.
To measure the distance between 𝔼(u̅) and the true data u^∗, which is assumed known, the mean squared error is evaluated as follows:
MSE := 𝔼[ (u̅ - u^∗)^2 ] ≈1/K∑_k=1^K (u̅^(k) - u^∗)^2.
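The averaging procedure just described can be summarized in a few lines; the function run_mcmc below is a placeholder for any Metropolis-Hastings sampler returning M posterior samples and is an assumption of this sketch, not part of the paper's implementation.

import numpy as np

def estimate_mean_and_mse(run_mcmc, u_true, K=15, M=1000, burn_in=0.25):
    # Repeat the MCMC simulation K times, discard the burn-in phase of each run,
    # and approximate E(u_bar) and MSE = E[(u_bar - u*)^2] by empirical averages.
    means = []
    for _ in range(K):
        samples = np.asarray(run_mcmc(M), dtype=float)   # M samples of one run
        kept = samples[int(burn_in * M):]                # keep the last 75 %
        means.append(kept.mean(axis=0))                  # u_bar^(k)
    means = np.array(means)
    return means.mean(axis=0), ((means - u_true) ** 2).mean(axis=0)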
§.§ Numerical experiments
§.§.§ Test 1
In this test, we assume that the growth rate h is spatially homogeneous and that it is the only unknown parameter to be inferred. Let the computational domain be
Ω = [-2.2,2.2]× [-2.2,2.2], set the spatial step Δ x = Δ y = 0.1 and temporal step Δ t=0.005.
In all of our tests, the Gaussian noise is assumed to follow the distribution N(0,σ^2), and we set
m=40 unless otherwise specified.
Test 1 (a)
Consider the initial data
ρ(x,y,0) =
0.9, √(x^2 + y^2) - 0.5 - 0.5sin(4arctan(y/x)) <0,
0, otherwise,
for (x,y)∈Ω.
Assume the prior distribution for the constant growth rate h is the Gaussian distribution N(μ, c_0^2) with μ=c_0=0.5.
Let the true value be h^∗=1, and let the observation data be the density at time T=0.5 with Gaussian noise added.
In Table <ref>, we fix the physical index m=40 and the number of iterations M=1000, while letting the noise level σ vary. One can observe that the accuracy improves as σ decreases, with the mean squared error dropping from the level of O(10^-1) to O(10^-3). This also implies that, when the noise level is relatively small, the numerical method correctly captures the quantity of interest with satisfactory accuracy.
In Table <ref>, for different σ we test different numbers of sampling iterations M. As M increases from 100 to 800, the level of the mean squared errors decreases from O(10^-1) to O(10^-2), which is expected due to the decrease of the sampling error.
In Fig. <ref>, we plot the histogram of the posterior samples of the parameter h and observe how the samples accumulate around the true value h^∗=1. A comparison between the prior and posterior distributions of h is shown on the right, with the Gaussian prior described above.
Test 1 (b)
In this test, we consider observation data given by the density convolved with Gaussian functions plus noise, which models blurry and noisy observations.
The centers of the Gaussian functions are chosen to be the grid points (x_i,y_j), where
i∈{16, 20, 22, 24, 24, 26, 27, 28, 32}, j ∈{20, 24, 30, 26, 30, 15, 20, 30, 25},
and the standard deviation of the noise is 0.1. The prior distribution for h is assumed as the Gaussian distribution
N(μ, c_0^2) with μ=c_0=0.5. Other settings are the same as in Test 1 (a), and we fix the sample size M=800 in all tests of Test 1 (b).
In the following, we further investigate the numerical performance of the proposed method for different physical indices m and noise levels σ. In the upper panel of Table <ref>, we let m=40 and test different σ; in the lower panel, we fix σ=0.25 and vary m. To help interpret the numerical results, we plot in Fig. <ref> the posterior distributions for different σ while fixing m=40, and for different m while fixing σ=0.25.
We observe from the left panel of Fig. <ref> that as σ decreases, the posterior distribution contracts and becomes more peaked while its center moves towards the true value. Our numerical results give a faithful representation of this contracting behavior of the posterior distribution: as the variance and the bias of the posterior decrease, the mean squared error of the estimator decreases accordingly.
When the physical index m changes, we observe from the right panel of Fig. <ref> that the posterior does not exhibit a clear trend; however, the profiles do not differ much either. This observation confirms our analysis of the convergence behavior of the posterior distributions, and our numerical results show comparable accuracy although the observation data actually differ between these models. Recall that, given the same unknown, the forward models generate different results even in the absence of noise. In addition, we have only assumed that the noises added to these models share the same statistical properties.
§.§.§ Test 2
In Test 2, we consider multi-dimensional random parameters that contain the constant growth rate h and the spatial center (c_1, c_2) of the initial density. Let the initial data be given by
ρ(x,y,0) =
0.9, √((x-c_1)^2 + (y-c_2)^2) - 0.5 - 0.5 sin(4arctan((y-c_2)/(x-c_1))) <0,
0, otherwise,
for (x,y)∈Ω. For the prior distributions, we assume that the constant growth rate h follows the uniform distribution on [0.5,0.8], while c_1 and c_2 follow the uniform distribution on [-0.5, 0.5]. Let the underlying true data be h^∗=0.6, c_1^∗=0.2, c_2^∗=-0.3, and let the observation data be the density at the final time T=0.5 with Gaussian noise added. In Test 2, we let M=600 and m=40 unless otherwise specified.
Note that in this case, the sampling space is three-dimensional, and we can no longer expect the posterior distributions to have simple asymptotic behavior as σ or m varies. But still, our results below show that we are able to obtain accurate results for a large range of parameter combinations.
In the upper panel of Table <ref>, we fix m=40 and vary σ; in the lower panel, we set σ=0.1 and let m change. In both cases, the mean square errors for h and c_1, c_2 all remain at the level of O(10^-3) to O(10^-2). A similar conclusion can be drawn as before: our algorithm is uniformly accurate with respect to both σ and m.
In Fig. <ref>, we plot the histogram of posterior samples for parameters h and c_1. One can notice that with a finite noise level σ, the “center” of the distribution for the posterior samples may not be close to the underlying true data which is given by h^∗=0.6 and c_1^∗=0.2. Comparing the two examples with σ=0.5 and σ=0.02, one can observe that the smaller the σ is, the closer and more concentrated the samples are towards the true data for h and c_1.
§.§.§ Test 3
In Test 3, we consider the case where the growth function h is spatially dependent and takes the truncated form of (<ref>) given by
h(𝐱) = h_0(𝐱) + ∑_i=1^3 γ_i ζ_i ϕ_i(𝐱).
Let the observation data be the density at the final time T=0.5 with Gaussian noise added. Let the computational domain be Ω̃= [-0.5,2.5]×[-0.5,2.5], and 𝐱 = (x, y)∈Ω̃. We set h_0=2 and take the initial data to be
ρ(x,y,0) =
0.9, (x-1)^2 + (y-1)^2 < 0.3,
0, otherwise.
Now we define g_i := γ_i ζ_i and consider g_i as the random variables. Note that i=1, 2, 3 corresponds to i_1=i_2=1; i_1=1, i_2=2 and i_1=2, i_2=1 respectively.
Let {ζ_i} be i.i.d. random variables that follows ζ_i ∼ N(0,1). We choose
ϕ_i (i_1, i_2) = sin(i_1 π x_1) sin(i_2 π x_2), γ_i(i_1, i_2) = 1/λ_i(i_1, i_2)= 1/π^2 (i_1^2 + i_2^2).
Thus γ_1 = 1/(2π^2), γ_2 = γ_3 = 1/(5π^2). Let the true data be ζ = (0.5, 0.3, 0.2); then the true data for the random variable g is (0.0253, 0.0061, 0.0041). We assume that the prior distribution for g_i follows the Gaussian N(0, c_i^2) with c_1=0.04, c_2=0.02 and c_3=0.01.
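For reference, the true growth rate of this test can be evaluated directly; the following sketch (function name ours) implements h_0 + Σ_i γ_i ζ_i φ_i with the three modes listed above.

import numpy as np

def h_true(x, y, zeta=(0.5, 0.3, 0.2), h0=2.0):
    # Evaluate h(x,y) = h_0 + sum_i gamma_i * zeta_i * phi_i(x,y) for the three
    # modes (i1,i2) in {(1,1), (1,2), (2,1)} with gamma = 1/(pi^2*(i1^2+i2^2)).
    modes = [(1, 1), (1, 2), (2, 1)]
    h = h0
    for (i1, i2), z in zip(modes, zeta):
        gamma = 1.0 / (np.pi**2 * (i1**2 + i2**2))
        h = h + gamma * z * np.sin(i1 * np.pi * x) * np.sin(i2 * np.pi * y)
    return h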
In this test, since h(𝐱) is spatially dependent, we approximate the expected value and mean squared error by using the following formulas:
𝔼(h̅(𝐱)) ≈1/K∑_k=1^K h̅^(k)(𝐱) = 1/K1/N∑_k=1^K ∑_i=1^N h_i^(k)(𝐱),
MSE: = 𝔼[ h̅(𝐱) - h^∗(𝐱)_L^2^2] ≈1/K∑_k=1^K h̅^(k)(𝐱) - h^∗(𝐱)_L^2^2,
where h^∗(𝐱) is the true data for h(𝐱), shown on the left-hand-side of Fig.<ref>, and h̅^(k) is defined in (<ref>). In all tests of Test 3, we let the sample size M=500.
In the upper panel of Table <ref>, we fix m=40 and change σ; in the lower panel, we set σ=0.25 and let m change. One can observe uniform accuracy in both cases of varying m and σ, since the mean squared errors remain as small as O(10^-3) to O(10^-5).
In Fig. <ref>, on the left we plot the true h(x,y) function; on the right we compare the prior and posterior means of h(x,y), which are computed pointwise at each mesh point (x,y) in the domain.
In Fig. <ref>, for different choices of m (m=5 or 50), we plot the density solution at time T=0.5, obtained by using the posterior mean of h(x,y) at each position (x,y)∈Ω. We observe that, with different pressure laws indexed by m, the density profiles, as well as their free boundaries, show noticeable discrepancies. However, our numerical method generates accurate inferences of the growth rate functions in both cases, as they deviate from the true data only by a small amount, as shown in the lower panels of Fig. <ref>.
§ CONCLUSION AND FUTURE WORK
In this paper, we investigate the data assimilation problem for a family of tumor growth models that are represented by porous-medium type equations, which is indexed by a physical parameter m∈[2,∞) characterizing the constitutive relation between the pressure and density. We employ the Bayesian framework to infer parametric and nonparametric unknowns that affect tumor growth from noisy observations of tumor cell density. We establish the well-posedness and stability theories for the whole family of Bayesian inversion problems. Additionally, to guarantee the posterior has unified behavior concerning the constitutive relations, we further prove the convergence of the posterior distribution in the limit referred to as the incompressible limit, m →∞. These theoretical findings guide us in the development of the numerical inference method for the unknowns. We propose a general computational framework for such inverse problems, which encompasses a typical sampling algorithm and an asymptotic preserving solver for the forward problem. We verify through extensive numerical experiments that our proposed framework provides satisfactory and unified accuracy in the Bayesian inference of the family of tumor growth models.
Finally, we conclude our paper by outlining potential directions for future research. We propose that at least three worthwhile directions merit further exploration. Firstly, we will further employ the real experimental data like that in <cit.> for the data assimilation problems of such tumor growth models. Secondly, in this paper, m is assumed to be a known parameter, but it remains interesting to explore the possibility of inferring the index m as well as other unknowns in the model. Thirdly, we may study the Bayesian inversion for other problems that possess nontrivial asymptotic limits. We save these topics for future studies.
§ ACKNOWLEDGMENTS
The work of Y.F. is supported by the National Key R&D Program of China, Project Number 2021YFA1001200. The work of L.L. is supported by the National Key R&D Program
of China, Project Number 2021YFA1001200, the start-up grant of CUHK, Early Career Scheme 2021 (No. 24301021) and General Research Fund 2022 (No. 14303022) both by Research Grants Council of Hong Kong. The work of Z.Z. is supported by the National Key R&D Program of China, Project Number 2021YFA1001200, and NSFC grant number 12031013, 12171013. We thank Xu'an Dou for the help in numerical simulations, and Min Tang for the helpful discussions.
§ APPENDIX
We give a summary of the numerical discretization studied in <cit.>. A time-splitting method based on prediction-correction is proposed:
{[ ∂_tρ + ∇·(ρ𝐮) = ρ G(c) ,; ∂_t u = m ∇(ρ^m-2(∇· (ρ𝐮) - ρ G(c))), ] {[ ∂_t ρ=0; ∂_t 𝐮=-1/ε^2(𝐮+m/m-1∇ρ^m-1) . ]..
Given (ρ^n, 𝐮^n), one solves the left system in (<ref>) for one time step and obtains the intermediate values
(ρ^∗, 𝐮^∗), then solve the second system in (<ref>) to get (ρ^n+1, 𝐮^n+1).
When ε→ 0, the second system in (<ref>) reduces to
∂_t ρ=0, 𝐮(x, t)=-m/m-1∇ρ^m-1(x, t).
In this projection step, notice that ρ^∗ = ρ^n+1. The time-splitting method for the fully relaxed system becomes
{[ ∂_t ρ+∇·(ρ𝐮)=ρ G(c),; ∂_t 𝐮=m ∇(ρ^m-2(∇·(ρ𝐮)-ρ G(c))), ].
𝐮(x, t)=-m/m-1∇ρ^m-1(x, t) .
An implicit-explicit temporal discretization for the system (<ref>) is given as follows:
𝐮^n*-𝐮^n/Δ t =m ∇((ρ^n)^m-2(∇·(ρ^n 𝐮^n *)-ρ^n G(c^n, p(ρ^n)))),
ρ^n+1-ρ^n/Δ t =-∇·(ρ^n 𝐮^n *)+ρ^n+1 G(c^n, p(ρ_n)),
𝐮^n+1 =-m/m-1∇(ρ^n+1)^m-1.
Each of the equations above can be solved consecutively, which means that no nonlinear solver is needed in implementing the scheme. For the spatial discretization, we refer to <cit.> for details.
In the 1D case, a staggered grid is used for u and a regular grid for ρ, namely
ρ_i(t)=1/Δ x∫_x_i-1/2^x_i+1/2ρ(x,t) dx,
u_i+1/2(t)=u(x_i+1/2,t).
In the 1D case, the space discretization for u^n* in (<ref>) is by the centered finite difference method,
u_i+1/2^n* - u_i+1/2^n/Δ t =
m/Δ x{ (ρ_i+1^n)^m-2( ρ_i+3/2^n u_i+3/2^n * - ρ_i+1/2^n u_i+1/2^n */Δ x - ρ_i+1^n G_i^n)
- (ρ_i^n)^m-2( ρ_i+1/2^n u_i+1/2^n * - ρ_i-1/2^n u_i-1/2^n */Δ x - ρ_i^n G_i^n)
},
where G_i^n≈ G(x_i, nΔ t) and the half grid values of ρ are taken as
ρ_i+1/2^n = ρ_i^n + ρ_i+1^n/2.
In the second step of (<ref>), we use a central scheme for the discretization. More specifically,
ρ_i^n+1-ρ_i^n/Δ t +
F_i+1/2^n - F_i-1/2^n/Δ x = ρ_i^n+1 G_i^n,
where the flux is given by
F_i± 1/2^n = 1/2[ ρ^Ln u^n* + ρ^R n u^n* - |u^n*| (ρ^Rn - ρ^Ln)]_i± 1/2,
and ρ_i± 1/2^Ln or ρ_i± 1/2^Rn
are edge values constructed as below.
On the cell [x_i-1/2, x_i+1/2], let
ρ_i^n(x) ≈ρ_i^n + (∂_xρ)_i^n (x-x_i).
At the interface x_i+1/2, the two approximations are given from the left or from the right, i.e.,
ρ_i+1/2^Ln = ρ_i^n + Δ x/2(∂_xρ)_i^n, ρ_i+1/2^Rn = ρ_i+1^n - Δ x/2(∂_xρ)_i+1^n,
where (∂_xρ)_i is computed by the minmod limiter <cit.>.
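A minimal sketch of this reconstruction and of the flux F_{i±1/2} above is given below; the function names are ours, boundary interfaces are not handled, and u_star is assumed to hold one value per interior interface.

import numpy as np

def minmod(a, b):
    # Componentwise minmod limiter.
    return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def numerical_flux(rho, u_star, dx):
    # F_{i+1/2} = 1/2 [rho^L u* + rho^R u* - |u*| (rho^R - rho^L)] on the
    # interior interfaces; rho lives on cell centers, u_star on the staggered grid.
    slope = np.zeros_like(rho)
    slope[1:-1] = minmod((rho[1:-1] - rho[:-2]) / dx, (rho[2:] - rho[1:-1]) / dx)
    rho_L = rho[:-1] + 0.5 * dx * slope[:-1]   # left value at interface i+1/2
    rho_R = rho[1:] - 0.5 * dx * slope[1:]     # right value at interface i+1/2
    return 0.5 * (rho_L * u_star + rho_R * u_star - np.abs(u_star) * (rho_R - rho_L))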
In the correction step of (<ref>), the centered difference approximation is employed, i.e.,
u_i+1/2^n+1 = -m/m-1(ρ_i+1^n+1)^m-1 - (ρ_i^n+1)^m-1/Δ x.
For the high-dimensional cases, the extension is straightforward and is thus omitted in this paper. Readers may refer to <cit.> for the explicit construction of the 2D schemes.
9
araujo2004history Araujo, R.P., McElwain, D.L.S. A history of the study of solid tumour growth: The contribution of mathematical modelling. Bull. Math. Biol. 66, 1039–1091 (2004).
benilan1996singular Bénilan P, Igbida N. Singular limit of perturbed nonlinear semigroups[J]. Comm. Appl. Nonlinear Anal, 1996, 3(4): 23-42.
byrne2006modelling Byrne HM, Alarcon T, Owen MR, Webb SD, Maini PK. Modelling aspects of cancer dynamics: a review. Philos Trans A Math Phys Eng Sci. 2006 Jun 15;364(1843):1563-78.
roose2007mathematical Roose T, Chapman S J, Maini P K. Mathematical models of avascular tumor growth[J]. SIAM review, 2007, 49(2): 179-208.
cristini2003nonlinear Cristini V, Lowengrub J, Nie Q. Nonlinear simulation of tumor growth. J Math Biol. 2003 Mar;46(3):191-224.
cristini2010multiscaleCristini, V., & Lowengrub, J. (2010). Multiscale Modeling of Cancer: An Integrated Experimental and Mathematical Modeling Approach. Cambridge: Cambridge University Press.
cristini2017introduction Cristini, V., Koay, E., & Wang, Z. (2016). An Introduction to Physical Oncology: How Mechanistic Mathematical Modeling Can Improve Cancer Therapy Outcomes (1st ed.). Chapman and Hall/CRC.
cruz2006applications Cruz J A, Wishart D S. Applications of machine learning in cancer prediction and prognosis[J]. Cancer informatics, 2006, 2: 117693510600200030.
dashti2017bayesian Dashti M, Stuart A M. The Bayesian approach to inverse problems[M]//handbood of uncertainty quantification. Springer, Cham, 2017: 311-428.
david2021free David N, Perthame B. Free boundary limit of a tumor growth model with nutrient[J]. Journal de Mathématiques Pures et Appliquées, 2021, 155: 62-82.
david2022convergence David N, Debiec T, Perthame B. Convergence rate for the incompressible limit of nonlinear diffusion–advection equations[J]. Annales de l'Institut Henri Poincaré C, 2022.
dou2022modelingDou X, Liu J G, Zhou Z. Modeling the autophagic effect in tumor growth: a cross diffusion model and its free boundary limit[J]. preprint, 2020.
falco2023quantifying Falcó C, Cohen D J, Carrillo J A, et al. Quantifying tissue growth, shape and collision via continuum models and Bayesian inference[J]. arXiv preprint arXiv:2302.02968, 2023.
feng2023tumor Feng Y, Tang M, Xu X, et al. Tumor boundary instability induced by nutrient consumption and supply[J]. Zeitschrift für angewandte Mathematik und Physik, 2023, 74(3): 107.
friedlander2002handbook Friedlander, Susan, and Denis Serre, eds. Handbook of mathematical fluid dynamics. Elsevier, 2002.
friedman2001symmetry Friedman A, Reitich F. Symmetry-breaking bifurcation of analytic solutions to free boundary problems: an application to a model of tumor growth[J]. Transactions of the American Mathematical Society, 2001, 353(4): 1587-1634.
friedman2008stability Friedman A, Hu B. Stability and instability of Liapunov-Schmidt and Hopf bifurcation for a free boundary problem arising in a tumor model[J]. Transactions of the American Mathematical Society, 2008, 360(10): 5291-5342.
garckehilliard Garcke H, Lam K F, Sitka E, et al. A Cahn–Hilliard–Darcy model for tumour growth with chemotaxis and active transport[J]. Mathematical Models and Methods in Applied Sciences, 2016, 26(06): 1095-1148.
greenspan1976growth Greenspan H P. On the growth and stability of cell cultures and solid tumors[J]. Journal of theoretical biology, 1976, 56(1): 229-242. https://doi.org/10.1016/S0022-5193(76)80054-9
guillen2022hele Guillen N, Kim I, Mellet A. A Hele-Shaw limit without monotonicity[J]. Archive for Rational Mechanics and Analysis, 2022, 243(2): 829-868.
he2022incompressible He Q, Li H L, Perthame B. Incompressible limits of Patlak-Keller-Segel model and its stationary state[J]. arXiv preprint arXiv:2203.13709, 2022.
igbida2021a Igbida N. L^ 1- Theory for Incompressible Limit of Reaction-Diffusion Porous Medium Flow with Linear Drift[J]. arXiv preprint arXiv:2112.10411, 2021.
igbida2021b Igbida N. L^ 1-Theory for reaction-diffusion Hele-Shaw flow with linear drift[J]. arXiv preprint arXiv:2105.00182, 2021.
jacobs2022tumor Jacobs M, Kim I, Tong J. Tumor growth with nutrients: Regularity and stability[J]. arXiv preprint arXiv:2204.07572, 2022.
kahle2020parameter Kahle C, Lam K F. Parameter identification via optimal control for a Cahn–Hilliard-chemotaxis system with a variable mobility[J]. Applied Mathematics & Optimization, 2020, 82(1): 63-104.
kahle2019bayesian Kahle C, Lam K F, Latz J, et al. Bayesian Parameter Identification in Cahn–Hilliard Models for Biological Growth[J]. SIAM/ASA Journal on Uncertainty Quantification, 2019, 7(2): 526-552.
kim2016free Kim I C, Perthame B, Souganidis P E. Free boundary problems for tumor growth: a viscosity solutions approach[J]. Nonlinear Analysis, 2016, 138: 207-228.
kim2018porous Kim I, Požár N. Porous medium equation to Hele-Shaw flow with general initial density[J]. Transactions of the American Mathematical Society, 2018, 370(2): 873-909.
kostelich2011accurate Kostelich E J, Kuang Y, McDaniel J M, et al. Accurate state estimation from uncertain data and models: an application of data assimilation to mathematical models of human brain tumors[J]. Biology direct, 2011, 6(1): 1-20.
kourou2015machine Kourou K, Exarchos T P, Exarchos K P, et al. Machine learning applications in cancer prognosis and prediction[J]. Computational and structural biotechnology journal, 2015, 13: 8-17.
lipkova2019personalized Lipkova J, Angelikopoulos P, Wu S, et al. Personalized radiotherapy design for glioblastoma: integrating mathematical tumor models, multimodal scans, and Bayesian inference[J]. IEEE transactions on medical imaging, 2019, 38(8): 1875-1884.
jianguo2018 Liu, Jian-Guo; Tang, Min; Wang, Li; Zhou, Zhennan. An accurate front capturing scheme for tumor growth models with a free boundary limit. J. Comput. Phys. 364 (2018), 73–94.
jianguo2021 Liu, Jian-Guo; Tang, Min; Wang, Li; Zhou, Zhennan. Toward understanding the boundary propagation speeds in tumor growth models. SIAM J. Appl. Math. 81 (2021), no. 3, 1052–1076.
liu2014patient Liu Y, Sadowski S M, Weisbrod A B, et al. Patient specific tumor growth prediction using multimodal images[J]. Medical image analysis, 2014, 18(3): 555-566.
lowengrub2009nonlinear Lowengrub JS, Frieboes HB, Jin F, Chuang YL, Li X, Macklin P, Wise SM, Cristini V. Nonlinear modelling of cancer: bridging the gap between cells and tumours. Nonlinearity. 2010;23(1):R1-R9.
lu2020complex Lu MJ, Liu C, Lowengrub J, et al. Complex far-field geometries determine the stability of solid tumor growth with chemotaxis[J]. Bulletin of mathematical biology, 2020, 82(3): 1-41.
lu2022nonlinear Lu MJ, Hao W, Liu C, et al. Nonlinear simulation of vascular tumor growth with chemotaxis and the control of necrosis[J]. Journal of Computational Physics, 2022, 459: 111153.
nolen2012multiscale Nolen J, Pavliotis G A, Stuart A M. Multiscale modelling and inverse problems[J]. Numerical analysis of multiscale problems, 2012: 1-34.
perthame2014hele Perthame B, Quirós F, Vázquez J L. The Hele–Shaw asymptotics for mechanical models of tumor growth[J]. Archive for Rational Mechanics and Analysis, 2014, 212: 93-127.
perthame2016some Perthame B. Some mathematical models of tumor growth[J]. Université Pierre et Marie Curie-Paris, 2016, 6. https://www.ljll.math.upmc.fr/perthame/cours-M2.pdf
pham2018nonlinear Pham, K., Turian, E., Liu, K. et al. Nonlinear studies of tumor morphological stability using a two-fluid flow model. J. Math. Biol. 77, 671–709 (2018). https://doi.org/10.1007/s00285-018-1212-3
selvanambi2020lung Selvanambi R, Natarajan J, Karuppiah M, et al. Lung cancer prediction using higher-order recurrent neural network based on glowworm swarm optimization[J]. Neural Computing and Applications, 2020, 32: 4373-4386.
subramanian2020did Subramanian S, Scheufele K, Mehl M, et al. Where did the tumor start? An inverse solver with sparse localization for tumor growth models[J]. Inverse problems, 2020, 36(4): 045006.
villani2002review Villani C. A review of mathematical topics in collisional kinetic theory[J]. Handbook of mathematical fluid dynamics, 2002, 1(71-305):3-8.
weinan2011principles Weinan E. Principles of multiscale modeling[M]. Cambridge University Press, 2011.
zhang2019spatio Zhang L, Lu L, Wang X, et al. Spatio-temporal convolutional LSTMs for tumor growth prediction by learning 4D longitudinal patient data[J]. IEEE transactions on medical imaging, 2019, 39(4): 1114-1126.
CF12 Bessemoulin-Chatard M and Filbet F, A finite volume scheme for nonlinear degenerate parabolic equations, SIAM J. Sci. Comput., 2012, 34: B559-B583.
|
http://arxiv.org/abs/2306.08375v1
|
20230614090639
|
Verification of NP-hardness Reduction Functions for Exact Lattice Problems
|
[
"Katharina Kreuzer",
"Tobias Nipkow"
] |
cs.CC
|
[
"cs.CC",
"68V20",
"F.2.m"
] |
K. Kreuzer and T. Nipkow
Technical University of Munich
Boltzmannstr. 3, 85748 Garching, Germany
Verification of NP-hardness Reduction Functions for Exact Lattice Problems
This work was supported by the Research Training Group GRK 2428 CONVEY of the
German Research Council (DFG).
Katharina Kreuzer0000-0002-4621-734X
Tobias Nipkow0000-0003-0730-515X
July 31, 2023
========================================================================================================================================================================================
This paper describes the formal verification of NP-hardness reduction functions
of two key problems relevant in algebraic lattice theory:
the closest vector problem
and the shortest vector problem, both in the infinity norm. The formalization
uncovered a number of problems with the existing proofs in the literature.
The paper describes how these problems were corrected in the formalization.
The work was carried out in the proof assistant Isabelle.
§ INTRODUCTION
In recent years, algebraic lattices have received increasing attention for their use in post-quantum cryptography.
Algebraic lattices are additive, discrete subgroups of ℝ^n, i.e. a set of points in ℝ^n with certain structures. One can also define lattices over finite fields, rings or modules as used in many modern post-quantum crypto systems such as the CRYSTALS suites, NTRU and Saber.
Two problems form the very basis for computationally hard problems on lattices, namely the closest vector problem (CVP) and the shortest vector problem (SVP).
Given a finite set of basis vectors in ℝ^n, the set of all linear combinations with integer coefficients forms a lattice. In optimization form, the SVP asks for the shortest vector in the lattice and the CVP asks for the lattice vector closest to some given target vector, both with respect to some given norm.
When working over the reals, the p-norm (for p≥ 1) is defined as (∑_i |x_i|^p)^1/p.
The most common examples are the Euclidean norm ‖ x‖_2 and the infinity norm ‖ x‖_∞ = max_i { |x_i| }, which is the limit for p→∞.
We have formalized, corrected and verified a number of NP-hardness proofs from the literature, uncovering a number of mistakes along the way.
The first NP-hardness proof of the CVP and SVP in infinity norm is due to van Emde-Boas <cit.>.
For other norms (especially for the Euclidean norm), there is only a randomized reduction for the NP-hardness of the SVP so far <cit.>. For the CVP, NP-hardness has been shown in any p-norm for p≥ 1. One exemplary proof can be found in the book by Micciancio and Goldwasser <cit.>.
The CVP and SVP were the starting point for lattice-based post-quantum cryptography <cit.>. Moreover, the relevance of these problems can also be seen from the rich literature on approximation results. For example, the LLL-algorithm by Lenstra, Lenstra and Lovász <cit.> gives a polynomial-time algorithm for lattice basis reduction which solves integer linear programs in fixed dimensions. Using this reduced basis, one can find good approximations to the CVP using Babai's algorithm <cit.> for certain approximation factors.
Still, for arbitrary dimensions, the problem remains NP-hard.
Further approximation results for the CVP, SVP and integer programming can be found elsewhere <cit.>.
These approximation problems are used in cryptography.
However, we will focus on the exact CVP and SVP in this paper.
A number of more basic NP-hardness proofs have been formalized in several theorem provers so far. For example, there are formalizations of the Cook-Levin Theorem in Coq <cit.> and Isabelle <cit.>. Formalizing Karp's 21 NP-hard problems (including the Subset Sum and Partition Problems assumed to be NP-hard in this paper) in Isabelle is an ongoing project.
§.§ Contributions
In this paper we present NP-hardness proofs of the CVP and SVP in infinity norm that have been verified in a proof assistant.
We roughly follow the book by Micciancio and Goldwasser <cit.> and the report by van Emde-Boas <cit.>. However, many problems with the original proofs were encountered during the formalization efforts. We will have a look at different approaches and their advantages or problems.
We also verified the proof of NP-hardness of the CVP for any finite p ≥ 1 from the book by Micciancio and Goldwasser.
This verification did not uncover any problems with the informal proof. Thus we do not discuss it in detail.
These formalizations were carried out with the help of the proof assistant https://isabelle.in.tum.de/index.htmlIsabelle <cit.> and are available online <cit.>. They comprise 5200 lines.
To the authors' knowledge, they are the first formalizations of hardness proofs for lattice problems.
Because of the importance of the SVP and CVP and the problems in existing proofs, we consider our proofs a contribution to the foundations of verified cryptography. However, we do not claim that these hardness results directly imply quantum-resistance of any lattice-based cryptosystems.
§.§ Overview
The paper is structured as follows.
Section <ref> introduces the foundations. The rest of the paper is dedicated to the proofs, which are phrased as the following two polynomial time reduction chains:
* Subset Sum ≤_p CVP
* Partition ≤_p Bounded Homogeneous Linear Equations ≤_p SVP
Subset Sum and Partition are famous fundamental problems whose NP-hardness has been proved many times in the literature and which we take for granted.
Section <ref> presents the reduction of Subset Sum to the CVP. Differences between our formalization and the book by Micciancio and Goldwasser <cit.> are presented with examples that demonstrate problems with the original proof. Moreover, an example is given why the generalization to the SVP given in <cit.> does not work.
Therefore we turn to the early proof of NP-hardness of the SVP by van Emde Boas <cit.>.
This proof uses the Bounded Homogeneous Linear Equations problem (BHLE) which is introduced in Section <ref>.
The formalization of this proof is one of the major achievements in this paper. It posed a significant challenge since it often relied on human intuition and had to be restructured appropriately to allow a formal proof. The main proof steps are explained and difficulties in the formalization effort are described.
This proof only works in infinity norm and we explain why.
In Section <ref>, the reduction from BHLE to the SVP is given. Again, this proof was quite elaborate to formalize as there were inaccuracies and a lot of intuition was involved. Differences between the formal proof and <cit.> are explained by examples.
In Section <ref>, we have a quick look at the reduction proof for the CVP in p-norm (for finite p≥ 1).
In the case of the SVP there only exists a randomized hardness proof in Euclidean norm by Ajtai <cit.> up to now.
Finally, the time complexity of the reduction functions are considered in Section <ref>.
We conclude the paper with a short summary and outlook.
§ FOUNDATIONS
This section introduces known foundations mainly to fix the terminology and notation: problem reductions, lattices, and the combinatorial problems under consideration (CVP, SVP, Partition and Subset Sum).
§.§ Problem Reductions
Formally, a decision problem is given by the set of YES-instances P and a set Γ of problem instances, where P⊆Γ.
We often associate the decision problem with the set of YES-instances, when the instance set Γ is obvious and not explicitly defined.
In this paper we will often phrase problems informally (e.g. “decide if p is prime”) rather than give them explicitly as sets.
For example, the decision problem “decide if a natural number p is prime” will be formalized in the following way: the set of problem instances is Γ = ℕ (in Isabelle these are all elements of type nat); and the YES-instances are P = { p∈ℕ| p is prime} (in Isabelle this is a set of type nat set).
Let A ⊆Γ and B ⊆Δ be two problems.
A function f: Γ→Δ is a reduction from A to B if it fulfills the following properties:
* ∀ a∈Γ. a∈ A ⇔ f(a) ∈ B
* f can be computed in polynomial time
If A is NP-hard, a reduction from A to B proves NP-hardness of B.
In this paper we present reduction functions informally (e.g. “an a is reduced to a b that is constructed like this”) and often with copious amounts of “…” to construct vectors etc. Of course in the formalization these reduction functions are spelled out in complete detail.
Since all operations used in the reduction functions in this paper are elementary, the polynomial time property has not been formalized but is briefly discussed in Section <ref>.
The focus of our paper are the proofs a ∈ A ⇔ f(a) ∈ B.
§.§ Lattice-based Computational Problems
To have a better understanding, we will first introduce lattices as such. Lattices are a structured set of points. They form an additive, discrete subgroup of ℝ^n. Formally, we define the following.
Let A = {a_1,…,a_n}⊂ℝ^n be a set of linearly independent vectors. Then the integer span of A forms a lattice ℒ, that is:
ℒ = {∑_i=1^n c_ia_i | c_i∈ℤ}
Examples of lattices in ℝ^n can be found in Appendix <ref>.
In the rest of the text and in the formalization we restrict to finite bases over ℤ (instead of ℝ), simply for computability reasons. Of course bases over ℚ can be transformed into bases over ℤ by scaling all basis vectors.
The starting point of most known hard problems on lattices are the shortest vector problem and the closest vector problem. They are defined below (as usual in decision and not in optimization form).
The lattice ℒ⊆ℤ^n is assumed to be generated by a finite basis in ℤ^n.
Given a lattice ℒ, a vector b∈ℤ^n and an estimate k, decide whether there exists a vector v∈ℒ such that
‖ v-b‖≤ k
Given a lattice ℒ and an estimate k, determine whether there exists a vector v∈ℒ such that
‖ v‖≤ k and v≠ 0
Examples of CVP and SVP instances can be found in Appendix <ref>.
§.§ Partition and Subset Sum Problems
Recall that we plan to prove NP-hardness of the CVP and SVP in the case of the infinity norm by reducing the well-studied NP-complete Subset Sum and Partition problems to the CVP and SVP.
We state the definitions.
Given a finite list of integers a_1,…,a_n, does there exist a partition of {1… n} into subsets I and {1… n}∖ I such that
∑_i∈ I a_i = ∑_i ∈{1… n}∖ I a_i
The Partition problem can be seen as a special case of the Subset Sum problem.
Given a finite list of integers a_1,…, a_n and an integer s, decide whether there exists a subset S of {1… n} such that
∑_i∈ S a_i = s
§.§ Notation
Throughout the paper we use traditional mathematical notation,
in particular the graphical “...”. The formal Isabelle notation is by necessity more verbose (and precise).
Our formalization employs both lists and vectors as a type for finite sequences and converts between them where necessary.
For reasons of presentation we blur this distinction in the paper.
§ CVP
In this section, we formalize the proof of the NP-hardness of the CVP in the infinity norm along the lines of <cit.> by reducing Subset Sum to the CVP.
An instance a_1,…,a_n,s of Subset Sum
is mapped to the following instance of the CVP:
ℒ =
[ a_1 ⋯ a_n; a_1 ⋯ a_n; 2 0; ⋱ ; 0 2; ]·ℤ^n
b =
[ s-1; s+1; 1; ⋮; 1; ] k=1
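For illustration, this construction can be written down directly; the following Python sketch (function name ours) builds the basis matrix of ℒ, the target vector b and the bound k from a Subset Sum instance.

import numpy as np

def subset_sum_to_cvp(a, s):
    # Build the CVP instance (basis B of the lattice, target b, bound k) from a
    # Subset Sum instance (a_1, ..., a_n, s) following the construction above.
    a = np.asarray(a, dtype=int)
    n = len(a)
    B = np.vstack([a, a, 2 * np.eye(n, dtype=int)])                  # (n+2) x n
    b = np.concatenate(([s - 1, s + 1], np.ones(n, dtype=int)))
    return B, b, 1                                                   # k = 1

# For a Subset Sum solution x in {0,1}^n, B @ x - b equals (1, -1, 2x_1-1, ..., 2x_n-1),
# whose infinity norm is exactly 1, i.e. the bound k is met.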
We proved the following theorem:
The above mapping is a reduction from the Subset Sum problem to the CVP (in infinity norm).
This implies that the CVP (in infinity norm) is an NP-hard problem.
The reduction function used by Micciancio and Goldwasser <cit.> actually looks a bit different. The image of a_1,…,a_n,s would be
B = [ a_1 ⋯ a_n; 2 0; ⋱ ; 0 2; ] ℒ =
B·ℤ^n
b =
[ s; 1; ⋮; 1; ] k=1
However, the proof in <cit.> with this reduction function works only for p<∞. It goes along the lines of the following idea:
Take k = n^1/p. In the case of p = ∞, we get
k = lim_p→∞ n^1/p = 1.
Then we can formulate the following equality (equation (3.5) in <cit.>):
‖ Bx-b‖ ^p_p = | ∑ _i=1^n a_ix_i-s | ^p + ∑_i=1^n |2x_i-1|^p
Given a YES-instance a_1,…,a_n,s of Subset Sum, there exists a vector x = (x_1,…,x_n) ∈{0,1}^n, such that ∑ _i=1^n a_ix_i-s = 0 and |2x_i-1| = 1. Then ‖ Bx-b‖ ^p_p = n which proves this case.
Given a YES-instance of the CVP defined by ℒ, b and k that is the image of a_1,…,a_n,s under the reduction function as in (<ref>), we get ‖ Bx-b‖^p_p≤ n. Since all values are integers, we have |2x_i-1| ≥ 1. It follows that ∑ _i=1^n a_ix_i-s = 0 and |2x_i-1| = 1. Thus, we can deduce that a_1,…, a_n, s was indeed a YES-instance of Subset Sum.
The major problem we encountered was that this proof works fine for p<∞ but for p=∞, the sum in (<ref>) becomes a maximum instead.
The equation then reads
‖ Bx - b‖_∞ = max(| ∑ _i=1^n a_ix_i-s |, |2x_i-1| for 1≤ i ≤ n)
This invalidates the arguments in the proof since ∑ _i=1^n a_ix_i-s can now take any value in {-1,0,1}. The constraints are too lax to ensure equality to zero.
A solution was to alter the matrix and target vector and add another entry. The matrix and target vector we used are
given in equation (<ref>).
The alteration to s-1 and s+1 forces a linear combination of the a_i to be exactly s in the hardness proof, since
|∑_i c_i a_i - (s± 1)|≤ 1.
After we communicated with Daniele Micciancio, one of the authors of <cit.>, he suggested using a constant c > 1 and the generating instance
ℒ =
[ c· a_1 ⋯ c· a_n; 2 0; ⋱ ; 0 2; ]·ℤ^n
b =
[ c· s; 1; ⋮; 1; ] k=1
This solves the problem as well and can be implemented using e.g. c=2. This technique is described later in the book <cit.> when trying to explain the NP-hardness proof for the SVP in the infinity norm.
§.§ Towards the SVP
The authors of <cit.> argue that the reduction argument for the SVP can be obtained by generating an instance of the SVP from the Subset Sum instance a_1,…,a_n,s in the following way. For c>1, e.g. c=2, take
B =
[ c· a_1 ⋯ c· a_n c· s; 2 0 1; ⋱ 1; 0 2 1; ] ℒ =
B·ℤ^n+1 k=1
The authors claim that every shortest vector in the image of the reduction function has -1 as last coefficient. For example, let a YES-instance of the SVP be defined by the generating matrix B of the lattice
and let x= (x_1,…,x_n,-1)^T be the coefficients such that B x is a shortest vector.
Then we know that
‖ Bx‖_∞ =
| |
[ c· (x_1 a_1 +… + x_n a_n - s); 2x_1 - 1; ⋮; 2x_n - 1; ]| |_∞≤ 1
Since c>1, it follows that x_1 a_1 +… + x_n a_n - s = 0, which yields a solution for the given Subset Sum instance a_1,…,a_n,s.
However, this reduction does not always work as the following example shows:
Given the Subset Sum instance
(a_1,a_2,a_3,s) = (1,1,1,1).
This is a YES-instance, since a solution is given by
x_1=1, x_2=0 and x_3=0.
The basis matrix of the corresponding SVP would be (with c>1)
B =
[ c c c c; 2 0 0 1; 0 2 0 1; 0 0 2 1; ]
Take for example the vector v = B· (-1,-1,-1,3)^T = (0,1,1,1)^T.
It has infinity norm 1 and is thus a shortest vector in the lattice generated by B.
However, this vector has the last coefficient 3 and not -1, even though it clearly is a shortest vector of the lattice given by B.
The corresponding scaled “solution” for Subset Sum would be (1/3,1/3,1/3,-1) but since only integer values are allowed in the solution space, this is not a solution in our sense.
We consider another example. Let the Subset Sum instance be a_1' = 3, s' = 1. We can easily see that this is not a YES-instance, i.e. there exists no solution. Still, the corresponding SVP instance given via the reduction function is generated by the matrix
B' =
[ c· 3 c· 1; 2 1; ]
In this case the coefficients (-1,3)^T yield a shortest vector in the lattice spanned by B', since
| | B'
[ -1; 3; ]| | _∞ =
| |
[ 0; 1; ]| |_∞≤ 1
Thus, B' defines a YES-instance of the SVP, but the original Subset Sum instance is not a YES-instance.
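Both counterexamples can be checked mechanically; a small sketch with the concrete choice c=2 (any c>1 would do, and the variable names are ours):

import numpy as np

c = 2
B = np.array([[c, c, c, c], [2, 0, 0, 1], [0, 2, 0, 1], [0, 0, 2, 1]])
print(B @ np.array([-1, -1, -1, 3]))     # (0, 1, 1, 1): infinity norm 1, last coefficient 3
B2 = np.array([[3 * c, c], [2, 1]])
print(B2 @ np.array([-1, 3]))            # (0, 1): a shortest vector, yet (3, 1) is a NO-instance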
In <cit.>, it is stated for the infinity norm that any shortest vector yields a solution for the Subset Sum Problem, which is not the case in these examples: we cannot ensure that a shortest vector always has -1 as a last coordinate.
Although the proof in <cit.> does not work out as expected, there is still the reduction proof by van Emde-Boas <cit.>
which reduces a problem called the Bounded Homogeneous Linear Equation problem to the SVP in infinity norm. This will be discussed in the next two sections.
§ BOUNDED HOMOGENEOUS LINEAR EQUATIONS
A technical report by Peter van Emde-Boas <cit.> gives another reduction proof for the NP-hardness of the SVP in infinity norm.
The author first reduces the Partition Problem to a problem called Bounded Homogeneous Linear Equation (BHLE) which is then reduced to the SVP.
Given a finite vector of integers b ∈ℤ^n and a positive integer k, decide whether there exists an x∈ℤ^n∖{0} with ‖ x‖_∞≤ k such that
b,x = 0
We have verified a reduction from Partition to BHLE, and thus BHLE is NP-hard.
There is a reduction from Partition to BHLE in infinity norm.
The proof is carefully engineered and rather intricate. Differences to the original proof and problems encountered during the formalization are:
* Our formal proof has a different structure than the proof in the technical report <cit.>.
Indeed, the technical report first proves the reduction of a weaker form of Partition to BHLE and then argues that “omitting” an element yields the desired result as it adds stricter constraints. In the formalization we skip this intermediate step and directly prove the existence of an appropriate reduction function.
* Steps that seem trivial in the technical report often require a long formal proof. What can be reasoned by intuition in a pen-and-paper proof has to be elaborated in the formal proof. Intuition is also sometimes used for hand-waving over small gaps or imprecisions.
* Indexing vectors and lists has been a problem in the formalization. In pen-and-paper proofs, one can argue easily about “omitting” an element of a list even though this is imprecise and often misuses the notation. In the formalization one cannot simply skip an index. All indexing functions in the formalization have to be total. “Omitting” an element can only be solved by re-indexing and re-structuring the lists in the proof.
* Numbers are interpreted in different number systems during the proof.
In contrast to the original proof, the formalization has to explicitly state the digits for a change of basis and show equivalence. This leads to verbose and elaborate proofs.
To make proofs easier, we use the concrete basis d=5 instead of an unspecified basis d>4 as in <cit.>.
Furthermore, the number M must use the absolute values of the a_i (omission in the definition of M in <cit.>). The formal definition is stated below.
* The proof involved many arguments about manipulations of huge sums.
Working with huge sums entails very large proof states where the existing proof automation mostly failed on.
These proof states require detailed (but still readable) proofs and occasional manual instantiation of theorems.
Another possible solution to get smaller proof states is to introduce local abbreviations for subterms.
Let us have a look at the proof and its difficulties in the formalization in more detail.
We start from a Partition instance a = a_1,…, a_n . Note that we ignore the trivial case n=0 in this presentation (but deal with it in the formal proofs) — this means n-1 ≥ 0.
We reduce a to a BHLE instance b as follows:
* Define
M = 2·(∑_i=1^n |a_i|) + 1
* For 1≤ i < n generate a 5-tuple
b_i,1 = a_i + M · (5^4i-4 + 5^4i-3 + 5^4i-1)
b_i,2 = M · (5^4i-3 + 5^4i)
b_i,3 = M · (5^4i-4 + 5^4i-2)
b_i,4 = a_i + M · (5^4i-2 + 5^4i-1 + 5^4i)
b_i,5 = M · (5^4i-1)
b_i = b_i,1,b_i,2,b_i,4,b_i,5,b_i,3
Note that b_i,3 has moved to the last position in b_i.
* For i=n generate only a 4-tuple:
b_n,1 = a_n + M · (5^4n-4 + 5^4n-3 + 5^4n-1)
b_n,2 = M · (5^4n-3 + 1)
b_n,4 = a_n + M · (5^4n-2 + 5^4n-1 + 1)
b_n,5 = M · (5^4n-1)
b_n = b_n,1,b_n,2,b_n,4,b_n,5
Note that
* b_n,3 is omitted from b_n to restrict the constraints necessary for the proof and
* that in b_n,2 and b_n,4 the last summand changes to a +1 in comparison to the other b_i,2 and b_i,4.
In summary, the entry b_i,3 is uniformly in the last position in the b_i but omitted from the final b_n.
The Partition instance a of length n is reduced to a vector b of length 5n-1:
b = (b_1,…,b_n-1,b_n)
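For illustration, the reduction function can be spelled out directly; the following Python sketch (function name ours) builds b from a according to the definitions above. Python's arbitrary-precision integers keep the powers of 5 exact.

def partition_to_bhle(a):
    # Build the BHLE instance b (a vector of length 5n-1) from a Partition
    # instance a = [a_1, ..., a_n] following the construction above; n >= 1.
    n = len(a)
    M = 2 * sum(abs(x) for x in a) + 1
    b = []
    for i in range(1, n):                      # the first n-1 blocks (5-tuples)
        ai = a[i - 1]
        b1 = ai + M * (5**(4*i - 4) + 5**(4*i - 3) + 5**(4*i - 1))
        b2 = M * (5**(4*i - 3) + 5**(4*i))
        b3 = M * (5**(4*i - 4) + 5**(4*i - 2))
        b4 = ai + M * (5**(4*i - 2) + 5**(4*i - 1) + 5**(4*i))
        b5 = M * (5**(4*i - 1))
        b += [b1, b2, b4, b5, b3]              # b_{i,3} is moved to the last position
    b1 = a[-1] + M * (5**(4*n - 4) + 5**(4*n - 3) + 5**(4*n - 1))
    b2 = M * (5**(4*n - 3) + 1)
    b4 = a[-1] + M * (5**(4*n - 2) + 5**(4*n - 1) + 1)
    b5 = M * (5**(4*n - 1))
    return b + [b1, b2, b4, b5]                # the last block is a 4-tuple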
The NP-hardness proof now follows in three steps:
* We need to show an auxiliary lemma.
* We show that a YES-instance of Partition is reduced to a YES-instance of BHLE.
* We show that the pre-image of a YES-instance of BHLE is indeed a YES-instance in Partition.
§.§ Auxiliary Lemma
As a first step, the proof needs a short auxiliary lemma from number theory.
Let x, y, c ∈ℤ^n and M be an integer.
Assume that M > ∑_i=1^n |x_i| and that |c_i|≤ 1 for all 1 ≤ i ≤ n. Furthermore, let the following equation hold:
∑_i=1^n c_i · (x_i + M · y_i) = 0
Then we have
c, x = 0 and c, y = 0
In this lemma, we can reinterpret x_i + M · y_i from (<ref>) as a number in basis M with lowest digit x_i. Even with a coefficient c_i, the lowest digit in basis M has to be zero, as well as the rest. By splitting off the lowest digits consecutively, we can show that indeed all digits in basis M have to equal zero.
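The lemma can also be sanity-checked by brute force on small random instances; the following sketch (parameters and function name ours) enumerates all coefficient vectors c∈{-1,0,1}^n and asserts the conclusion.

import itertools, random

def check_lemma(trials=200, n=3, xmax=3, ymax=3):
    # Sample x, y and an integer M > sum |x_i| at random, enumerate all c in
    # {-1,0,1}^n with <c, x + M*y> = 0, and assert that <c, x> = 0 and <c, y> = 0.
    for _ in range(trials):
        x = [random.randint(-xmax, xmax) for _ in range(n)]
        y = [random.randint(-ymax, ymax) for _ in range(n)]
        M = sum(abs(v) for v in x) + random.randint(1, 5)
        for c in itertools.product([-1, 0, 1], repeat=n):
            if sum(ci * (xi + M * yi) for ci, xi, yi in zip(c, x, y)) == 0:
                assert sum(ci * xi for ci, xi in zip(c, x)) == 0
                assert sum(ci * yi for ci, yi in zip(c, y)) == 0
    return True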
§.§ a ∈ Partition ⟹ b ∈ BHLE
This direction is quite easy.
Let a_1,…,a_n be a YES-instance of partition with partitioning set I.
We will show that the following vector x is a solution to the corresponding BHLE:
x = (x_1,…,x_n-1,x_n)
x_i =
(1,-1,0,-1,0) if i∈ I and n-1 ∈ I
(0,0,-1,1,1) if i∈ I and n-1∉ I
(0,0,-1,1,1) if i∉ I and n-1 ∈ I
(1,-1,0,-1,0) if i∉ I and n-1∉ I
for 1 ≤ i < n
x_n = (1,-1,0,-1)
We have to show that ⟨ b, x⟩ = 0.
This is proven by plugging in the definitions and rearranging terms in the sum of the scalar product such that they cancel out.
As a last step in the proof, we need to show that ‖ x‖_∞≤ 1. For the infinity norm this is quite easy.
However, it would not be true for other norms: the vector x has exactly 3n nonzero entries, all of absolute value 1, so for p≥ 1 and p<∞ we have for n≥ 1:
‖ x‖_p = (3n)^{1/p} > 1
Thus, the chosen solution x only works in the infinity norm.
The explicit proof can be found in the Appendix <ref>.
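To experiment with this construction, the following Python sketch (again our own illustration, reusing reduce_partition_to_bhle from the earlier sketch) builds x in the branch of the case distinction where the partitioning set I contains the last index n; if it does not, one may swap I with its complement. It then checks ⟨ b, x⟩ = 0 and ‖ x‖_∞ ≤ 1 on a tiny instance.

def bhle_witness(a, I):
    # Build x as above for a partitioning set I (1-based indices) containing n.
    # Illustrative sketch; not the formalization itself.
    n = len(a)
    rest = [i for i in range(1, n + 1) if i not in I]
    assert n in I and sum(a[i - 1] for i in I) == sum(a[i - 1] for i in rest)
    x = []
    for i in range(1, n):
        x += [1, -1, 0, -1, 0] if i in I else [0, 0, -1, 1, 1]
    x += [1, -1, 0, -1]
    return x

# Tiny sanity check: a = (1, 1) with I = {2} is a YES-instance of Partition.
a, I = [1, 1], {2}
b, k = reduce_partition_to_bhle(a)
x = bhle_witness(a, I)
assert sum(bi * xi for bi, xi in zip(b, x)) == 0
assert max(abs(v) for v in x) <= k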
§.§ a ∈ Partition ⟸ b ∈ BHLE
This direction is harder.
Let b be a YES-instance of BHLE. That is, there exists a nonzero x such that
b,x = 0 and ‖ x‖_∞≤ 1.
We have to show that there is a partition I on a_1,…, a_n with ∑_i∈ I a_i = ∑_i∈{1… n}\ I a_i.
The proof idea works as follows. First, we apply the auxiliary lemma and get a constraint on the a_i on the one hand, and a condition on the x_i with coefficients that are powers of 5 on the other hand.
Using this condition on the x_i, we generate equational constraints on the entries of x by looking at the digits in basis 5.
We argue that a number equals zero if and only if all its digits are zero.
The generated equations lead to a good characterisation of x, namely the weight w = x_5(n-1)+1.
From the assumption that ‖ x‖_∞≤ 1, we deduce |w|≤ 1.
Again, this step can only be reasoned in the infinity norm. For other p-norms, this argumentation breaks as we need the property |w|≤ 1 to complete the proof.
Using the value of w, we can construct a partitioning set I with the required property from the equation on the a_i.
The explicit proof can be found in Appendix <ref>.
§ SVP
Knowing that the BHLE is indeed an NP-hard problem, we reduce it to the SVP. Then we can conclude that the SVP in infinity norm is NP-hard.
There is a reduction from BHLE to the SVP in infinity norm.
Again some difficulties were met when formalizing the proof for the above theorem.
First of all, note that the terminology in <cit.>
and nowadays is a bit different. In <cit.>, the shortest vector problem only denotes the shortest vector problem in the Euclidean norm. What we call the shortest vector problem in the infinity norm is named closest vector problem in <cit.>.
To make terminology even more confusing, our understanding of the closest vector problem is called the nearest vector problem in <cit.>.
To make the notation clear, we provide a table for reference in the Appendix <ref>, Figure <ref>.
A more mathematical problem we encountered was that the reduction used in <cit.> is not entirely correct.
The reduction introduces two factors k'=k+1 and k”. These factors need certain properties for the arguments of the reduction proof to go through, but those properties only hold after tweaking the factors a bit to make the whole proof watertight. We will now have a closer look.
Given the BHLE instance b = (b_1,…, b_n) and k, create the following SVP instance:
ℒ =
[ 1 0 0; ⋱ ⋮; 0 1 0; (k+1)· b_1 ⋯ (k+1)· b_n k”; ]·ℤ^n+1
and keep the bound k unchanged,
where k” is the factor in question. In the technical report, we have
k” = 2· (k+1)· (∑_i b_i) +1
The following example however shows that this factor is not enough.
Consider the BHLE instance given by b=(1,-1) and k=1.
This is a YES-instance, since the vector (1,1) yields the expected properties.
Define the following matrices.
B_0 =
[ 1 0 0; 0 1 0; 2 -2 1; ]
B_1 =
[ 1 0 0; 0 1 0; 2 -2 9; ]
B_2 =
[ 1 0 0; 0 1 0; 6 -6 25; ]
The associated SVP instance is the lattice generated by B_0.
Then the vector (0,0,1)^T with infinity norm 1 is a solution to the SVP instance generated by the basis matrix B_0. However, since the last entry is nonzero, it does not yield a solution for BHLE. In contrast to this example, the proof in the technical report claims that for all SVP solutions the last entry must be zero.
The reason why the argument in the technical report breaks at this point is that b_1 + b_2 = 0, which makes k” = 1 very small. One step to prevent this is to use the absolute values of the b_i in k” instead.
The new k”_1 we consider is
k”_1 = 2· (k+1)· (∑_i |b_i|) +1
With this new factor k”_1 we get the generating matrix B_1
and the vector (0,0,1) is no longer a shortest vector.
Still, this is not enough. Consider the same b=(1,-1) as above, but let k=5.
Then we get B_2 as the generating matrix of the SVP lattice.
The vector x=(0,5,1)^T is a shortest vector whose last entry is nonzero. Again it contradicts the proof in the technical report.
The reason this time is the following: the argument that (k+1)·(∑_i=1^n x_i b_i) and k”_1 have different relative sizes fails.
Indeed, we have
||
[ 1 0 0; 0 1 0; 6 -6 25; ]·[ 0; 5; 1; ]||_∞ =
||[ 0; 5; -5; ]||_∞ =
5 ≤ k
We can obtain different relative sizes of (k+1)(∑_i=1^n x_ib_i) and k”_1 by defining
k”_2 = 2·k· (k+1)· (∑_i |b_i|) +1
Now we can make sure that the last entry of a solution to the SVP problem is indeed zero.
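The arithmetic behind the two counterexamples and the corrected factor is easy to reproduce. The following Python sketch (our own; it only rebuilds the basis matrices and recomputes the offending products, it does not enumerate lattices) illustrates why k” and k”_1 fail and how k”_2 is defined:

def svp_basis(b, k, kpp):
    # Basis matrix of the SVP lattice for a BHLE instance (b, k), with the
    # last-row constant kpp standing in for k'', k''_1 or k''_2. Sketch only.
    n = len(b)
    rows = [[1 if j == i else 0 for j in range(n + 1)] for i in range(n)]
    rows.append([(k + 1) * bi for bi in b] + [kpp])
    return rows

def mat_vec(B, x):
    return [sum(r * v for r, v in zip(row, x)) for row in B]

b, k = [1, -1], 1
kpp  = 2 * (k + 1) * sum(b) + 1                      # original k''  -> 1  (matrix B_0)
kpp1 = 2 * (k + 1) * sum(abs(v) for v in b) + 1      # k''_1         -> 9  (matrix B_1)
B0 = svp_basis(b, k, kpp)
assert max(abs(v) for v in mat_vec(B0, [0, 0, 1])) <= k   # (0,0,1) is short for B_0

b2, k2 = [1, -1], 5
kpp1_2 = 2 * (k2 + 1) * sum(abs(v) for v in b2) + 1  # k''_1         -> 25 (matrix B_2)
B2 = svp_basis(b2, k2, kpp1_2)
assert max(abs(v) for v in mat_vec(B2, [0, 5, 1])) <= k2  # (0,5,1) is short for B_2

kpp2 = 2 * k2 * (k2 + 1) * sum(abs(v) for v in b2) + 1    # corrected k''_2 -> 121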
For the proof of Theorem <ref> we consider the reduction given by
ℒ =
[ 1 0 0; ⋱ ⋮; 0 1 0; (k+1)· b_1 ⋯ (k+1)· b_n k”_2; ]_B·ℤ^n+1
with the bound k unchanged,
where B denotes the basis matrix generating the lattice ℒ as given above.
Consider a solution of the SVP with ‖ Bx‖_∞≤ k.
Then we have
Bx= [ 1 0 0; ⋱ ⋮; 0 1 0; - (k+1)· b - k”_2; ]·[ x_1; ⋮; x_n; x_n+1; ]
=
[ x_1; ⋮; x_n; (k+1)(∑_i=1^n x_i b_i) + x_n+1· k”_2; ]
As this yields a solution to the SVP, we get:
|(k+1)(∑_i=1^n x_i b_i) + x_n+1· k”_2|≤ k
Then we calculate, using that |x_i| ≤ k for 1 ≤ i ≤ n (which follows from ‖ Bx‖_∞≤ k and the first n rows of B):
|(k+1)(∑_i=1^n x_i b_i)|
≤ (k+1)(∑_i=1^n |x_i| |b_i|)
≤ (k+1)· k· (∑_i=1^n |b_i|)
Assuming that x_n+1≠ 0, we have
|(k+1)k(∑_i=1^n |b_i|)| < |2· k · (k+1)· (∑_i |b_i|) +1 |
= |k”_2| ≤ |x_n+1· k”_2|
Thus, the two summands indeed have different relative sizes, so one can never cancel out the other. This leads to a contradiction to (<ref>). Therefore, x_n+1=0 must be true and (x_1,…,x_n) constitutes a solution to the BHLE when using k”_2 as in (<ref>).
§ OTHER P-NORMS
Up to now, we have investigated lattice problems under the infinity norm. Even though this yields nice hardness results, in practice the Euclidean norm is used more often. Unfortunately, when considering p-norms things do not play out as nicely. In this section, we assume 1≤ p<∞ whenever we talk about a specific p.
For the CVP, there is a generalisation of the proof for every p-norm in <cit.> which we also formalized.
Let a_1,…,a_n,s be an instance of Subset Sum. The reduction function maps this instance to:
ℒ =
[ a_1 ⋯ a_n; 2 0; ⋱ ; 0 2; ]·ℤ^n
b =
b =
[ s; 1; ⋮; 1; ] k = n^{1/p}
Then the following theorem holds:
The above mapping is a reduction from the Subset Sum problem to the CVP in p-norm.
This implies that the CVP in p-norm is an NP-hard problem.
The outline to the proof is given in Section <ref> after Theorem <ref>. The important difference to the infinity norm is that the bound k scales with the dimension n of the lattice.
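To illustrate the mapping, the following Python sketch (function name and the tiny example are ours) builds the CVP instance and checks, for a YES-instance of Subset Sum, that the lattice vector induced by a solution subset lies within distance k of the target:

def reduce_subset_sum_to_cvp(a, s, p):
    # Map a Subset Sum instance (a_1,...,a_n, s) to a CVP instance (basis, target, k)
    # in the p-norm, following the mapping above. Sketch only.
    n = len(a)
    basis = [list(a)] + [[2 if j == i else 0 for j in range(n)] for i in range(n)]
    target = [s] + [1] * n
    k = n ** (1.0 / p)
    return basis, target, k

# If S solves the Subset Sum instance, the lattice vector with coefficients
# x_i = 1 for i in S and 0 otherwise lies within distance k of the target.
a, s, p = [3, 5, 7], 8, 2
basis, target, k = reduce_subset_sum_to_cvp(a, s, p)
x = [1, 1, 0]                                      # 3 + 5 = 8
v = [sum(basis[r][c] * x[c] for c in range(len(a))) for r in range(len(a) + 1)]
dist = sum(abs(vi - ti) ** p for vi, ti in zip(v, target)) ** (1.0 / p)
assert dist <= k + 1e-9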
For the SVP, there is no known deterministic NP-hardness result in the Euclidean norm, or even any p-norm.
However, Ajtai <cit.> found an interesting alternative which is quite useful for the application in cryptography, namely randomized reductions using polynomial-time probabilistic reduction functions.
In cryptography, these results guarantee the hardness of “average” cases. That is, given an average instance according to a probability distribution, it will most likely be intractable.
§ TIME COMPLEXITY
As stated in Section <ref>, the time complexity of the above reduction functions has not been formalized. However, we give a short explanation of why all reduction functions indeed run in polynomial time.
Subset Sum to CVP:
The reduction function as given in equation (<ref>) creates (n+2)(n+1)+1 values using only memory access or one addition.
Therefore, the time complexity in this case is 𝒪(n^2).
Partition to BHLE:
In this case, the reduction function maps the input a of length n to b as defined in equation (<ref>).
The value k=1 is fixed. Then a is mapped to a vector of length 5n-1.
When calculating the b_i, we need to calculate the value of M as in (<ref>). As we sum over all input values, this lies in 𝒪(n).
Each b_i can then be calculated in 𝒪(n) since it only contains a constant number of additions of the input with fixed cofactors (see (<ref>) - (<ref>)).
Putting the construction of the list and the calculation of the b_i together, we find that the whole reduction function is in 𝒪(n^2).
BHLE to the SVP:
Consider the reduction function as given in equation (<ref>) using the value k”_2 as in (<ref>).
Calculating k”_2 requires n+2 memory accesses which are processed in n+4 arithmetic operations, thus having a time complexity of 𝒪(n).
Every other entry in the matrix is calculated in 𝒪(1), since it requires at most two memory accesses and at most two arithmetic operations.
The input generates (n+1)^2+1 values, of which (n+1)(n+1) are in 𝒪(1) (namely all the zeros and ones, the vector (k+1)· b and the constraint k) and one is calculated in 𝒪(n) (namely k”_2).
Thus, the whole reduction function lies in 𝒪(n^2).
§ OUTLOOK
With this paper, we now have a formal proof for NP-hardness of the CVP and SVP in the infinity norm, as well as a formal proof of the CVP in p-norm (for 1≤ p <∞). In the formalization process, many gaps and imprecisions in the pen-and-paper proofs were fixed. The changes to the original proofs have been elaborated with explanations and examples.
Unfortunately, giving a deterministic reduction proof for the SVP in the p-norm for p<∞ is still an open problem. Under probabilistic assumptions, Ajtai showed NP-hardness of the SVP in the Euclidean norm in <cit.>.
An interesting topic for future work is to develop
a framework
for probabilistic reductions such as in <cit.>.
This will give the foundation to extend formalization of hardness proofs to other problems in lattice theory, especially those used in lattice-based cryptography, such as the Learning with Errors (LWE) Problem, Ring-LWE and Module-LWE.
This will underline the security of many lattice-based crypto systems.
Another topic for future work is to formalize the hardness proofs for approximate versions of the CVP and SVP.
§.§.§ Acknowledgements
We thank Manuel Eberl for continuous support and fruitful discussions. The first author gratefully acknowledges the financial
support of this work by the research training group ConVeY funded by the German Research Foundation under
grant GRK 2428.
§ EXAMPLES OF LATTICES
In Figure <ref> two examples of lattices in ℝ^2 are depicted. The red point is the origin. The two blue arrows show the basis vectors a_1 and a_2 that are linearly independent and span the lattice. Every integer combination of the two blue arrows is a black point, an element of the lattice.
We can see that the grid spanned by the basis vectors is discrete and has some recurring structure. This structure is determined by the basis vectors: the angle between them and their lengths. In Figure <ref>, the angle between the two basis vectors is 90^∘, yielding a rectangular fundamental domain. In Figure <ref>, in contrast, the basis vectors have equal length and an angle of 60^∘ between them, which produces a fundamental domain of an equilateral triangle.
Indeed, the automorphism group of a lattice is a symmetry group, see Conway <cit.>. For example, the symmetry group in Figure <ref> is pmm and in Figure <ref> it is p3m1 <cit.>.
§ EXAMPLES OF INSTANCES OF THE CVP AND SVP
Figure <ref> shows a two-dimensional instance of the CVP in Euclidean norm. The green points form the lattice ℒ which is spanned by the two red vectors a_1 and a_2. The target vector is the red point labeled b. The estimate k is depicted as the radius of the blue circle around b.
In this case, we have a YES-instance, since there exists a lattice point close enough to the target vector (there is a green point in the blue circle around b). Indeed, the green dot in the blue circle is a solution point to the search problem associated to the CVP.
In Figure <ref>, an instance of the SVP in ℤ^2 in Euclidean norm is depicted. The lattice ℒ is drawn as the set of green points. It is generated by the two red vectors a_1 and a_2. The estimate k is the radius of the blue ball around the origin (annotated by (0,0)).
In this case, we have a YES-instance of the SVP. There are two points, namely s_1 and s_2, which lie on the edge of the blue circle around the origin. As there are no other green points inside the blue circle apart from the origin, s_1 and s_2 are indeed the shortest vectors of the lattice.
This is a nice example to see that there always exist at least two shortest vectors. The reason is very simple: if v is a shortest vector, then so is -v, since ‖ v‖ = ‖ -v‖ in any norm. In our case, s_1 and s_2 both are possible solutions to the search problem of the SVP.
§ PROOFS FOR BHLE
§.§ Proof of “a ∈ Partition ⟹ b ∈ BHLE”
Let a_1,…,a_n be a YES-instance of Partition with partitioning set I. We will show that the following vector x is a solution to the corresponding BHLE:
x = (x_1,…,x_n-1,x_n)
x_i = pm if i∈ I, and mp otherwise, for 1 ≤ i < n
x_n = (1,-1,0,-1)
where
pm = (1,-1,0,-1,0) if n-1 ∈ I, and (0,0,-1,1,1) otherwise
mp = (0,0,-1,1,1) if n-1 ∈ I, and (1,-1,0,-1,0) otherwise
We can now calculate the following:
b,x = ∑_i=1^5n-1 b_i · x_i =
= (∑_i=1^5(n-1) b_i · x_i ) + (b_n,1,b_n,2,b_n,4,b_n,5), (1,-1,0,-1) =
= (∑_i=1^n-1(b_i,1,b_i,2,b_i,4,b_i,5,b_i,3),
(𝑖𝑓 i∈ I 𝑡ℎ𝑒𝑛 pm 𝑒𝑙𝑠𝑒 mp) ) +
+ b_n,1-b_n,2-b_n,5 =
= (∑_i∈ I∩{1...n-1} b_i,1-b_i,2-b_i,5) + (∑_i∈{1… n-1}∖ I -b_i,4+b_i,5+b_i,3) +
+ b_n,1-b_n,2-b_n,5 =
= (∑_i∈ I∩{1...n-1} a_i + M · (5^4i-4 - 5^4i)) +
+ (∑_i∈{1...n-1}∖ I - a_i + M · (5^4i-4 - 5^4i)) + a_n + M · (5^4n-4 - 1 ) =
= (∑_i∈ I a_i) - (∑_i∈{1...n}∖ I a_i) +
+ M·(5^4n-4 - 1 + ∑_i∈{1...n-1} 5^4i-4 - 5^4i) =
= 0
For the last equality, we need two facts:
Firstly, since a is a YES-instance of Partition with partitioning set I, we have
∑_i∈ I a_i = ∑_i∈{1… n}∖ I a_i
Secondly, M is multiplied by a telescopic sum that reduces to zero.
As the entries of x are in {-1,0,1}, we have ‖ x‖_∞≤ 1.
All in all, x constitutes a solution for the BHLE instance given by b and 1.
§.§ Proof of “a ∈ Partition ⟸ b ∈ BHLE”
Let b be a YES-instance of BHLE. That is, there exists an x such that
b,x = 0 and ‖ x‖_∞≤ 1. Again, this step only works for the infinity norm. We will look more closely at this later in the proof.
The proof goal is to find a set I⊆{1… n} such that
∑_i∈ I a_i = ∑_i∈{1… n}∖ I a_i
Unfortunately, we do not know the exact values of x, so we need to derive equational constraints on x.
We have
0 = b,x = ∑_i∈{1… 5n-1} b_i · x_i =
= (∑_i∈{0… n-1} (x_5i+1 + x_5i+3) · a_i+1) + M ·( ∑_i∈{1… 5n-1}x_i · c_i )
where c = (c_1,…, c_5n-1) is the appropriate rest consisting only of sums over powers of 5.
We observe that M was chosen in a manner such that
|∑_i∈{0… n-1} (x_5i+1 + x_5i+3) · a_i+1| < M
From Lemma <ref> we know that, if its assumptions hold, each digit has to be zero whenever the whole number equals zero. Therefore, knowing (<ref>) and |x_i|≤ 1 by the assumption that b is a YES-instance of BHLE, the following equations are derived immediately.
∑_i∈{0… n-1} (x_5i+1 + x_5i+3) · a_i+1 = 0
∑_i∈{1… 5n-1}x_i · c_i = 0
Since every summand in (<ref>) consists of a power of 5 times an element of x, we can rewrite this sum as a number in basis 5 by accumulating all coefficients to a power of 5. We denote the digits by a function d of the index.
0 = ∑_i∈{1… 5n-1}x_i · c_i = ∑_k∈{0… 4n-1}d(k) · 5^k
Again, applying Lemma <ref> consecutively with |d(k)| <5 (we split off the lowest digit in the representation in basis 5) yields that every digit d(k) equals zero for k<4n.
This yields the following equations:
∀ i∈ {1… n-1} :
x_5i+1 + (𝑖𝑓 i<n-1 𝑡ℎ𝑒𝑛 x_5i+5 𝑒𝑙𝑠𝑒 0)+ x_5(i-1)+2 + x_5(i-1)+3 = 0
x_1 + (𝑖𝑓 1<n 𝑡ℎ𝑒𝑛 x_5 𝑒𝑙𝑠𝑒 0) + x_5(n-1)+2 + x_5(n-1)+3 = 0
∀ i∈{ 0… n-1} : x_5i+1 + x_5i+2 = 0
∀ i∈ { 0… n-1 } : (𝑖𝑓 i<n-1 𝑡ℎ𝑒𝑛 x_5i+5 𝑒𝑙𝑠𝑒 0) + x_5i+3 = 0
∀ i∈{ 0… n-1} : x_5i+1 + x_5i+3 + x_5i+4 = 0
From these equations, we can derive that the value x_5i+1 + x_5i+5 does not depend on i (for i<n-1). We call this value the weight w where
w = x_5i+1 + x_5i+5
The observant reader notices that the definition of the weight is the reason we needed to omit the last element in the vector b. Indeed, the element x_5(n-1)+5 is not defined and for i=n-1 the weight is only
w = x_5(n-1)+1
This constrains the bound on the absolute value |w|≤ 1, since ‖ x‖_∞≤ 1.
It is essential to constrain the weight to |w|≤ 1, since otherwise we cannot deduce a partition. Assume |w| = 2, then also a solution with x_5i+1 + x_5i+3 = x_5i+1 - x_5i+5 = 0, i.e. x_5i+1 = x_5i+5 = 1 is allowed. Then, (<ref>) does not yield a partition as it is an empty sum.
Since we work over the integers, we only need to consider the values w∈{-1,0,1}.
Here, the solution w=0 leads to x=0, a contradiction to the assumption that x is a nonzero solution to the BHLE instance b.
Thus, we will only look at the case of w=1. The case w=-1 proceeds analogous with flipped signs.
Using the above equations, we can conclude that either x_5i+1=1 ∧ x_5i+5=0 or x_5i+1=0 ∧ x_5i+5=1.
Then, (<ref>) yields the desired partition for the YES-instance of the Partition Problem.
This concludes the proof.
§ DIFFERENT NOTATIONS
Data-Driven Regret Balancing for Online Model Selection in Bandits

Aldo Pacchiano, Christoph Dann, Claudio Gentile

July 31, 2023
===================================================================================================
We consider model selection for sequential decision making in stochastic environments with bandit feedback, where a meta-learner has at its disposal a pool of base learners, and decides on the fly which action to take based on the policies recommended by each base learner. Model selection is performed by regret balancing but, unlike the recent literature on this subject, we do not assume any prior knowledge about the base learners like candidate regret guarantees; instead, we uncover these quantities in a data-driven manner. The meta-learner is therefore able to leverage the realized regret incurred by each base learner for the learning environment at hand (as opposed to the expected regret), and single out the best such regret.
We design two model selection algorithms operating with this more ambitious notion of regret and, besides proving model selection guarantees via regret balancing, we experimentally demonstrate the compelling practical benefits of dealing with actual regrets instead of candidate regret bounds.
§ INTRODUCTION
In online model selection for sequential decision making, the learner has access to a set of base learners and the goal is to adapt during learning to the best base learner that is the most suitable for the current environment. The set of base learners typically comes from instantiating different modelling assumptions or hyper-parameter choices, e.g., complexity of the reward model or the ϵ-parameter in ϵ-greedy. Which choice, and therefore which base learner, works best is highly dependent on the problem instance at hand, so that good online model selection solutions are important for robust sequential decision making. This has motivated an extensive study of model selection questions <cit.> in bandit and reinforcement learning problems.
While some of these works have developed custom solution for specific model selection settings, for instance, selecting among a nested set of linear policy classes in contextual bandits (e.g., <cit.>), the relevant literature also provides several general purpose approaches that work in a wide range of online model selection settings. Among the most prominent ones are FTRL-based (follow-the-regularized-leader) algorithms, including EXP4 <cit.>, Corral <cit.> and Tsallis-INF <cit.>, as well as algorithms based on regret balancing <cit.>.
These methods usually come with theoretical guarantees of the following form: the expected regret (or high-probability regret) of the model selection algorithm is not much worse than the expected regret (or high probability regret) of the best base learner. Such results are reasonable and known to be unimprovable in the worst-case <cit.>. Yet, it is possible for model selection to achieve expected regret that is systematically smaller than that of any base learner.
This may seem surprising at first, but it can be explained through an example when considering the large variability across individual runs of each base learner on the same environment.
The situation is illustrated in fig:expected_regret_motivation. On the left, we plot the cumulative expected regret of two base learners, along with the corresponding behavior of one of our model selection algorithms (ED^2RB – see sec:ED2RB below) run on top of them. On the right, we unpack the cumulative expected regret curve of one of the two base learners from the left plot, and display ten independent runs of this base learner on the same environment, together with the resulting expected regret curve (first 1000 rounds only).
Since the model selection algorithm has access to two base learners simultaneously, it can leverage a good run of either of two, and thereby achieve a good run more likely than any base learner individually, leading to overall smaller expected regret.
Such high variability in performance across individual runs of a base learner is indeed fairly common in model selection, for instance when base learners correspond to different hyper-parameters that control the explore-exploit trade-off. For a hyper-parameter setting that explores too little for the given environment, the base learner becomes unreliable and either is lucky and converges quickly to the optimal solution or unlucky and gets stuck in a suboptimal one.
This phenomenon is a key motivation for our work. Instead of model selection methods that merely compete with the expected regret of any base learner, we design model selection solutions that compete with the regret realizations of any base learner, and have (data-dependent) theoretical guarantees that validate this ability.
While the analysis of FTRL-based model selection algorithms naturally lends itself to work with expected regret (e.g., <cit.>), the existing guarantees for regret balancing work with realized regret of base learners (e.g., <cit.>). Concretely, regret balancing requires each learner to be associated with a candidate regret bound, and the model selection algorithm competes with the regret bound of the best among the well-specified learners, i.e., those learners whose regret realization is below their candidate bound. Setting a-priori tight candidate regret bounds for base learners is a main limitation of existing regret balancing methods, as the resolution of these bounds is often the one provided by a (typically coarse) theoretical analysis.
As suggested in earlier work, we can create several copies of each base learner with different candidate bounds, but we find this not to perform well in practice due to the high number of resulting base learners. Another point of criticism for existing regret balancing methods is that, up to deactivation of base learners, these methods do not adapt to observations, since their choice among active base learners is determined solely by the candidate regret bounds themselves, which are set a-priori.
In this work, we address both these limitations, and propose two new regret balancing algorithms for model selection with bandit feedback that do not require knowing candidate regret bounds. Instead, the algorithms determine the right regret bounds sequentially in a data-driven manner, allowing them to adapt to the regret realization of the best base learner. We prove this by deriving regret guarantees that share the same form with existing results, but replace expected regret rates or well-specified regret bounds with realized regret rates, which can be much sharper (as in the example in fig:expected_regret_motivation).
From a theoretical standpoint, our work has to be contrasted with existing results where the model selection algorithm is provided with a set of candidate regret bounds for each of the base learners. As we said, our work removes this assumption and yields data-dependent model selection regret bounds. This is in contrast with existing black-box approaches such as Corral <cit.> and Regret Bound Balancing <cit.>.
From an empirical standpoint, we illustrate the validity of our approach by carrying out an experimental comparison with competing approaches to model selection via base learner pooling, and find that our new algorithms systematically outperform the tested baselines.
§ SETUP AND NOTATION
We consider a general sequential decision making framework that covers many important problem classes such as multi-armed bandits, contextual bandits and tabular reinforcement learning as special cases.
This framework or variations of it has been commonly used in the model selection literature <cit.>.
The learner operates with a policy class Π and a set of contexts over which a probability distribution is defined; this distribution is unknown to the learner.
In bandit settings, each policy π is a mapping from contexts to Δ_𝒜, where 𝒜 is an action space and Δ_𝒜 denotes the set of probability distributions over 𝒜. However, the concrete form of Π, or is not relevant for our purposes.
We only need that each policy π∈Π is associated with a fixed expected reward mapping μ^π→ [0, 1] of the form μ^π(x) = [r | x, π],
which is unknown to the learner.
In each round t ∈ℕ of the sequential decision process, the learner first decides on a policy π_t ∈Π. The environment then draws a context x_t from the context distribution,
as well as a reward observation r_t ∈ [0, 1] such that
𝔼[r_t | x_t, π_t] = μ^π_t(x_t). The learner receives (x_t, r_t) before the next round starts.
We call v^π = 𝔼_x[μ^π(x)] the value of a policy π∈Π, where the expectation is over the context distribution, and define the instantaneous regret of π as
(π) = v^⋆ - v^π = 𝔼_x[μ^π_⋆(x) - μ^π(x)]
where π_⋆∈ argmax_π∈Π v^π is an optimal policy and v^⋆ its value. The total regret after T rounds of an algorithm that chooses policies π_1, π_2, … is
(T) = ∑_t=1^T (π_t).
Note that (T) is a random quantity since the policies π_t selected by the algorithm depend on past observations, which are themselves random variables. Yet, we use in (<ref>) a pseudo-regret notion that takes expectation over reward realizations and context draws. This is most convenient for our purposes but we can achieve guarantees without those expectations by paying an additive O(√(T)) term, as is standard. We also denote by u_T = ∑_t=1^T v^π_t the total value accumulated by the algorithm over the T rounds.
Base learners.
The learner (henceforth called meta-learner) is in turn given access to M base learners that the meta-learner can consult when determining the current policy to deploy. Specifically, in each round t, the meta-learner chooses one base learner i_t ∈ [M] = {1,…, M} to follow and plays the policy suggested by this base learner. The policy that base learner i recommends in round t is denoted by π^i_t and thus π_t = π^i_t_t.
We shall assume that each base learner has an internal state (and internal clock) that gets updated only on the rounds where that base learner is chosen. After being selected in round t, base learner i_t will receive from the meta-learner the observation (x_t,r_t).
We use n^i_t = ∑_ℓ = 1^t 1{i_ℓ = i} to denote the number of times base learner i happens to be chosen up to round t, and by
u_t^i = ∑_ℓ = 1^t 1{i_ℓ = i} v^π_ℓ the total value accumulated by base learner i up to this point.
It is sometimes more convenient to use a base learner's internal clock instead of the total round index t. To do so, we will use subscripts (k) with parentheses to denote the internal time index of a specific base learner, while subscripts t refer to global round indices. For example, given the sequence of realizations (x_1,r_1), (x_2,r_2), …, π^i_(k) is the policy base learner i wants to play when being chosen the k-th time,
i.e., π^i_t = π^i_(n^i_t).
The total regret incurred by a meta-learner that picks base learners i_1,…, i_T can then be decomposed into the sum of regrets incurred by each base learner:
(T) = ∑_t=1^T (π_t) = ∑_i = 1^M ∑_k = 1^n^i_T(π^i_(k)).
§.§ Data-Driven Model Selection
Our goal is to perform model selection in this setting: We devise sequential decision making algorithms
that have access to base learners as subroutines and are guaranteed to have regret that is comparable
to the smallest realized regret, among all base learners in the pool, despite not knowing a-priori which base learner will happen to be best for the environment at hand (the context distribution and the reward functions μ^π), and the actual realizations (x_1,r_1), (x_2,r_2), …, (x_T,r_T).
In order to better quantify this notion of realized regret, the following definition will come in handy.
The regret scale of base learner i after being played k rounds is ∑_ℓ=1^k(π_(ℓ)^i)/√(k).
For a positive constant d_min, the regret coefficient of base learner i after being played k rounds is defined as
d^i_(k) = max {∑_ℓ=1^k(π_(ℓ)^i)/√(k), d_min}.
That is, d^i_(k)≥ d_min is the smallest number such that the incurred regret is bounded as ∑_ℓ=1^k(π^i_(ℓ)) ≤ d^i_(k)√(k).
Further we define the monotonic regret coefficient of base learner i after being played k rounds as
d̅^i_(k) = max_ℓ∈ [k] d^i_(ℓ).
We use a √(k) rate in this definition since that is the most commonly targeted regret rate in stochastic settings. Our results can be adapted, similarly to prior work <cit.> to other rates but the √(T) barrier for model selection <cit.> remains of course.
It is worth emphasizing that both d^i_(k) and d̅^i_(k) in the def:regretcoeff are random variables depending on (x_1,r_1), (x_2,r_2), …, (x_ℓ,r_ℓ), where ℓ = min{t : n^i_t = k}. We illustrate them in fig:regret_coefficients.
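To make the definition concrete, the following short Python sketch (the function name is ours) computes the regret coefficients d^i_(k) and their monotonic counterparts d̅^i_(k) from a sequence of instantaneous regrets of a single base learner:

import math

def regret_coefficients(inst_regrets, d_min=1.0):
    # Compute d^i_(k) and the running maxima (the monotonic coefficients)
    # from a sequence of instantaneous regrets of one base learner.
    d, d_bar, cum = [], [], 0.0
    for k, r in enumerate(inst_regrets, start=1):
        cum += r
        d.append(max(cum / math.sqrt(k), d_min))
        d_bar.append(max(d_bar[-1], d[-1]) if d_bar else d[-1])
    return d, d_bar

# Example: a learner that stops incurring regret after a few plays.
d, d_bar = regret_coefficients([0.5, 0.5, 0.5, 0.0, 0.0, 0.0, 0.0, 0.0])

In a favorable run where the instantaneous regrets vanish after a few plays, d^i_(k) eventually decreases while d̅^i_(k) cannot; this is exactly the kind of gap that the data-dependent guarantees below can benefit from.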
§.§ Running Examples
The above formalization encompasses a number of well-known online learning frameworks, including finite horizon Markov decision processes and contextual bandits, and model selection questions therein. We now introduce two examples but refer to earlier works on model selection for a more exhaustive list <cit.>.
Tuning UCB exploration coefficient in multi-armed-bandits. As a simple illustrative example, we consider multi-armed bandits where the learner chooses in each round an action a_t from a finite action set and receives a reward r_t drawn from a distribution with mean μ^a_t and unknown but bounded variance σ^2. In this setting, we directly identify each policy with an action, i.e., Π = and define the context = {∅} as empty. The value of an action / policy a is simply v^a = μ^a.
The variance σ strongly affects the amount of exploration necessary, thereby controlling the difficulty or “complexity” of the learning task. Since the explore-exploit of a learner is typically controlled through a hyper-parameter, it is beneficial to perform model selection among base learners with different trade-offs to adapt to the right complexity of the environment at hand.
We use a simple UCB strategy as a base learner that chooses the next action as argmax_a ∈𝒜 ( μ̂(a) + c √(ln(n(a) / δ)/n(a)) ), where n(a) and μ̂(a) are the number of pulls of arm a so far and the average reward observed. Here c is the confidence scaling and we instantiate different base learners i ∈ [M] with different choices c_1, …, c_M for c. The goal is to adapt to the best confidence scaling c_i_⋆, without knowing the true variance σ^2.[We choose this example for its simplicity. An alternative without model selection would be UCB with empirical Bernstein confidence bounds <cit.>. However, adaptation with model selection works just as well in more complex settings, e.g., linear bandits and MDPs, where empirical variance confidence bounds are not available or much more complicated.]
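A minimal sketch of such a UCB base learner is given below. It is our own illustration of the strategy just described (class and method names are ours), not the implementation used in the experiments:

import math

class UCBBaseLearner:
    # UCB with confidence scaling c; one base learner per choice of c.
    def __init__(self, n_arms, c, delta=0.1):
        self.c, self.delta = c, delta
        self.counts = [0] * n_arms
        self.means = [0.0] * n_arms

    def select_arm(self):
        for a, n in enumerate(self.counts):       # play every arm once first
            if n == 0:
                return a
        def ucb(a):
            n = self.counts[a]
            return self.means[a] + self.c * math.sqrt(math.log(n / self.delta) / n)
        return max(range(len(self.counts)), key=ucb)

    def update(self, arm, reward):
        self.counts[arm] += 1
        n = self.counts[arm]
        self.means[arm] += (reward - self.means[arm]) / n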
Nested linear bandits. In the stochastic linear bandit model, the learner chooses an action a_t from a large but finite action set 𝒜⊂ℝ^d, for some dimension d>0, and receives the reward r_t = a_t^⊤ω plus white noise, where ω∈ℝ^d is a fixed but unknown reward vector.
This fits in our framework by considering policies of the form π_θ(x) = argmax_a ∈𝒜⟨ a, θ⟩ for a parameter θ∈ℝ^d, defining the context set as the singleton {∅} and the mean reward as μ^π(x) = π(x)^⊤ω, which is also the value v^π.
We here consider the following model selection problem, that was also a motivating application in <cit.>. The action set 𝒜⊂ℝ^d^M has some maximal dimension d^M>0, and we have an increasing sequence of M dimensions d^1 < … < d^M. Associated with each d^i is a base learner that only considers policies Π_i of the form π_θ_i(x) = argmax_a ∈𝒜⟨ P_d^i[a], θ_i ⟩ for θ_i ∈ℝ^d^i and P_d^i[·] being the projection onto the first d^i dimensions. That is, the i-th base learner operates only on the first d^i components of the unknown reward vector ω∈ℝ^d^M. If we stipulate that only the first d^i_⋆ dimensions of ω∈ℝ^d^M are non-zero (d^i_⋆ being unknown to the learner), we are in fact competing in a regret sense against the base learner that operates with the policy class Π_i_⋆, the one at the “right" level of complexity for the underlying ω.
Nested stochastic linear contextual bandits. We also consider a contextual version of the previous setting <cit.>, where contexts x_t are drawn i.i.d. and a policy maps each context to some action a_t ∈𝒜. The expected reward is then μ^π(x) = ψ(x,π(x))^⊤ω for a known feature embedding ψ that maps context-action pairs to ℝ^d, and an unknown vector ω∈ℝ^d. Just as above, we consider the nested version of this setting where ψ and ω live in a large ambient dimension d^M but only the first d^i_⋆ entries of ω are non-zero.
§ DATA-DRIVEN REGRET BALANCING
We introduce and analyze two data-driven regret balancing algorithms.
§.§ Data-Driven Regret Balancing Through Doubling
We present our first meta-algorithm (Doubling Data Driven Regret Balancing (D^3RB)) in alg:doublebalancing, which serves as a warm up for our slightly more involved second meta-algorithm.
D^3RB maintains over time three main estimators: (1) regret coefficients d^i_t, meant to estimate the monotonic regret coefficients d̅^i_t from def:regretcoeff, (2) the average reward estimators u^i_t/n^i_t, and (3) the balancing potentials ϕ^i_t, which are instrumental in the implementation of the exploration strategy based on regret balancing (other instances of model selection via regret balancing can be found in earlier papers, e.g., <cit.>).
At each round t the meta-algorithm picks the base learner i_t with the smallest balancing potential so far (ties broken arbitrarily). The algorithm plays the policy π_t suggested by that base learner on the current context x_t, receives the associated reward r_t, and forwards (x_t,r_t) back to that base learner only. Then D^3RB performs a misspecification test, meant to see if the current estimate of the regret of base learner i_t is compatible with the data collected so far. If that is not the case (the test “triggers"), the regret coefficient d^i_t is doubled, and the balancing potential of base learner i_t is updated.
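Since the pseudocode of alg:doublebalancing is not reproduced here, the following rough Python sketch reconstructs the D^3RB loop from the description above and from the analysis in the appendix. The environment and base learner interfaces (context, reward, propose, update) as well as the exact confidence radius conf are assumptions on our part; alg:doublebalancing is authoritative.

import math

def d3rb(base_learners, env, T, d_min=1.0, delta=0.05, c=1.0):
    # Regret balancing with potentials phi^i = d^i * sqrt(n^i), a misspecification
    # test, and doubling of d^i when the test triggers. Sketch only.
    M = len(base_learners)
    d = [d_min] * M; n = [0] * M; u = [0.0] * M; phi = [d_min] * M

    def conf(k):      # assumed confidence radius of order c*sqrt(log(.)/k)
        return c * math.sqrt(math.log(max(M * math.log(max(k, 2)), 2) / delta) / k)

    for t in range(1, T + 1):
        i = min(range(M), key=lambda j: phi[j])        # smallest balancing potential
        x = env.context()
        policy = base_learners[i].propose(x)
        r = env.reward(x, policy)
        base_learners[i].update(x, r)                  # only learner i observes (x, r)
        n[i] += 1; u[i] += r
        lhs = u[i] / n[i] + d[i] / math.sqrt(n[i]) + conf(n[i])
        rhs = max(u[j] / n[j] - conf(n[j]) for j in range(M) if n[j] > 0)
        if lhs < rhs:                                  # misspecification test triggers
            d[i] *= 2
        phi[i] = d[i] * math.sqrt(n[i])
    return d, n, u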
The following result quantifies the regret properties of D^3RB in terms of the monotonic regret coefficients of the base learners at hand.
theoremmaindouble
With probability at least 1 - δ, the regret of alg:doublebalancing with parameters δ and d_min≥ 1 is bounded in all rounds T ∈ℕ as[
Here and throughout, Õ hides log-factors.
]
(T) = Õ( d̅^⋆_T M√(T) + (d̅^⋆_T)^2 √(MT))
where d̅^⋆_T = min_i ∈ [M]d̅^i_T = min_i ∈ [M]max_t ∈ [T] d^i_t is the smallest monotonic regret coefficient among all learners.
One way to interpret thm:maindouble is the following. If the meta-learner were given ahead of time the index of the base learner achieving the smallest monotonic regret coefficient d̅^⋆_T, then the meta-learner would follow that base learner from beginning to end. The resulting regret bound for the meta-learner would be of the form[
Yet, see thm:mainestimate, where d̅^⋆_T is replaced by the smaller d^⋆_T.
]
(d̅^⋆_T)√(T).
Then the price D^3RB pays for aggregating the M base learners is essentially a multiplicative factor of the form M + d̅^⋆_T √(M).
§.§ Data-Driven Regret Balancing Through Estimation
A more refined version of D^3RB is the ED^2RB algorithm (Estimating Data-Driven Regret Balancing), contained in alg:estimatebalancing. The main difference compared to D^3RB is that ED^2RB replaces the misspecification-test-plus-doubling operation with a data-dependent estimate d̂^i_t of the regret coefficients, coupled with a slightly more careful definition of the balancing potentials ϕ^i_t deployed for regret balancing. The function clip(x; a,b) therein clips the real argument x to the interval [a,b].
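As a companion to the sketch of D^3RB above, the following fragment sketches the per-round update that distinguishes ED^2RB: the data-driven estimate d̂^i_t and the clipped potential update. Variable names and the confidence radius are again our assumptions; alg:estimatebalancing is authoritative.

import math

def ed2rb_update(i, d_hat, phi, n, u_hat, M, d_min=1.0, delta=0.05, c=1.0):
    # After base learner i was played (so n[i] >= 1): re-estimate the regret
    # coefficient from data and update the balancing potential with
    # clip(x; a, b) = min(max(x, a), b). Sketch only.
    def conf(k):
        return c * math.sqrt(math.log(max(M * math.log(max(k, 2)), 2) / delta) / k)

    best_lcb = max(u_hat[j] / n[j] - conf(n[j]) for j in range(M) if n[j] > 0)
    d_hat[i] = max(d_min,
                   math.sqrt(n[i]) * (best_lcb - u_hat[i] / n[i] - conf(n[i])))
    phi[i] = min(max(d_hat[i] * math.sqrt(n[i]), phi[i]), 2 * phi[i])   # clip
    return d_hat, phi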
theoremmainestimate
With probability at least 1 - δ, the regret of alg:estimatebalancing with parameters δ and d_min≥ 1 is bounded in all rounds T ∈ℕ as
(T) = Õ(d^⋆_T M√(T) + (d^⋆_T)^2 √(MT))
where d^⋆_T = min_i ∈ [M]max_j ∈ [M]d̅^i_T_j is the smallest regret coefficient among all learners, and T_j is the last time t when base learner j was played and ϕ^j_t+1 < 2ϕ^j_t.
Up to the difference between d^⋆_T and d̅^⋆_T, the guarantees in thm:maindouble and thm:mainestimate are identical. Further, since d^⋆_T ≤d̅^⋆_T, the guarantee for ED^2RB is never worse than that for D^3RB. It can however be sharper, e.g., in environments with favorable gaps where we expect that a good base learner may achieve a O(log(T)) regret instead of a √(T) rate and thus d^i_t of that learner would decrease with time. The regret coefficient d^⋆_T can benefit from this while d̅^⋆_T cannot decrease with T, and thus provide a worse guarantee.
Importantly, both our data-dependent guarantees recover existing data-independent model-selection results up to the precise M dependency. Specifically, ignoring M factors, our bounds scale at most as (d̅^⋆_T)^2 √(T) while the previous literature on the subject (e.g., <cit.>, Corollary 2) scales as (d^i_⋆)^2√(T). In the case of existing regret balancing, d^i_⋆√(T) is the best well-specified regret bound. We always have d^i_⋆≥d̅^⋆_T but, as mentioned earlier, the regret bound has to be specified ahead of time which typically is informed by expected or high-probability regret guarantees of the base learners. These therefore do not leverage the favorable cases that our data-dependent bounds automatically adapt to. Similarly for FTRL algorithms <cit.>, d^i_⋆ is the expected regret scale and thus also never sharper than our d̅^⋆_T and not capturing favorable realizations. As we will see in the experimental evaluation in the following section, there is often a stark difference between the expected performance and the data-dependent performance which confirms that the improvement in our bounds is important in practice.
Proof technique. The proofs for both our regret bounds can be found in app:doubling_proofs and <ref>, respectively.
We build on the existing technique for analyzing regret balancing <cit.>. However, this analysis heavily relies on fixed candidate regret bounds, and removing those introduced several technical challenges. To overcome them, we had to disentangle the balancing potentials ϕ^i_t from the estimated regret coefficients and combine them with clipping or the doubling estimator. This allowed us to show the necessary monotonicity properties and generalized balancing conditions that enable our improved data-dependent bounds.
§ EXPERIMENTS
We evaluate our algorithms on several different synthetic benchmarks (environments, base-learners and model selection tasks), and compare their performance against existing meta-learners. For all details of the experimental setup and additional results, see app:experimental_details.
Environments and base-learners: As the first environment, we use a simple 5-armed multi-armed bandit problem (MAB) with standard Gaussian noise. We then use two linear bandit settings, as also described in sec:running_example: linear bandits with stochastic rewards, either with a stochastic context (CLB) or without (LB). As base learners, we
use UCB for the MAB environment (see also sec:running_example) and Linear Thompson Sampling (LinTS) <cit.> for the LB and CLB settings.
(Figure: Experiment 2.)
Model selection task: We consider 3 different model selection tasks. In the first, conf (“confidence"), we vary the explore-exploit trade-off in the base learners. For UCB, different base learners correspond to different settings of c, the confidence scaling that multiplies the exploration bonus (fig:confidence_MAB). Analogously, for LinTS, we vary the scale c of the parameter perturbation (see fig:confidence_linear).
For the second task dim (“dimension"), we vary the number of dimensions d_i the base learner considers when choosing the action (see second and third example in sec:running_example, as well as fig:nested_linear for results). Finally, we also consider a “self” task (fig:expected_regret_sample_runs), where all base learners are copies of the same algorithm.
Meta-learners: We evaluate both our algorithms, D^3RB from alg:doublebalancing and ED^2RB from alg:estimatebalancing. We compare them against the Corral algorithm <cit.> with the stochastic wrapper from <cit.>, as a representative of FTRL-based meta-learners. We also evaluate Regret Balancing from <cit.> with several copies of each base learner, each with a different candidate regret bound, selected on an exponential grid (RB Grid). We also include in our list of competitors three popular algorithms, the Greedy algorithm (always selecting the best base learner so far with no exploration), UCB <cit.> and EXP3 <cit.>. These are legitimate choices as meta-algorithms, but either they do not come with theoretical guarantees in the model selection setting (UCB, Greedy) or enjoy worse guarantees <cit.>.
Discussion. An overview of our results can be found in tab:general_overview, where we report the cumulative regret of each algorithm at the end of each experiment. fig:confidence_MAB–<ref> contain the entire learning curves (as regret scale = cumulative regret normalized by √(T)).
We observe that D^3RB and ED^2RB both outperform all other meta-learners on all but the second benchmark. UCB as a meta-learner performs surprisingly well in benchmarks on MABs but performs poorly on the others.
Thus, our methods feature the smallest or close to the smallest cumulative regret among meta-learners on all benchmarks.
Comparing D^3RB and ED^2RB, we observe overall very similar performance, suggesting that ED^2RB may be preferable due to its sharper theoretical guarantee. While the model selection tasks conf and dim are standard in the literature, we also included one experiment with the self task, where we simply select among different instances of the same base learner. This task was motivated by our initial observation (see also fig:expected_regret_motivation) that base learners often exhibit very high variability between runs, which model selection can capitalize on. Indeed, fig:expected_regret_sample_runs shows that
our algorithms, as well as UCB, achieve much smaller overall regret than the base learner. This suggests that model selection can be an effective way to turn a notoriously unreliable algorithm like the greedy base learner (UCB with c=0 is Greedy) into a robust learner.
§ CONCLUSIONS AND LIMITATIONS
We proposed two new algorithms for model selection based on the regret balancing principle but without the need to specify candidate regret bounds a-priori. This calls for more sophisticated regret balancing mechanics that makes our methods data-driven and as an important benefit allows them to capitalize on variability in a base learner's performance. We demonstrate this empirically, showing that our methods perform well across several synthetic benchmarks, as well as theoretically. We prove that both our algorithms achieve regret that is not much worse than the realized regret of any base learner. This data-dependent guarantee recovers existing data-independent results but can be significantly tighter.
In this work, we focused on the fully stochastic setting, with contexts and rewards drawn i.i.d. We believe an extension of our results to arbitrary contexts is fairly easy by replacing the deterministic balancing with a randomized version. In contrast, covering the fully adversarial setting is likely possible by building on top of <cit.> but requires substantial innovation.
§ APPENDIX
The appendix contains the extra material that was omitted from the main body of the paper.
§ DETAILS ON FIG:EXPECTED_REGRET_MOTIVATION
We consider a 5-armed bandit problem with rewards drawn from a Gaussian distribution with standard deviation 6 and mean 10/10, 6/10, 5/10, 2/10, 1/10 for each arm respectively.
We use a simple UCB strategy as a base learner that chooses the next action as argmax_a ∈𝒜 ( μ̂(a) + c √(ln(n(a) / δ)/n(a)) ), where n(a) and μ̂(a) are the number of pulls of arm a so far and the average reward observed.
The base learners use δ = 1/10 and c = 3 or c = 4 respectively.
§ ANALYSIS COMMON TO BOTH ALGORITHMS
We define the event in which we analyze both algorithms as the event in which for all rounds t ∈ℕ and base learners i ∈ [M] the following inequalities hold
- c √(n^i_t ln M ln n^i_t/δ)≤û^i_t - u^i_t ≤ c √(n^i_t ln M ln n^i_t/δ)
for the algorithm parameter δ∈ (0, 1) and a universal constant c > 0. Here û^i_t = ∑_ℓ = 1^t 1{i_ℓ = i} r_ℓ denotes the total observed reward accumulated by base learner i up to round t.
The event from def:evente has probability at least 1 - δ.
Consider a fixed i ∈ [M] and t and write
û^i_t - u^i_t
= ∑_ℓ = 1^t 1{i_ℓ = i}(r_ℓ - v^π_ℓ)
= ∑_ℓ = 1^t 1{i_ℓ = i}(r_ℓ - 𝔼[r_ℓ | π_ℓ] )
Let ℱ_t be the sigma-field induced by all variables up to round t before the reward is revealed, i.e., ℱ_t = σ( {x_ℓ, π_ℓ, i_ℓ}_ℓ∈ [t-1]∪{x_t, π_t, i_t}).
Then, X_ℓ = 1{i_ℓ = i}(r_ℓ - 𝔼[r_ℓ | π_ℓ] ) ∈ [-1, +1] is a martingale-difference sequence w.r.t. ℱ_ℓ. We will now apply a Hoeffding-style uniform concentration bound from <cit.>.
Using the terminology and definitions in this article, by case Hoeffding I in Table 4, the process S_t = ∑_ℓ=1^t X_ℓ is sub-ψ_N with variance process V_t = ∑_ℓ=1^t 1{i_ℓ = i}/ 4.
Thus by using the boundary choice in Equation (11) of <cit.>, we get
S_t ≤ 1.7 √(V_t ( lnln(2 V_t) + 0.72 ln(5.2 / δ) ) ) = 0.85√(n^i_t( lnln(n^i_t/2) + 0.72 ln(5.2 / δ)))
for all k where V_k ≥ 1 with probability at least 1 - δ.
Applying the same argument to -S_k gives that
| û^i_t - u^i_t |
≤ 3 ∨ 0.85√(n^i_t( lnln(n^i_t /2) + 0.72 ln(10.4 / δ)))
holds with probability at least 1 - δ for all t.
We now take a union bound over i ∈ [M] and rebind δ→δ / M. Then picking the absolute constant c sufficiently large gives the desired statement.
For each i ∈ [M], let F_i: ℕ∪{0}→ℝ_+ be a nondecreasing potential function that does not increase too quickly, i.e.,
F_i(ℓ) ≤ F_i(ℓ+1) ≤α· F_i(ℓ) ∀ℓ∈ℕ∪{0}
and that 0<F_i(0) ≤α· F_j(0) for all (i, j) ∈ [M]^2.
Consider a sequence (i_t)_t ∈ℕ such that i_t ∈ argmin_i ∈ [M] F_i(n^i_t-1) and n^i_t = ∑_ℓ = 1^t1{i_ℓ = i}, i.e., i_t ∈ [M] is always chosen to have the smallest current potential. Then, for all t ∈ℕ
max_i ∈ [M] F_i(n^i_t) ≤α·min_j ∈ [M] F_j(n^j_t).
Our proof works by induction over t. At t = 1, we have n^i_0 = 0 for all i ∈ [M] and thus, by assumption, the statement holds. Assume now the statement holds for t.
Notice that since n^i_t and F_i are non-decreasing, we have for all i ∈ [M]
min_i F_i(n^i_t) ≥min_i F_i(n^i_t-1).
Further, for all i ≠ i_t that were not chosen in round t, we even have F_i(n^i_t-1) = F_i(n^i_t) for all i ≠ i_t.
We now distinguish two cases:
Case i_t ∉ argmax_i F_i(n^i_t-1). Since the potential of all i ≠ i_t that attain the max is unchanged, we have
max_i F_i(n^i_t) = max_i F_i(n^i_t-1)
and therefore max_i F_i(n^i_t)/min_j F_j(n^j_t)≤max_i F_i(n^i_t-1)/min_j F_j(n^j_t-1)≤α.
Case i_t ∈ argmax_i F_i(n^i_t-1).
Since i_t attains both the maximum and the minimum, and hence all potentials are identical, we have
max_i F_i(n^i_t) = F_i_t(n^i_t_t) ≤ F_i_t(n^i_t_t-1 + 1) ≤α F_i_t(n^i_t_t-1) = αmin_j F_j(n^j_t-1).
§ PROOFS FOR THE DOUBLING ALGORITHM (ALGORITHM <REF>)
In event , for each base learner i and all rounds t ∈ℕ, the regret multiplier d^i_t satisfies
d^i_t ≤ 2 d̅^i_t .
Note that instead of showing this for all rounds t, we can also show this equivalently for all number k of plays of base learner i.
If the statement is violated for base learner i, then there is a minimum number k of plays at which this statement is violated.
Note that by definition d̅_(0)^i = d_min and by initialization d^i_(0) = d_min, hence this k cannot be 0.
Consider now the round t where the learner i was played the k-th time, i.e., the first round at which the statement was violated.
This means d^i_t > 2 d̅^i_t but d^i_t-1≤ 2 d̅^i_t-1 still holds. Since d^i_t can be at most 2d^i_t-1, we have
d^i_t-1 > d̅^i_t. We will now show that in this case, the misspecification test could not have triggered and therefore d^i_t = d^i_t-1≤ 2 d̅^i_t-1≤ 2 d̅^i_t which is a contradiction. To show that the test cannot trigger, consider the LHS of the test condition and bound it from below as
u^i_t_t/n_t^i_t + d^i_t_t-1√(n^i_t_t)/n_t^i_t + c√(lnMln n^i_t_t/δ/n_t^i_t) ≥u^i_t_t/n_t^i_t + d^i_t_t-1√(n^i_t_t)/n_t^i_tEvent
≥u^i_t_t/n_t^i_t + d̅^i_t_t√(n^i_t_t)/n_t^i_td^i_t_t-1 > d̅^i_t_t
≥u^i_t_t + ∑_ℓ=1^n_t^i_t(π^i_t_(ℓ))/n_t^i_tdefinition of d^i_t
≥ v^⋆definition of regret
≥u^j_t/n_t^jdefinition of v^⋆
≥u^j_t/n_t^j - c √(lnMln n^j_t/δ/n_t^j).
Event
This holds for any j ∈ [M] and thus, the test does not trigger.
In event , for each base learner i and all rounds t ∈ℕ, the number of times the regret multiplier d^i_t has doubled so far is bounded by
1 + log_2 (d̅^i_t/d_min) .
The potentials in alg:doublebalancing are balanced at all times up to a factor 3, that is,
ϕ^i_t≤ 3 ϕ^j_t for all rounds t ∈ and base learners i, j ∈ [M].
We will show that lem:balancing_abstract_lemma with α = 3 holds when we apply the lemma to F_i(n^i_t-1) = ϕ^i_t.
First F_i(0) = ϕ^i_1 = d_min for all i ∈ [M] and, thus, the initial condition holds. To show the remaining condition, it suffices to show that ϕ^i_t is non-decreasing in t and cannot increase more than a factor of 3 per round.
If i was not played in round t, then ϕ^i_t = ϕ^i_t-1 and both conditions holds.
If i was played, i.e., i = i_t, then
ϕ^i_t = d^i_t √(n^i_t)≤ 2d^i_t-1√(n^i_t)≤
2d^i_t-1√(n^i_t - 1)√(n^i_t/n^i_t - 1) = 2 ϕ^i_t-1√(n^i_t/n^i_t - 1)≤ 3 ϕ^i_t-1 if n^i_t > 1
2d_min√(1) = 2 ϕ_t-1^i ≤ 3 ϕ^i_t-1 if n^i_t = 1
In event , the regret of all base learners i is bounded in all rounds T as
∑_k = 1^n^i_T(π^i_(k)) ≤6(d̅^j_T)^2/d_min√(n^i_T)
+ 6d̅_T^j √(n_T^j)
+ (6 cd̅^j_T/d_min + 2c)√(n_T^i lnMln T/δ)+ 1 + log_2 d̅^i_T/d_min ,
where j ∈ [M] is an arbitrary base learner with n^j_T > 0.
Consider a fixed base learner i and time horizon T, and let t ≤ T be the last round where i was played but the misspecification test did not trigger. If no such round exists, then set t = 0. By corr:num_doublings, i can be played at most 1 + log_2 d̅^i_T/d_min times between t and T and thus
∑_k = 1^n^i_T(π^i_(k)) ≤∑_k = 1^n^i_t(π^i_(k)) + 1 + log_2 d̅^i_T/d_min.
If t = 0, then the desired statement holds. Thus, it remains to bound the first term in the RHS above when t > 0. Since i = i_t and the test did not trigger we have, for any base learner j with n^j_t > 0,
∑_k = 1^n^i_t(π^i_(k)) = n^i_t v^⋆ - u^i_t definition of regret
= n^i_t v^⋆ - n^i_t/n^j_tu^j_t + n^i_t/n^j_tu^j_t - u^i_t
= n^i_t/n^j_t( n^j_t v^⋆ - u^j_t) + n^i_t/n^j_tu^j_t - u^i_t
= n^i_t/n^j_t( ∑_k = 1^n^j_t(π^j_(k))) + n^i_t/n^j_tu^j_t - u^i_t definition of regret
≤n^i_t/n^j_t(d^j_t √(n^j_t)) + n^i_t/n^j_tu^j_t - u^i_t definition of regret rate
≤√(n^i_t/n^j_t) d^j_t √(n^i_t) + n^i_t/n^j_tu^j_t - u^i_t.
We now use the balancing condition in lem:doubling_balanced to bound the first factor √(n^i_t / n^j_t). This condition gives that ϕ^i_t+1≤ 3ϕ^j_t+1. Since both n^j_t > 0 and n^i_t > 0, we have ϕ^i_t+1 = d^i_t √(n^i_t) and ϕ^j_t+1 = d^j_t √(n^j_t).
Thus, we get
√(n^i_t/n^j_t) = √(n^i_t/n^j_t)·d^i_t/d^j_t·d^j_t/d^i_t = ϕ^i_t+1/ϕ^j_t+1·d^j_t/d^i_t≤ 3 d^j_t/d^i_t≤ 6 d̅^j_t/d_min.
Plugging this back into the expression above, we have
∑_k = 1^n^i_t(π^i_(k)) ≤6(d̅^j_t)^2/d_min√(n^i_t) + n^i_t/n^j_tu^j_t - u^i_t.
To bound the last two terms, we use the fact that the misspecification test did not trigger in round t. Therefore
u^i_t ≥u^i_t - c√(n_t^i lnMln n^i_t/δ)event
=n^i_t ( u^i_t/n^i_t + c√(lnMln n^i_t/δ/n^i_t) + d^i_t/√(n^i_t)) - 2c√(n_t^i lnMln n^i_t/δ) - d_t^i √(n_t^i)
≥n^i_t/n^j_tu^j_t - √(n^i_t/n^j_t) c√(n^i_t lnMln n^j_t/δ) - 2c√(n_t^i lnMln n^i_t/δ) - d_t^i √(n_t^i)test not triggered
Rearranging terms and plugging this expression in the bound above gives
∑_k = 1^n^i_t(π^i_(k)) ≤6( d̅^j_t)^2/d_min√(n^i_t) + √(n^i_t/n^j_t) c√(n^i_t lnMln n^j_t/δ) + 2c√(n_t^i lnMln n^i_t/δ) + d_t^i √(n_t^i)
≤6(d̅^j_t)^2/d_min√(n^i_t) + 6 d̅^j_t/d_min c√(n^i_t lnMln n^j_t/δ) + 2c√(n_t^i lnMln n^i_t/δ) + d_t^i √(n_t^i)eqn:dn_connection
≤6(d̅^j_t)^2/d_min√(n^i_t) + 6 d̅^j_t/d_min c√(n^i_t lnMln n^j_t/δ) + 2c√(n_t^i lnMln n^i_t/δ) + 3d_t^j √(n_t^j)eqn:dn_connection
≤6(d̅^j_t)^2/d_min√(n^i_t)
+ 3d_t^j √(n_t^j)
+ (6 cd̅^j_t/d_min + 2c)√(n_t^i lnMln t/δ)n^i_t ≤ t
≤6(d̅^j_t)^2/d_min√(n^i_t)
+ 6 d̅_t^j √(n_t^j)
+ (6 cd̅^j_t/d_min + 2c)√(n_t^i lnMln t/δ)lem:dbound
Finally, since t ≤ T and therefore d̅^j_t ≤d̅^j_T and n^j_t ≤ n^j_T (and similarly for i), the statement follows.
*
By lem:highprob, event from def:evente has probability at least 1 - δ. In event , we can apply lem:base_learner_regret for each base learner.
Summing up the bound from that lemma gives
(T) ≤∑_i = 1^M [ 6(d̅^j_T)^2/d_min√(n^i_T)
+ 6 d̅_T^j √(n_T^j)
+ (6 cd̅^j_T/d_min + 2c)√(n_T^i lnMln T/δ)+ 1 + log_2 d̅^i_T/d_min]
≤ 6M d̅^j_T √(T) + M + M log_2 √(T)/d_min + [ 6(d̅^j_T)^2/d_min
+ 4 d̅^j_T/d_min 2c√(lnMln T/δ)]∑_i = 1^M√(n_T^i)
≤( 6√(M)d̅^j_T + 6(d̅^j_T)^2/d_min
+ 8c d̅^j_T/d_min√(lnMln T/δ))√(MT) + M + M log_2 T/d_min.
Plugging in d_min≥ 1 yields
(T) ≤( 6√(M)d̅^j_T + 6(d̅^j_T)^2 + 8c d̅^j_T√(lnMln T/δ))√(MT) + M + M log_2 T
= O ( (M d̅^j_T + √(M)(d̅^j_T)^2 + d̅^j_T √(lnMln T/δ)) √(T)+ M ln(T))
= Õ(d̅^j_T M√(T) + (d̅^j_T)^2 √(MT)) ,
as desired.
§ PROOFS FOR THE ESTIMATING ALGORITHM (ALGORITHM <REF>)
In event , the regret rate estimate in alg:estimatebalancing does not overestimate the current regret rate, that is, for all base learners i ∈ [M] and rounds t ∈ℕ, we have
d̂^i_t ≤ d^i_t.
Note that the algorithm only updates d̂^i_t when learner i is chosen, and only then does d̂^i_t change. Further, the condition holds initially since d̂^i_1 = d_min≤ d^i_1. Hence, it is sufficient to show that this condition holds whenever d̂^i_t is updated.
The algorithm estimates d^i_t as
d̂^i_t = max{d_min, √(n_t^i)( max_j ∈ [M](û^j_t/n_t^j - c√(lnMln n^j_t/δ/n_t^j))
- û^i_t/n_t^i - c√(lnMln n^i_t/δ/n_t^i)) } .
If d̂^i_t = d_min, then the result holds since by definition d^i_t ≥ d_min. In the other case, we have
d̂^i_t = √(n_t^i)( max_j ∈ [M](û^j_t/n_t^j - c√(lnMln n^j_t/δ/n_t^j))
- û^i_t/n_t^i - c√(lnMln n^i_t/δ/n_t^i))
≤√(n_t^i)( max_j ∈ [M]u^j_t/n_t^j - u^i_t/n_t^i)event
≤√(n_t^i)( v^⋆ - u^i_t_t/n_t^i)definition of optimal value v^⋆
= n_t^i v^⋆ - u^i_t/√(n_t^i) = ∑_k=1^n_t^i(π^i_(k))/√(n_t^i)regret definition
≤ d^i_t , definition of d^i_t
as claimed.
In event , the balancing potentials ϕ^i_t in alg:estimatebalancing satisfy for all t ∈ and i ∈ [M] where n^i_t ≥ 1
ϕ^i_t+1≤ d^i_t √(n^i_t).
If i ≠ i_t, then ϕ^i_t+1 = ϕ^i_t, d^i_t = d^i_t-1 and n^i_t = n^i_t-1. It is therefore sufficient to only check this condition for i = i_t.
By definition of the balancing potential, we have when i = i_t
ϕ^i_t+1 ≤max{ϕ^i_t, d̂^i_t √(n^i_t)}≤max{ϕ^i_t, d^i_t √(n^i_t)} ,
where the last inequality holds because of lem:dboundest. If n^i_t = 1, then ϕ^i_t = d_min and d^i_t √(n^i_t)≥ d_min√(1) by definition, and the statement holds. Otherwise, we can assume that ϕ^i_t≤ d^i_t-1√(n^i_t-1) by induction. This gives
ϕ^i_t+1≤max{d^i_t-1√(n^i_t-1), d^i_t √(n^i_t)}.
We notice that d^i_t √(n^i_t) = max{d_min√(n^i_t), ∑_k=1^n^i_t(π^i_(k))}. Since each term inside the max is non-decreasing in t, d^i_t √(n^i_t) is also non-decreasing in t, and therefore ϕ^i_t+1≤ d^i_t √(n^i_t), as anticipated.
In the event of def:evente, for all rounds T and all i ∈ [M], the number of times the balancing potential ϕ^i_t doubled until time T in alg:estimatebalancing is bounded by
log_2 (T max{1, 1 / d_min}).
The balancing potential ϕ^i_t is non-decreasing in t and ϕ^i_1 = d_min. Further, by lem:phibound, we have
ϕ^i_t+1≤ d^i_t √(n^i_t)≤max{d_min√(n^i_t) , n^i_t}.
Thus, the number of times ϕ^i_t can double is at most
log_2 ( max{√(n^i_t) , n^i_t/d_min}) ≤log_2 (t max{1, 1 / d_min}) .
The balancing potentials in alg:estimatebalancing are balanced at all times up to a factor 2, that is,
ϕ^i_t≤ 2 ϕ^j_t for all rounds t and base learners i, j ∈ [M].
We will show that lem:balancing_abstract_lemma with α = 2 holds when we apply the lemma to F_i(n^i_t-1) = ϕ^i_t.
First F_i(0) = ϕ^i_1 = d_min for all i ∈ [M] and, thus, the initial condition holds. To show the remaining condition, it suffices to show that ϕ^i_t is non-decreasing in t and cannot increase more than a factor of 2 per round. This holds by the clipping in the definition of ϕ^i_t+1 in the algorithm.
In the event of def:evente, the regret of all base learners i is bounded in all rounds T as
∑_k = 1^n^i_T(π^i_(k))
≤2(d^j_t)^2/d_min√(n^i_t) + 2 d^j_t √(n^j_t) + 2c(1 + 2d^j_t/d_min)√(n^i_t lnMln t/δ) + log_2 max{T, T/ d_min} ,
where j ∈ [M] is an arbitrary base learner with n^j_T > 0 and t ≤ T is the last round where i_t = i and ϕ^i_t+1 < 2ϕ^i_t.
Consider fixed base learner i and time horizon T, and let t ≤ T be the last round where i was played and ϕ^i_t did not double, i.e., ϕ^i_t+1 < 2ϕ^i_t. If no such round exists, then set t = 0. By lem:est_num_double, i can be played at most log_2 (T max{1, 1 / d_min}) times between t and T and thus
∑_k = 1^n^i_T(π^i_(k)) ≤∑_k = 1^n^i_t(π^i_(k)) + log_2 (T max{1, 1 / d_min}).
If t = 0, then the desired statement holds. Thus, it remains to bound the first term above when t > 0. We can write the regret of base learner i up to t in terms of the regret of any learner j with n^j_t > 0 as follows:
∑_k = 1^n^i_t(π^i_(k)) = n^i_t v^⋆ - u^i_t    (definition of regret)
= n^i_t v^⋆ - n^i_t/n^j_tu^j_t + n^i_t/n^j_tu^j_t - u^i_t
= n^i_t/n^j_t( n^j_t v^⋆ - u^j_t) + n^i_t/n^j_tu^j_t - u^i_t
= n^i_t/n^j_t( ∑_k = 1^n^j_t(π^j_(k))) + n^i_t/n^j_tu^j_t - u^i_t    (definition of regret)
≤n^i_t/n^j_t(d^j_t √(n^j_t)) + n^i_t/n^j_tu^j_t - u^i_t    (definition of regret rate)
≤√(n^i_t/n^j_t) d^j_t √(n^i_t) + n^i_t/n^j_tu^j_t - u^i_t.
We now use the balancing condition in lem:estimating_balanced to bound the first factor √(n^i_t / n^j_t). This condition gives that ϕ^i_t+1≤ 2ϕ^j_t+1.
Since ϕ^i_t+1 < 2ϕ^i_t and, thus, the balancing potential was not clipped from above, we have ϕ^i_t+1≥d^i_t √(n^i_t). Further,
since n^j_t > 0 we can apply lem:phibound to get ϕ^j_t+1≤ d^j_t √(n^j_t).
Thus, we get
√(n^i_t/n^j_t) = √(n^i_t/n^j_t)·d^i_t/d^j_t· d^j_t/d^i_t≤ϕ^i_t+1/ϕ^j_t+1·d^j_t/d^i_t≤ 2 d^j_t/d^i_t≤ 2 d^j_t/d_min.
Plugging this back into the expression above, we have
∑_k = 1^n^i_t(π^i_(k)) ≤2(d^j_t)^2/d_min√(n^i_t) + n^i_t/n^j_tu^j_t - u^i_t.
To bound the last two terms, we use the regret coefficient estimate:
n^i_t/n^j_tu^j_t - u^i_t
= n^i_t (u^j_t/n^j_t - u^i_t/n^i_t)
≤ n^i_t (û^j_t/n^j_t - û^i_t/n^i_t) + c√(n^i_t lnMln n^i_t/δ) + c n^i_t √(lnMln n^j_t/δ/n_t^j)    (by the event of def:evente)
= n^i_t (û^j_t/n^j_t - c √(lnMln n^j_t/δ/n_t^j) - û^i_t/n^i_t - c√(lnMln n^i_t/δ/n_t^i)) + 2c√(n^i_t lnMln n^i_t/δ) + 2c n^i_t √(lnMln n^j_t/δ/n_t^j)
≤d^i_t √(n^i_t) + 2c√(n^i_t lnMln n^i_t/δ) + 2c n^i_t √(lnMln n^j_t/δ/n_t^j)    (definition of d^i_t)
≤d^i_t √(n^i_t) + 2c(1 + √(n^i_t/n^j_t))√(n^i_t lnMln t/δ)    (since n^i_t ≤ t and n^j_t ≤ t)
≤d^i_t √(n^i_t) + 2c(1 + 2 d^j_t/d_min)√(n^i_t lnMln t/δ)    (by eqn:dn_connection_est)
≤ϕ^i_t+1 + 2c(1 + 2 d^j_t/d_min)√(n^i_t lnMln t/δ)    (since ϕ^i_t+1 ≥ d^i_t √(n^i_t))
≤ 2ϕ^j_t+1 + 2c(1 + 2 d^j_t/d_min)√(n^i_t lnMln t/δ)    (by lem:estimating_balanced)
≤ 2 d^j_t √(n^j_t) + 2c(1 + 2 d^j_t/d_min)√(n^i_t lnMln t/δ).    (by lem:phibound)
Plugging this back into the expression above, we get the desired statement:
∑_k = 1^n^i_T(π^i_(k))
≤2(d^j_t)^2/d_min√(n^i_t) + 2 d^j_t √(n^j_t) + 2c(1 + 2d^j_t/d_min)√(n^i_t lnMln t/δ) + log_2 max{T, T/ d_min} .
Proof of Theorem <ref>.
By lem:highprob, the event defined in def:evente has probability at least 1 - δ. In this event, we can apply lem:base_learner_regret_est for each base learner.
Summing up the bound with j = _i' ∈ [M]max_i d^j_T_i' from that lemma gives
(T) ≤∑_i = 1^M [ 2(d^j_T_i)^2/d_min√(n^i_T_i) + 2 d^j_T_i√(n^j_T_i) + 2c(1 + 2d^j_T_i/d_min)√(n^i_T_ilnMln T/δ) + log_2 max{T, T/ d_min}]
≤ 2M d^⋆_T √(T) + M log_2 max{T, T/ d_min} + [ 2(d^⋆_T)^2/d_min
+ 6 d^⋆_T/d_min c√(lnMln T/δ)]∑_i = 1^M√(n_T^i)
≤( 2√(M) d^⋆_T + 2(d^⋆_T)^2/d_min
+ 6c d^⋆_T/d_min√(lnMln T/δ))√(MT) + M log_2 max{T, T/ d_min}.
Plugging in d_min≥ 1 gives
(T) ≤( 2√(M) d^⋆_T + 2(d^⋆_T)^2 + 6c d^⋆_T√(lnMln T/δ))√(MT) + M log_2 T
= O ( (M d^⋆_T + √(M)(d^⋆_T)^2 + d^⋆_T √(lnMln T/δ)) √(T)+ M ln(T))
= Õ( d^⋆_T M√(T) + (d^⋆_T)^2 √(MT)) ,
as claimed.
§ EXPERIMENTAL DETAILS
§.§ Meta-Learners
We now list the meta-learners used in our experiments.
Corral. We used the Corral algorithm as described in <cit.> and <cit.>. Since we work with stochastic base algorithms, we use the Stochastic Corral version of <cit.>, where the base algorithms are updated with the observed reward r_t instead of the importance-weighted feedback required by the original Corral algorithm of <cit.>. The pseudo-code is in Algorithm <ref>. In accordance with theoretical results, we set η = Θ(1/√(T)). We test the performance of the Corral meta-algorithm with different settings of the initial learning rate η∈{ .1/√(T), 1/√(T), 10/√(T)}, which we call CorralLow, Corral and CorralHigh respectively in the table and plots below. In tab:exp3_overview_appendix we compare their performance on different experiment benchmarks. We see that Corral and CorralHigh achieve a better performance than CorralLow; the performance of Corral and CorralHigh is similar.
EXP3. At the beginning of each time step, the EXP3 meta-algorithm samples a base learner index i_t ∼ p_t from its base learner distribution p_t. The meta-algorithm maintains an importance-weighted estimator R_t^i of the cumulative reward of each base learner i ∈ [M]. After receiving feedback r_t from base learner i_t, the importance-weighted estimators are updated as R_t+1^i = R_t^i + 1(i = i_t) r_t/p_t^i_t. The distribution is then p_t+1^i = (1-γ)exp( η R_t+1^i )/∑_i'exp(η R_t+1^i') + γ/M, where η is a learning rate and γ is a forced exploration parameter. In accordance with theoretical results (see for example <cit.>), in our experiments we set the learning rate to η = √(log(M)/MT) and the forced exploration parameter to γ = 0.1/√(T). We test the performance of the EXP3 meta-algorithm with different settings of the forced exploration parameter γ∈{0, .1/√(T), 1/√(T)}; in tab:exp3_overview_appendix we call them EXP3Low, EXP3 and EXP3High. All these variants have a similar performance.
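For concreteness, the EXP3 meta-learner update just described can be sketched in a few lines of Python. This is an illustrative sketch only, not the code used in our experiments; the class and method names (EXP3MetaLearner, select_action, update, pull) are ours.

import numpy as np

class EXP3MetaLearner:
    """Sketch of the EXP3 meta-learner: sample a base learner from p_t,
    play it, and update importance-weighted cumulative reward estimates."""

    def __init__(self, base_learners, horizon, gamma=None):
        self.base = base_learners
        self.M = len(base_learners)
        # Learning rate and forced exploration as in the text.
        self.eta = np.sqrt(np.log(self.M) / (self.M * horizon))
        self.gamma = 0.1 / np.sqrt(horizon) if gamma is None else gamma
        self.R = np.zeros(self.M)  # importance-weighted cumulative reward estimates

    def _distribution(self):
        w = np.exp(self.eta * (self.R - self.R.max()))  # max-shift for numerical stability
        return (1.0 - self.gamma) * w / w.sum() + self.gamma / self.M

    def step(self, context, pull):
        """One round; `pull(action)` returns the observed reward in [0, 1]."""
        p = self._distribution()
        i = np.random.choice(self.M, p=p)
        action = self.base[i].select_action(context)
        r = pull(action)
        self.base[i].update(context, action, r)  # stochastic update with the raw reward
        self.R[i] += r / p[i]                    # importance-weighted estimate
        return i, action, r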
Greedy. This is a pure exploitation meta-learner. After playing each base learner at least once, the Greedy meta-algorithm maintains the same cumulative reward statistics {u^i_t }_i ∈ [M] as D^3RB and ED^2RB. The base learner i_t chosen at time t is i_t = argmax_{i∈ [M]} u^i_t/n_t^i.
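A minimal Python sketch of the Greedy selection rule (ours; it assumes each base learner has already been played once, so n^i_t > 0 for all i):

import numpy as np

def greedy_choice(u, n):
    """Pick the base learner with the highest empirical average reward u^i_t / n^i_t."""
    u = np.asarray(u, dtype=float)
    n = np.asarray(n, dtype=float)
    return int(np.argmax(u / n))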
UCB. We use the same UCB algorithm as described in sec:running_example. We set the scaling parameter c = 1.
D^3RB and ED^2RB. These are the algorithms in Algorithm <ref> and <ref>. We set therein c = 1 and d_min = 1.
§.§ Base Learners
All base learners have essentially been described already, except for the Linear Thompson Sampling (LinTS) algorithm, which was used in all our linear experiments.
In our implementation we use the algorithm described as in <cit.>. On round t the Linear Thompson Sampling algorithm has played x_1, ⋯ x_t-1⊂ℝ^d with observed responses r_1, ⋯, r_t-1. The rewards are assumed to be of the form r_ℓ = x_ℓ^⊤θ_⋆ + ξ_t for an unknown vector θ_⋆ and a conditionally zero mean random variable ξ_t. An empirical model of the unknown vector θ_⋆ is produced by fitting a ridge regression least squares estimator θ_t = _θλθ^2 + ∑_ℓ=1^t-1 ( x_ℓ^⊤θ - r_ℓ)^2 for a user specified parameter λ > 0. This can be written in closed form as θ_t = ( 𝐗^⊤𝐗 + λ𝕀)^-1𝐗^⊤ y where 𝐗∈ℝ^t-1× d matrix where row ℓ equals x_ℓ. At time t a sample model is computed θ_t= θ_t + c √(d)( 𝐗^⊤𝐗 + λ𝕀)^-1/2η_t where η_t ∼𝒩(0, 𝕀) and c > 0 is a confidence scaling parameter. This is one of the parameters that we vary in our experiments. If the action set at time t equals 𝒜_t (in the contextual setting 𝒜_t changes every time-step while in the fixed action set linear bandits case it ) the action x_t = _ x ∈𝒜_t x_t^⊤θ_t. In our experiments λ = 1 and θ_⋆ is set to a scaled version of the vector (0, ⋯, d-1). In the detailed experiment description below we specify the precise value of θ_⋆ in each experiment.
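The LinTS step described above can be sketched as follows. This is an illustrative Python sketch under the stated linear model; the class and method names are ours and it is not the experimental code.

import numpy as np

class LinTS:
    """Sketch of Linear Thompson Sampling: ridge estimate plus a Gaussian perturbation."""

    def __init__(self, d, lam=1.0, conf_scale=1.0):
        self.d = d
        self.c = conf_scale            # confidence scaling parameter c
        self.V = lam * np.eye(d)       # X^T X + lam * I
        self.b = np.zeros(d)           # X^T y

    def select_action(self, action_set):
        """`action_set` is an array of shape (num_actions, d)."""
        V_inv = np.linalg.inv(self.V)
        theta_hat = V_inv @ self.b     # ridge regression estimate
        # Sample model: theta_hat + c * sqrt(d) * V^{-1/2} eta with eta ~ N(0, I);
        # a Cholesky factor of V_inv gives a perturbation with the same covariance.
        L = np.linalg.cholesky(V_inv)
        theta_tilde = theta_hat + self.c * np.sqrt(self.d) * (L @ np.random.randn(self.d))
        return int(np.argmax(action_set @ theta_tilde))

    def update(self, x, r):
        """Update the sufficient statistics with the played action x and reward r."""
        x = np.asarray(x, dtype=float)
        self.V += np.outer(x, x)
        self.b += r * x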
§.§ Detailed Experiments Description
Figure <ref> illustrates the overall structure of our experiments. Experiments 1 through 6 are those also reported in the main body of the paper. The table below contains a detailed description of each experiment, together with the associated evidence in the form of learning curves (regret scale vs. rounds). Finally, Table <ref> contains the final (average) cumulative regret for each meta-learner on each experiment.
| http://arxiv.org/abs/2306.10141v1 | 20230616185400 | Modave Lecture Notes on de Sitter Space & Holography | [ "Damian A. Galante" ] | hep-th | [ "hep-th", "gr-qc" ] |
Modave Lecture Notes on de Sitter Space & Holography

Damián A. Galante
King's College London, the Strand, London WC2R 2LS, UK
[email protected]

These lecture notes provide an overview of different aspects of de Sitter space and their plausible holographic interpretations. We start with a general description of the classical spacetime. We note the existence of a cosmological horizon and its associated thermodynamic quantities, such as the Gibbons-Hawking entropy. We discuss geodesics and shockwave solutions, that might play a role in a holographic description of de Sitter. Finally, we discuss different approaches to quantum theories of de Sitter space, with an emphasis on recent developments in static patch holography.

Please send comments, typos or corrections to my email address.

Modave Summer School in Mathematical Physics (Modave2022), 5-9 September, 2022, Modave, Belgium
§ FOREWORD
These notes are an extended version of the lectures I gave in the XVIII Modave Summer School in Mathematical Physics in September, 2022. They are oriented primarily to PhD students in theoretical physics, who do not necessarily work on gravity or holography.
I was initially asked to talk about “holography in de Sitter space". However, as you can see from the title, the topic has been slightly changed. Despite many recent developments in understanding quantum features of de Sitter (dS) space, we still lack a full framework. Most of these lectures are devoted to explain the reason for this. In that endeavour, I decided to focus on certain peculiar features of de Sitter space and contrast them with their analogous anti-de Sitter and/or black hole versions.
The lectures are divided into six chapters. The first one is mostly introductory and motivational, summarising the vast experimental evidence we have to date of two cosmological periods of accelerated expansion. The second chapter provides an overview of the geometry of dS space at a classical level. The third one deals with thermodynamic properties of the cosmological horizon. The fourth and fifth study two different probes that have been very useful in the context of holography in Anti-de Sitter (AdS): geodesics and shockwave solutions, respectively.
The expert reader can probably skip the first five chapters and move directly to the last one, where I intend to summarise recent developments and proposals for dS holography. The sixth chapter starts by reviewing quantum field theory in a fixed dS background and the dS/CFT correspondence. I then focus on recent ideas regarding static patch holography, including the stretched horizon, a discussion on the role of timelike boundaries, the TT̅ + Λ_2 construction and dS holography in two dimensions. I tried to compile a comprehensive list of recent references on these subjects. But this, of course, can only be a partial selection of topics and references related to quantum aspects of dS space. Other very interesting ones, such as, for instance, inflation, the wavefunction of the Universe and infrared divergences are not discussed here.
During the actual lectures in Modave, I spent considerable time discussing scalar field theory in a fixed dS background. I shortened that discussion in the present notes, considering this has already been properly reviewed in other places. See, for instance, <cit.> for excellent reviews on the subject. For classical aspects of the geometry <cit.> provides a nice overview, while <cit.> provides a summary of quantum problems involving the cosmological horizon. There are certainly many other useful reviews on the subject.
While, of course, these notes may overlap with some of the other reviews at different points, my intention is to provide an updated look into the subject. Special emphasis is given to tools and features that have recently been particularly successful in the context of AdS holography. Hopefully, these will also play some role in a modern understanding of the quantum nature of de Sitter space.
Hope you enjoy!
§ INTRODUCTION TO DE SITTER SPACE
As mathematical physicists, we probably do not need much of a motivation to study either de Sitter or Anti-de Sitter spaces. Together with flat space, these are the three maximally symmetric spacetimes with positive, negative or zero cosmological constant Λ, and as such, they provide a rich mathematical structure to discuss both classical and quantum foundational issues in gravity. In fact, textbooks like <cit.> already provide a basic treatment of both (A)dS.
However, the role of both (A)dS changed dramatically, and for different reasons, in 1998. On the theoretical side, the first concrete realisation of a holographic picture in gravity was derived for asymptotically AdS spacetimes <cit.>. On the experimental side, astrophysical observations of supernovae <cit.> showed that the Universe is currently expanding at an accelerated rate, indicating that our cosmological constant is small, but positive.
Observational evidence. We also have evidence that our Universe underwent a period of accelerated expansion at the very beginning of time. Experiments like COBE detected a mostly isotropic Cosmic Microwave Background (CMB) in the form of black body radiation at a temperature of T=2.73K, with relative fluctuations of the order of 10^-5. But it was only in 2003 that the WMAP experiment managed to measure a nearly scale-invariant spectrum, that was consistent with cosmic inflation models. This provided supporting evidence for this theory of accelerated expansion during the first instances of the Universe. See, for instance, <cit.>. The Planck satellite even improved this measurement in 2013, see figure <ref>.
Apart from the inflationary era, we now have at least three different types of experiments supporting a current cosmological era of accelerated expansion. The first one is the already mentioned measurement of supernovae (SNe). The data from 1998 has been updated with more than 500 observations from 19 different datasets that show that large redshift supernovae appear farther away than they should if there were no accelerated expansion <cit.>. See figure <ref>. The data from the CMB can also be used to constrain the value of the cosmological constant. And finally, there are measurements from Baryon Acoustic Oscillations (BAO). BAOs are fluctuations in the density of baryonic matter that can be seen in our Universe and were caused by acoustic waves in the primordial plasma of the early Universe. They were first observed by the Sloan Digital Sky Survey <cit.> and the 2dF Galaxy Redshift Survey <cit.> in 2005 and, by comparing with CMB data, they provide another measurement of the cosmological constant. None of these three alone provides definite certainty of a positive cosmological constant, but combined they constitute fairly convincing evidence that we currently live in a spatially flat Universe dominated by a positive cosmological constant with Ω_Λ∼ 0.7 <cit.>, see figure <ref>.
A connection to holography. Even if our Universe is not exactly dS at the moment, cosmic no hair theorems <cit.> predict that in (most) cosmological scenarios, as time evolves, all other matter and energy content in the Universe will dilute and we will be asymptotically approaching a locally dS geometry. Current measures give a very small value of the cosmological constant,
Λ∼ 10^-52 m^-2∼ 10^-122ℓ_P^-2 ,
where ℓ_P is the Planck length. This famously differs from the theoretical effective field theory expectation by around 122 orders of magnitude <cit.>, and constitutes what is known as the cosmological constant problem. Understanding quantum features of spacetime might provide a way of addressing this discrepancy. One complication is that it has been hard to realise universes with positive cosmological constant in string theory <cit.>.
However, as we will review in section <ref>, observers in an ever-accelerating spacetime are surrounded by cosmological event horizons. It has been proposed that the cosmological horizon in four-dimensional dS carries an entropy given by <cit.>
S = 3π/Λ k_B c^3/ħG_N ,
that if interpreted in the statistical mechanics sense, would bind the value of the cosmological constant to the microscopic structure of the Universe. Moreover, the fact that this formula is an area entropy formula (as in the black hole case) also points towards a holographic principle for this type of spacetime. In fact, the traditional arguments towards the realisation of holography in gravity do not rely on the value or sign of the cosmological constant <cit.>. A holographic description of dS space is then highly desirable. But let us start from the very beginning.
§ THE BASICS OF DE SITTER SPACE
We will first review the classical geometry of dS spacetime. Most of this section can also be read in, for instance, <cit.>. The easiest way to visualise de Sitter (dS) space is through the embedding picture. Consider Minkowski spacetime in (d+1) dimensions, ℳ^d+1. In our conventions, the Minkowski metric is given by
ds^2_ℳ^d+1 = -dX_0^2 + dX_1^2 + ⋯ + dX_d^2 .
Then dS space in d dimensions is realised as the following hypersurface embedded in ℳ^d+1,
-X_0^2 + X_1^2 + ⋯+ X_d^2 = ℓ^2 ,
where ℓ is called the curvature scale or the dS radius. It is easy to see that this equation defines a hyperboloid in ℳ^d+1, shown in figure <ref>. It is also straightforward to realise that the set of coordinate transformations that leave (<ref>) unchanged is given by the SO(d,1) group, which is then the group of isometries of dS space in d dimensions. Note that this is the Euclidean conformal group in (d-1) dimensions.
De Sitter space is the maximally symmetric Einstein manifold with positive curvature, so it satisfies
R_μν - 1/2 R g_μν + Λ g_μν = 0 , with Λ = (d-2)(d-1)/2ℓ^2 ,
where Λ is a positive cosmological constant and d>2. In d=2 dilaton-gravity theories, the cosmological constant is usually set to Λ = ℓ^-2. For most of these lectures we will work in general spacetime dimension d. Though the case of d=4 is, of course, the most relevant to our Universe, we will sometimes go to d=2,3 for simplicity.
Box: Anti-de Sitter space.
Anti-de Sitter (AdS) can also be viewed as a hyperboloid embedded in a higher dimensional manifold, but a different one. AdS in d dimensions is given by
-X_0^2 -X_d^2 + X_1^2 + ⋯ + X_d-1^2 = - R^2 ,
where now R is the AdS radius and this surface is embedded in ℝ^2,d-1, ds^2 = -dX_0^2 -dX_d^2 +dX_1^2 + ⋯ + dX_d-1^2. Famously, the AdS isometries form the group SO(d-1,2), that is the conformal group in (d-1) dimensions and the first hint towards the AdS/CFT correspondence.
In what follows, we will describe dS with different coordinate systems that cover different parts of the whole hyperboloid.
§.§ Coordinate systems
For now, we will set ℓ = 1. It will be convenient to define coordinates ω^i on the unit sphere S^d-1, such that ∑_i=1^d (ω^i)^2 = 1. When in d=2,3 we consider a spatial circle, we choose an angular coordinate φ∈ (0,2π] to parameterise it.
§.§.§ Global coordinates
The first coordinates that we will study cover the full hyperboloid and are called global coordinates, {τ, ω^i}. The coordinate τ is usually called the global time and the embedding is given by
X^0 = sinhτ ,
X^i = coshτ ω^i .
It is straightforward to verify that this satisfies (<ref>) with ℓ=1. Plugging this into (<ref>), we obtain the induced metric on the hyperboloid, or the global dS metric,
ds^2 = -dτ^2 + cosh^2 τ dΩ_d-1^2 ,
where dΩ_d-1^2 is the metric on the S^d-1 and τ∈ [-∞, ∞]. Constant time slices are then compact. For τ>0, this is the typical picture of a closed Universe whose size is expanding exponentially as time evolves forward. The minimal size of the sphere is at τ = 0, where the radius of the sphere is one (in units of the dS radius). Note that this metric depends explicitly on the global time; dS does not have a global timelike Killing vector. We will explore some consequences of this later on.
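As a quick check (our own, written here in LaTeX form), the embedding (<ref>) indeed induces the metric (<ref>):
\begin{align*}
dX^0 &= \cosh\tau\, d\tau\,, \qquad dX^i = \sinh\tau\,\omega^i\, d\tau + \cosh\tau\, d\omega^i\,,\\
ds^2 &= -\cosh^2\tau\, d\tau^2 + \sinh^2\tau\, d\tau^2 \sum_i (\omega^i)^2 + 2\sinh\tau\cosh\tau\, d\tau \sum_i \omega^i d\omega^i + \cosh^2\tau \sum_i (d\omega^i)^2 \\
&= -d\tau^2 + \cosh^2\tau\, d\Omega_{d-1}^2\,,
\end{align*}
using that \sum_i(\omega^i)^2 = 1 implies \sum_i \omega^i d\omega^i = 0, and \sum_i (d\omega^i)^2 = d\Omega_{d-1}^2.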
§.§.§ Conformal coordinates and Penrose diagram
To look at the causal structure of dS space, it is convenient to define a conformal time T such that cos T = 1/cosh τ. From here, it follows that -π/2 ≤ T ≤π/2 and the metric becomes,
ds^2 = 1/cos^2 T( -dT^2 + dθ^2 + cos^2 θ dΩ_d-2^2 ) .
This metric is useful to find the Penrose diagram as null rays in the θ-direction travel at 45 degrees angles. The coordinate θ now spans from -π/2 to π/2. In the Penrose diagram, we only draw the T and θ coordinates, so it is quite clear that the Penrose diagram of dS space becomes a square, as appears in figure <ref>.
Each horizontal line in the diagram (in blue) corresponds to a (d-1) sphere, whose radius is given by cos^-1T. Each point in each line corresponds to a (d-2) sphere with the exception of both vertical edges, which correspond to θ = ±π/2, and thus are not spheres, but single points. We usually call those points the North and South pole of the sphere, and we like to think about inertial observers sitting at those points.
All null rays start (and end) at the infinite past (future) of the dS space, that we call ℐ^- (ℐ^+). These conformal boundaries are the lower and upper borders of the Penrose diagram (in green) and correspond to slices of infinite size. Observers take infinite proper time to reach ℐ^+.
Box: Double-sided AdS black hole.
Note that the dS Penrose diagram is identical to the Penrose diagram of the double sided AdS_d black hole (at least for d=2,3), whose metric in d=3 is given by
ds^2 = -(r^2 - r_h^2) dt^2 + dr^2/(r^2-r_h^2) + r^2 dφ^2 .
The black hole horizon is at r = r_h. However, the two spacetimes are quite different. In particular, if we look at the size of the circle in the AdS case, it grows towards the boundary at r→∞, while in the dS case, it shrinks to zero size. Sometimes people like to include small arrows in the direction of growth of the compact sphere to distinguish the two diagrams. The double-sided AdS black hole plays a prominent role in the AdS/CFT correspondence as it is identified as dual to the thermofield double state in the boundary theory <cit.>.
§.§.§ Static coordinates
A very important aspect of dS space is that no single observer has access to the full spacetime. This is clear by just looking at the Penrose diagram. For instance, a lightray emerging from the North pole at ℐ^-, will only reach the South pole in the infinite future ℐ^+.
An important set of coordinates are those that describe the region accessible to a single observer. This is the intersection between the region of space that can affect the observer and the region that can be affected by them, see figure <ref>.
In terms of embedding coordinates, it is described by
X^0 = √(1-r^2)sinh t ,
X^i = r ω^i ,
X^d = √(1-r^2)cosh t ,
where i= 1, ⋯, d-1. The resulting metric is the so-called static patch metric,
ds^2 = - (1-r^2) dt^2 + dr^2/1-r^2 + r^2 dΩ_d-2^2 ,
where r ∈ [0,1]. There are a number of important observations about this metric:
* As the name suggests, this is a static metric; there is no explicit time dependence on the metric. Thus, ∂_t is a timelike Killing vector.
* At r=1 (ℓ, if we reinsert the dS length) the norm of the timelike Killing vector vanishes. The r=1 surface is a null surface that surrounds the observer at all times. This is what we call the cosmological event horizon.
* We can continue the coordinates for r > 1, where the Killing vector becomes spacelike. The situation is similar to an inside out black hole.
It is important to note that the existence of this cosmological horizon is a direct consequence of the accelerated expansion of dS space and the finite propagation of the speed of light. There are no singularities or matter in empty dS space but there is still a cosmological horizon. One of the aims of these lectures is to discuss similarities and differences with the usual black hole horizon.
Note that every inertial observer in dS is surrounded by a cosmological horizon. In this sense, it is said that the cosmological horizon is observer dependent, as opposed to the black hole case. It is also the case that observers cannot get rid of their cosmological horizon, making semiclassical processes such as the horizon evaporation extremely subtle <cit.>.
Another difference with the black hole horizon is that the region inside the static patch remains always finite. Comparing (<ref>) to (<ref>), we see that the size of the compact space grows to infinite size in the black hole case, while in the static patch geometry it never gets larger than the dS length. As we will discuss later, this imposes severe constraints on how to define observables in the static patch, as usually in gravity we make use of an asymptotic boundary (such as the null boundary of asymptotically flat spacetimes or the timelike boundary of asymptotically AdS spacetimes).
§.§.§ Other coordinates
There are other sets of coordinates that are useful for different purposes. Here, we just point out some other coordinates that will appear throughout these lecture notes, and refer the reader to <cit.> in order to obtain them from the embedding picture.
First, we introduce the planar coordinate system, that covers half of the Penrose diagram, as shown in figure <ref>, and is given by
ds^2 = -dη^2 + dx_d-1^2/η^2 ,
where η is usually called the conformal time. This is usually the preferred frame for the computation of cosmological correlators <cit.>. Note the similarities with the AdS Poincaré patch. In fact, one can obtain the Euclidean AdS Poincaré patch by analytic continuation of the conformal time and the dS length. This can be useful to relate certain computations in Euclidean AdS to dS. See, for instance, <cit.>.
Another set of coordinates, which will be useful to study shockwaves later on, are the Kruskal coordinates. These cover the full Penrose diagram and the metric is given by
ds^2 = 1/(1- UV)^2(-4 dUdV + (1+UV)^2 dΩ_d-2^2 ) ,
where U and V are null coordinates with UV ∈ [-1,1]. The horizons are at UV = 0, past and future infinities are at UV = 1 and the North and South poles are at UV = -1.
To finish this section, we present the dS/dS patch, where we foliate dS_d space with dS_d-1 slices. This is useful for a proposal regarding dS holography that we will briefly discuss in the last chapter. For now, the metric is given by,
ds^2 = dω̃^2 + sin^2 ω̃( -dτ̃^2 + cosh^2 τ̃ dΩ_d-2^2 ) ,
where ω̃∈ [0,π] and τ̃∈ℝ. This set of coordinates covers the central diamond of the Penrose diagram, as shown in figure <ref>, ending at horizons for ω̃ = 0,π.
§.§ Euclidean de Sitter space - the sphere
In the next chapter, we will study some properties of Euclidean de Sitter space. Consider dS space in global coordinates. Analytically continuing τ→ - i τ_E takes the metric in (<ref>) to
ds^2 = dτ_E^2 + cos^2 τ_E dΩ_d-1^2 ,
which is the round metric on the d-dimensional sphere, S^d. It is interesting to note that the analytic continuation of the static patch time, t → - i t_E in (<ref>), also takes you to S^d, in a different foliation,
ds^2 = (1-r^2) dt_E^2 + dr^2/1-r^2 + r^2 dΩ_d-2^2 .
The same is true for the dS/dS patch upon taking τ̃→ -i τ̃_̃Ẽ, so the sphere plays a predominant role when using Euclidean techniques to study dS space.
§.§ Schwarzschild de Sitter black holes
There are also black hole solutions to the Einstein equations with positive cosmological constant. The simplest of them is the Schwarzschild de Sitter (SdS) black hole, that in 4 dimensions is given by the metric element
ds^2 = - (1-r^2 - 2M/r) dt^2 + dr^2/(1-r^2 - 2M/r) + r^2 dΩ_2^2 .
The Penrose diagram for the SdS space can be seen in figure <ref>.
The SdS geometry has horizons when the g_tt component vanishes, which requires solving the cubic equation
1 - r^2 - 2M/r = 0 .
Notice that this equation only has positive real roots for certain values of M. For M=0, of course, we recover the cosmological horizon of pure dS space. As we increase the value of M, we find two positive real solutions that correspond to a cosmological (r_c) and a black hole (r_bh) horizon. The expressions for r_c and r_bh can be found analytically and are shown in figure <ref>. Note that r_c ≥ r_bh. As M increases both solutions start getting closer to each other, up until M= M_max≡ 3^-3/2. The two horizons now have the same radius, r_c = r_bh = 1/√(3)≡ r_max. For larger values of M, there are no positive real solutions to (<ref>).
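For completeness, here is the short algebra (ours, in LaTeX form) behind the maximal mass: the two horizons coincide when f(r) = 1 - r^2 - 2M/r has a double root, i.e. f(r) = f'(r) = 0,
\begin{equation*}
f'(r) = -2r + \frac{2M}{r^2} = 0 \;\Rightarrow\; M = r^3\,, \qquad
1 - r^2 - 2r^2 = 0 \;\Rightarrow\; r_{\rm max} = \frac{1}{\sqrt{3}}\,, \quad M_{\rm max} = r_{\rm max}^3 = 3^{-3/2}\,.
\end{equation*}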
This is yet another important difference with respect to Schwarzschild black holes with Λ≤ 0, where solutions exist for all values of the mass; in dS, it is impossible to have arbitrarily large values of M.
§.§ The Nariai limit and dS_2
An interesting phenomenon occurs as r_bh goes to r_c. This is known as the Nariai limit (the solution with M = M_max is also known as the Nariai black hole).
Consider some value of M that is close to M_max, so that the positive roots of (<ref>) are r_c = r_max + ε, r_bh = r_max - ε, for some small ε≪ 1. Now we can expand the metric in between the horizons by choosing coordinates {τ, ρ} such that
r = r_max + ερ , t = τ/ε .
To get the near horizon geometry, we expand (<ref>) to leading order in ε to obtain,
ds^2 = - 3 (1-ρ^2) dτ^2 + dρ^2/3(1-ρ^2) + 1/3 dΩ_2^2 ,
which is the metric of dS_2 × S^2, with fixed radius ℓ = r_max. Note that in this coordinate system, ρ spans from -1 to 1, where dS_2 has two horizons. In this sense, dS_2 is closer to the higher dimensional SdS geometry than to higher dimensional pure dS. In fact, the Penrose diagram of dS_2 looks like the SdS Penrose diagram, as can be seen in figure <ref>.
It is interesting to note that the Nariai solution is actually an exact solution to the Einstein equations with a positive cosmological constant in four dimensions. Another interesting fact is that upon Wick rotating time, the topology of the solution changes with respect to the pure dS solution from S^4 → S^2 × S^2.
Finally, it is worth mentioning that this near-horizon geometry is reminiscent to the near-horizon geometry of extremal (asymptotically flat or AdS) black holes that have an AdS_2 × S^2 throat. The difference is that in the dS case, we do not need charge or angular momentum for this near horizon limit to exist.
§ THERMODYNAMICS OF DE SITTER SPACE
Now that we know that observers in dS are surrounded by cosmological event horizons, we can study some of the properties of these horizons, pretty much in the same way as we do with black hole event horizons. This study was started in 1977 by Gibbons and Hawking in two seminal papers <cit.>. Following those ideas, in this chapter we will find out that cosmological horizons have a temperature, an entropy obeying an area law and a particular first law of thermodynamics.
§.§ The de Sitter temperature
As in the case of black holes, there are many different ways to find the temperature of the cosmological horizon. It can be shown that an observer with a detector in the static patch will observe a background of thermal radiation coming from the cosmological horizon. This is nicely reviewed in <cit.>. Here, we stick to a purely geometrical argument in which we show that in order for the static patch geometry to be smooth in Euclidean signature, the Euclidean time needs to be periodic with a period that we identify with the inverse temperature.
Recall the metric of the static patch. Reinserting the dependence on the dS radius and taking t → - i t_E, we obtain
ds^2 = ( 1 - r^2/ℓ^2) dt_E^2 + dr^2/( 1 - r^2/ℓ^2) + r^2 dΩ_d-2^2 .
The idea is to look at this metric close to the horizon radius. For this, we change coordinates to ε = ℓ - r. To leading order in ε/ℓ, the metric (<ref>) becomes
ds^2 ≈2 ε/ℓ dt_E^2 + dε^2/2ε/ℓ + ℓ^2 dΩ_d-2^2 .
Further considering the following coordinate change,
R ≡√(2 εℓ) , Θ≡ t_E/ℓ ,
leaves the metric as
ds^2 = R^2 dΘ^2 +dR^2 + ℓ^2 dΩ_d-2^2 .
The first two components of the metric look exactly like flat space in polar coordinates, but for this to be completely true, Θ needs to have the right periodicity to avoid a conical singularity at the origin. Thus, we need Θ∼Θ + 2π, which leaves us with t_E ∼ t_E + 2πℓ. As in statistical mechanics, we identify the period of imaginary time with the inverse temperature, so we find that
T_dS = 1/2πℓ .
We then reach the conclusion that, as black holes, cosmological horizons also have a temperature associated to them. Note, however, that this temperature is fixed by the dS length, so it cannot be varied as in the case of black holes.
In dS, the observer cannot get rid of their horizon, so it has to be the case that the Hawking radiation of the horizon is in thermal equilibrium with its surrounding so that overall there is no evaporation. This is reminiscent of the case of eternal black holes in AdS.
Finally, considering the observed value of the cosmological constant in our own Universe, we can estimate the dS length to be around 16 billion lightyears, which will give an extremely low dS temperature of T_dS≈ 10^-30 K.
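As a rough numerical check (our own estimate, in LaTeX form, using ℓ of order 10^26 m), restoring units gives
\begin{equation*}
T_{dS} = \frac{\hbar c}{2\pi k_B \ell} \approx \frac{2.3\times 10^{-3}\,{\rm m\,K}}{2\pi \times 1.7\times 10^{26}\,{\rm m}} \approx 2\times 10^{-30}\,{\rm K}\,,
\end{equation*}
consistent with the figure quoted above.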
Box: Black hole temperature.
The result obtained is just a particular case of a more general family of f(r) metrics that includes the asymptotically flat and asymptotically AdS Schwarzschild black holes.
Suppose we start with a metric of the form
ds^2 = -f(r) dt^2 + dr^2/f(r) + r^2 dΩ_d-2^2 ,
where we assume that f(r_h)=0, and r_h is the largest simple root of f(r). Following an analogous procedure (Wick rotating time, expanding the metric close to the horizon and requiring smoothness of the geometry), we obtain that the temperature of the solution is given by
T_f(r) = |f'(r_h)|/4π ,
which is also consistent with (<ref>).
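As simple illustrations (ours, in LaTeX form, using only the formula above): for the static patch, f(r) = 1 - r^2/ℓ^2 with r_h = ℓ, while for the asymptotically flat Schwarzschild black hole, f(r) = 1 - 2M/r with r_h = 2M, so that
\begin{equation*}
T_{dS} = \frac{|f'(\ell)|}{4\pi} = \frac{2/\ell}{4\pi} = \frac{1}{2\pi\ell}\,, \qquad
T_{\rm Schw} = \frac{|f'(2M)|}{4\pi} = \frac{2M/(2M)^2}{4\pi} = \frac{1}{8\pi M}\,,
\end{equation*}
reproducing (<ref>) and the standard Hawking temperature, respectively.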
§.§ The de Sitter entropy
As with the temperature, there are different ways of computing the entropy of the cosmological horizon. In this section, we reproduce the Euclidean path integral computation, as was proposed in the second of the Gibbons-Hawking papers <cit.>. We will set d=4 and consider the Euclidean Einstein-Hilbert action with a positive cosmological constant,
I_E[g] = -1/16π G_N∫ d^4x √(g)(R-2Λ) ,
where G_N is the gravitational Newton constant. The proposal of Gibbons and Hawking is to consider the following path integral
Z = ∫ Dg exp (-I_E[g]) .
In principle, matter fields could be added to this path integral, but in these notes we will restrict ourselves to the purely gravitational case. See, for instance, <cit.> for more general cases. The path integral, in principle, should be taken over all compact smooth geometries. As we will soon see, the round metric on the 4-sphere S^4 is a saddle-point solution. But we expect the path integral to be also summing over geometries with different topologies such as for instance, the Nariai solution, that has topology S^2× S^2 in Euclidean signature, see section <ref>.
The fact that we consider solutions with no boundaries is noteworthy. It is again quite different from what happens in the black hole case, where we usually consider the path integral as a function of the boundary thermal circle size β, which we interpret as the inverse temperature. See box <ref> and Appendix <ref>.
Before continuing, we should point out that this object needs to be treated with some care as it is well-known that the gravitational path integral is somehow pathological. Gravity is a non-renormalisable theory and, among others, there is the problem of the metric conformal mode being unbounded <cit.>. Of course these issues must be addressed in a full theory of quantum gravity.
For now, what we are going to do is to take a saddle-point approximation to the path integral, by taking G_N → 0. The dominant saddle is the round metric on S^4. The only parameter left to fix is its radius ℓ_0, that we need to find so as to extremise the on-shell action.
Once again, we can consider the metric (<ref>), for which the Ricci scalar can be computed as R = 12/ℓ^2. Then the Euclidean action as a function of ℓ is given by
I_E[ℓ] = -1/16π G_N∫_0^β dt_E ∫_0^ℓ dr ∫ dΩ_2 √(g)(R-2Λ) = -π/6G_N (12ℓ^2 - 2 Λℓ^4) .
Extremising the action with respect to the radius gives
∂_ℓ I_E[ℓ] |_ℓ=ℓ_0 = 0 → 24 ℓ_0 - 8 Λℓ_0^3 = 0 →ℓ_0^2 = 3/Λ .
So here we recover again the result exhibited in the first chapter, see (<ref>) for d=4. Now plugging in the on-shell value for the cosmological constant, we find that
I_E[ℓ_0] = - πℓ_0^2/G_N = - A_H/4G_N ,
where A_H is the area of the cosmological horizon. Usually, we interpret I_E as F/T, where F is the free energy of the system. But we know using thermodynamic relations that F/T = E/T - S, where now E and S are the energy and the entropy. However, as pointed out by Gibbons and Hawking, in dS there cannot be any energy, as energy is a boundary term in General Relativity and there is no boundary to define it. In this sense, it looks like we are computing a microcanonical partition function, as we are fixing the energy to be zero. And so, basically, what we computed is just minus what is called the Gibbons-Hawking entropy,
S_GH = A_H/4G_N .
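As a one-line check (ours, in LaTeX form) of the on-shell value quoted above, plugging Λ = 3/ℓ_0^2 back into I_E[ℓ] gives
\begin{equation*}
I_E[\ell_0] = -\frac{\pi}{6 G_N}\left(12\ell_0^2 - 2\cdot\frac{3}{\ell_0^2}\,\ell_0^4\right) = -\frac{\pi}{6G_N}\, 6\ell_0^2 = -\frac{\pi \ell_0^2}{G_N} = -\frac{A_H}{4G_N}\,,
\end{equation*}
since A_H = 4πℓ_0^2 for the four-dimensional cosmological horizon.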
Understanding the microscopic origin (if there is one) of this formula and its quantum corrections might be one of the keys to understand holography in dS spacetime. In fact, it was probably the analogous formula for black holes which served as one of the main motivations towards what we now know as the AdS/CFT correspondence <cit.>. A few comments about this formula:
* First, if we reinsert all the natural constants, the formula for the dS entropy in four dimensions becomes the one presented in (<ref>),
S_GH = 3π/Λk_B c^3/ħ G_N ,
which relates the entropy of the cosmological horizon to all the fundamental constants in nature, linking this object to statistical mechanics (through Boltzmann constant k_B), relativity (through the speed of light c), quantum mechanics (through ħ), gravity (G_N) and cosmology (Λ). The dS temperature, for instance, is similar, but it does not include gravity as it does not have G_N (nor k_B).
* One of the big open questions is whether this quantity is really an entropy. If it is, according to Boltzmann, it should be counting a number of microstates. But, what is this formula counting? This is even more mysterious than in the black hole case, because the cosmological horizon is observer dependent and each observer has their own horizon with their own entropy. So what are we counting? Can it be an entanglement entropy <cit.>?
* It is huge. If we plug our current value for the cosmological constant (and the other constants in nature that we usually set to 1), we obtain that S_GH∼ 10^122 k_B. The entropy of all the matter and energy content in our visible Universe – that is dominated by the masses of supermassive black holes – is estimated to be of order S_Universe∼ 10^104 k_B <cit.>. The dS entropy is still much larger, maybe incorporating the entropy of spacetime itself.
* Finally, it is interesting to compare it with the entropy of the dS black hole that we studied in section <ref>. In that case, the area is proportional to the horizon radius squared, and so it follows from figure <ref> that it increases until reaching the largest value that corresponds to the Nariai black hole. As previously mentioned, the horizon radius of the Nariai black hole is r_Nariai = ℓ/√(3), which gives an entropy of
S_Nariai = πℓ^2/3 G_N = S_GH/3 < S_GH .
Note that even if one considers the total entropy of SdS as the sum of the areas of both the cosmological and the black hole horizons, this is still less than the entropy of empty dS. For instance, for the Nariai case, this will give 2 S_Nariai = 2S_GH/3 < S_GH. So yet another interesting feature of dS is that the empty dS solution is the most entropic configuration among black hole solutions with positive Λ.
Box: Black hole entropy.
We can also repeat the Euclidean path integral computation to obtain the entropy formula for black holes, that is also famously given by an area formula,
S_BH = A_H/4G_N .
But even if the final result is the same, in the black hole case the computation is more subtle and, in fact, the entropy comes from a boundary term in the action. Consider the Euclidean gravitational action (with no cosmological constant),
I_E = - 1/16π G_N∫ d^4x √(g) R - 1/8π G_N∫_r=r_0 d^3x √(h) K ,
where h is the induced metric at the boundary and K is the trace of the extrinsic curvature.[A useful compendium of formulas to find formal definitions of all these terms can be found in <cit.>.] The second term is called the Gibbons-Hawking-York (GHY) boundary term and, in manifolds with boundaries, is needed for the variational principle to be well-defined <cit.>.
One saddle-point solution to this action is the Euclidean black hole metric,
ds^2 = (1- 2M/r) dt_E^2 + dr^2/(1- 2M/r) + r^2 dΩ_2^2 , r ∈ [r_h = 2M, ∞] .
As discussed, regularity at the horizon imposes that t_E ∼ t_E + 8π M. Given that for this solution R=0, the bulk term in the Einstein action vanishes on-shell. Instead, the main contributions to the gravitational path integral come from a boundary term fixed at a constant r=r_0 slice <cit.>. The details of the computation, that involve regularisation of the action, are shown in Appendix <ref>, but the final result is that the entropy is proportional to M^2, which gives the area law in (<ref>).
§.§ The first law for de Sitter space
An alternative derivation of the dS entropy can be obtained from a first law for the dS horizon, and it is analogous to the derivation of the entropy from the first law of black hole mechanics.
We want to study how the area of the cosmological horizon changes as we throw some infinitesimal energy dM into the cosmological horizon. As there is no spatial boundary in dS, we need to be cautious about what we mean by energy, but at least infinitesimally, we will assume that the change in the horizon area due to dropping that energy is the same as that of having a small M parameter in the Schwarzschild de Sitter black hole solution (<ref>). More elaborate arguments can be found in Gibbons and Hawking's original paper <cit.> and in <cit.>. We obtain that the variation in the horizon area is given by
d A_H |_A_H = 4πℓ^2 = - 8πℓ dM → d A_H/dM|_A_H = 4πℓ^2 = - 4/T_dS ,
where for the last equality we used the fact that the dS temperature is given by (<ref>). Assuming that the entropy is proportional to the area, we obtain a first law for the cosmological horizon,
dM = - T_dS dS_GH ,
if we fix the proportionality factor to 1/4, confirming (<ref>).
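The variation used above can be made explicit (our own small computation, in LaTeX form): reinstating ℓ in (<ref>), the cosmological horizon sits at the largest root of 1 - r^2/ℓ^2 - 2M/r = 0, so writing r_c = ℓ - δ and expanding to first order in M/ℓ,
\begin{equation*}
\frac{2\delta}{\ell} - \frac{2M}{\ell} \approx 0 \;\Rightarrow\; r_c \approx \ell - M\,, \qquad
A_H = 4\pi r_c^2 \approx 4\pi\ell^2 - 8\pi\ell M \;\Rightarrow\; dA_H \approx -8\pi\ell\, dM\,,
\end{equation*}
which is the variation quoted above.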
Note that compared to the usual first law of thermodynamics, there is a minus sign in (<ref>). As we discussed in section <ref>, this is a consequence of the fact that the cosmological horizon radius decreases in size as we include some mass parameter M. Then, as the mass crosses the event horizon, the entropy increases, maybe signaling that the observer now has less information about the interior of the cosmological horizon <cit.>.
As with the black hole case, it is possible to study quasi-local thermodynamics of the de Sitter horizon with respect to data at a York boundary. For the cosmological horizon, this was first studied in <cit.>, where in the latter it is shown that the first law (<ref>) is recovered when shrinking the size of the York boundary towards the observer's worldline. See <cit.> for a recent discussion. Timelike boundaries might play an important role in defining holography in de Sitter space, so we will come back to the discussion of this problem in section <ref>.
§ GEODESICS IN DE SITTER SPACE
In this chapter, we will study geodesics in de Sitter space. Geodesics are probably the simplest extended objects that appear in General Relativity and as such, they contain useful information about the spacetime itself.
They are also important at a semi-classical level, in the context of quantum fields in fixed curved backgrounds. In this case, the geodesic length computes the logarithm of the two-point function of heavy free massive scalar fields ϕ. Schematically,
⟨ϕ(X) ϕ(Y) ⟩ = ∫ DP e^-m L[P]≈∑_g e^-m L_g ,
where the path integral is over all possible paths connecting the two points X, Y and the last expression is the saddle point approximation as the mass m goes to infinity <cit.>. L_g is the geodesic length between X and Y and if there is more than one geodesic, in principle, we should sum over them.
Finally, to finish this introduction to geodesics, it is worth mentioning that extremal surfaces play an important role in the context of the AdS/CFT correspondence. For instance, co-dimension 2 surfaces are related to entanglement entropy through the Ryu-Takayanagi prescription and its generalisations <cit.> and co-dimension 1 extremal volumes are conjectured to compute quantum complexity <cit.>. Geodesics are co-dimension 2 objects in d=3, and co-dimension 1 in d=2.
Euclidean vs. Lorentzian geodesics. In Euclidean signature, there is always a minimal path between any two points in a smooth manifold. So, there is always, at least, a minimal length geodesic. However, in general, there is no maximal path, as the infinitesimal interval, ds^2, is always positive. In Lorentzian signature, ds^2 can have either sign (or even be null), so geodesics are not minimal anymore but locally extremal curves <cit.>.
As an example, consider two spacelike separated points. Assume there is a geodesic between the two points that has minimal length. But now consider deforming the curve infinitesimally with a null zig-zag trajectory around the candidate geodesic, see figure <ref>. As the zig-zag curve will have close to zero length, it will certainly have less length than the candidate geodesic and so, the candidate curve cannot be a geodesic. The formal way of saying this is that given the set of curves between two fixed points, the length (for spacelike geodesics) or the proper time (for timelike geodesics) are upper-semi continuous. This is nicely explained in Chapter 9 of <cit.>.
As a conclusion, in Lorentzian signature, if there are geodesics, they have locally maximal length. As we will see in de Sitter, it is also possible that some points are not connected at all by geodesics <cit.>.
§.§ Lorentzian geodesics in de Sitter space
Now we can set up our computation for geodesics in dS space. Most of this section is reviewed from <cit.>. For simplicity, we will consider d=2 and we will set ℓ=1. We will use global coordinates (<ref>), so that the length between any two fixed points is given by the following functional,
L = ∫ ds = ∫ dλℒ (τ, τ̇, φ, φ̇, λ) = ∫ dλ√(-τ̇^2 + φ̇^2 cosh^2 τ) ,
where φ is the angular coordinate around the spatial circle and the dot indicates derivative with respect to λ, that, for now, is some parameter along the curve. The endpoints of the curve are two arbitrary fixed points {τ_0, φ_0} and {τ_1, φ_1}, so we can always choose λ∈ [λ_0,λ_1], such that
{[ τ(λ=λ_0) = τ_0,; φ(λ = λ_0 ) = φ_0 , ]} and {[ τ(λ=λ_1) = τ_1,; φ(λ =λ_1 ) = φ_1 . ]}
Now the problem becomes a classical mechanics problem of extremising the length functional (<ref>). We start by noting that φ(λ) does not appear explicitly in the Lagrangian, so there is a conserved quantity Q associated to it,
Q ≡∂ℒ/∂φ̇ = φ̇cosh^2 τ(-τ̇^2 + φ̇^2cosh^2 τ)^-1/2 .
Since the length functional is invariant under reparametrisation, we may select λ such that it is an affine parameter,
ℒ^2=( ds/dλ)^2 = ± 1 = - τ̇^2 + φ̇^2 cosh^2 τ ,
where the ± depends on whether the geodesic is spacelike or timelike, respectively. Note that with this convention, spacelike geodesics will have real length and timelike geodesics, imaginary. We can use (<ref>) to write (<ref>) as the well-known equation for a particle in a potential at a fixed energy, with potential V(τ) = -Q^2/2cosh^-2τ. It is a simple exercise to find the trajectories for this problem, which are given by
tan (φ + φ̃) = Q sinhτ/√(Q^2 - cosh^2 τ) ,
where φ̃ and Q act as integration constants that will be determined upon fixing the endpoints of the geodesic. Note that this formula assumes that φ is a monotonic function of τ. If there are turning points, Q must change sign at those. Let's look at some examples.
Timelike separated points. Consider two spacetime points that are at a fixed spatial angle φ_* but separated in time, X = {τ_0, φ_* } and Y = {τ_1, φ_*}. Plugging these endpoints into (<ref>) gives Q=0, and a geodesic length given by L_g = i |τ_1 - τ_0|. Note that, as mentioned, timelike geodesics in this convention have imaginary lengths.
Spacelike separated points. Consider fixing the endpoints at opposite sides of the spatial circle, X = {τ_0, φ_*} and Y = {τ_1, φ_* + π}. The only smooth solution with this choice of points has a turning point and requires that τ_0 = -τ_1. Then, there exists a one-parameter family of geodesics with |Q|>coshτ_0. All of these geodesics have the same length, that is half the circumference of the sphere with unit radius, L_g = π. This can be also seen by noting that geodesics in the embedding picture can be obtained as intersections of the hyperboloid with a plane that contains the two points under consideration and the origin of the embedding spacetime.
For any other choice of times, there are no real geodesics between the two points. In particular, if we consider τ_0 = τ_1, then geodesics only exist if τ_0 = τ_1 = 0. See figure <ref>.
It is worth noting that a similar feature occurs in the double-sided eternal AdS black hole in dimensions higher than 3, where real spacelike geodesics anchored at opposite boundaries stop existing at a finite time, encoding some signatures from the black hole singularity <cit.>.
Box: Spacelike geodesics in AdS.
It is possible to do a completely analogous computation but now in the AdS black hole. In two dimensions, the computation is pretty straightforward, because the geodesics are those of global AdS_2 <cit.>. One difference with respect to the dS case is that the geodesic length diverges close to the conformal boundary, so it needs regularisation. But most importantly, if we fix the same times at each boundary, let's call it t, it can be shown that the geodesic length is given by
L_g^AdS (t) = 2 log( 2 R_b cosh t/2 ) + O(1/R_b^2) ≈ 2 log R_b + |t| + ⋯ ,
where R_b ≫ 1 is a large cutoff in units of the AdS scale. Note that this formula is valid for arbitrarily long times, where the length changes linearly with time, making it compatible with the holographic complexity prediction <cit.>. The geodesics are shown in figure <ref> where we contrast them with the dS ones.
Generic points. The computation can be further generalised to arbitrary points in dS_2. Consider two points X = {τ_0, φ_0 } and Y = {τ_1, φ_1 }. We do not attempt here to provide a full derivation, but it can be shown that whenever a geodesic exists, its length is given by
L_g = arccos( coshτ_0 coshτ_1 cos (φ_1 - φ_0) - sinhτ_0 sinhτ_1 ) .
But given a particular pair of points in dS, how do we know if geodesics exist at all? It turns out that a better diagnostic of the distance between two points in de Sitter is what is inside the argument of the arccos in (<ref>).
This quantity is well-defined for any two points in dS and it can be further generalised to higher dimensions. The easiest way to discuss it is in embedding space, see chapter <ref>, where it can be defined in a coordinate invariant way. We call this quantity P_X,Y, the de Sitter invariant length between two points X, Y and for d-dimensional dS space, it is defined as
P_X,Y≡η_IJ X^I Y^J , η_IJ = diag(-1, 1, …, 1)_d + 1 ,
where X^I and Y^J are the coordinates in (d+1)-Minkowski space of any two points in the dS hyperboloid. It is clear by looking at (<ref>) that P_X,Y is a real quantity that can go from -∞ to ∞. Depending on the relative position between the two points, we have that
P_X,Y >1 , for timelike separated points ;
P_X,Y =1 , for coincident or null separated points ;
P_X,Y <1 , for spacelike points ;
P_X,Y =-1 , when X is null separated from the antipodal point of Y ;
P_X,Y <-1 , when X is timelike separated from the antipodal point of Y .
To answer the question we started with, geodesics in dS_d only exist when P_X,Y≥ -1. Coming back to the example where we considered opposite points on the spatial circle at the same global time τ, we find that P_X,Y = - cosh(2τ), so it becomes manifest that real geodesics only exist at τ=0 (which gives L_g = arccos (-1) = π, as we found).
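As a consistency check (ours, in LaTeX form), one can evaluate P_X,Y for the two earlier examples using the d=2 embedding X = (sinh τ, cosh τ cos φ, cosh τ sin φ):
\begin{align*}
\text{timelike, } \varphi_1=\varphi_0:&\quad P_{X,Y} = -\sinh\tau_0\sinh\tau_1 + \cosh\tau_0\cosh\tau_1 = \cosh(\tau_1-\tau_0) \ge 1\,,\\
\text{antipodal, } \tau_1=-\tau_0:&\quad P_{X,Y} = \sinh^2\tau_0 - \cosh^2\tau_0 = -1\,,
\end{align*}
so the first pair is timelike separated with L_g = arccos(cosh(τ_1-τ_0)) = i|τ_1-τ_0|, while the second sits exactly at the boundary value P_X,Y = -1, where the one-parameter family of geodesics of length π exists.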
§.§ Euclidean geodesics in de Sitter space
As previously discussed, in Euclidean signature there is no issue with finding geodesics between any two points. In section <ref>, we discussed that the analytic continuation of the time coordinate maps dS_d space into the sphere, S^d. Here, the problem of finding geodesics is famously solved, between any two points there exists a great circle that connects them. Both segments connecting the two points along the great circle are geodesics and the one with minimal length is the shortest path between the two points. See figure <ref>. There are also infinitely many other geodesics that wrap around the great circle.
We can now define our invariant distance in Euclidean space, P_X,Y^E = δ_IJ X^I Y^J, that is simply related to the geodesic length by P_X,Y^E = cos L_g. In two dimensions, we can use the coordinates in (<ref>), taking the angular coordinate φ to parameterise the spatial circle, to show that,
P_X,Y^E = cosτ_E,0cosτ_E,1cos (φ_1 - φ_0) - sinτ_E,0sinτ_E,1 .
Note that -1 ≤ P_X,Y^E ≤ 1, so the geodesic length L_g is always well-defined, and that upon analytically continuing back to Lorentzian time, P_X,Y^E → P_X,Y. This fact (and other subtleties of the analytic continuation) was recently used in <cit.> to understand the geodesic approximation (<ref>) in the Lorentzian cases where no real geodesics exist.
§ SHOCKWAVES IN DE SITTER SPACE
So far we have studied different solutions – pure dS, Schwarzschild dS, Nariai – to the Einstein equations with a positive cosmological constant and no matter content. In this chapter, we will study (possibly) the simplest solutions with a non-vanishing stress tensor, and we will explore their consequences as a particular case of what is known as the Gao-Wald theorem <cit.>.
Consider solutions to Einstein equations with a positive cosmological constant sourced by some matter stress tensor T_μν, that we will specify shortly,
R_μν - 1/2 R g_μν + Λ g_μν = 8 π G_N T_μν .
For now, we can work in any spacetime dimensions. We are interested in massless sources that move along null directions, so it will be useful to consider null coordinates like the ones presented in (<ref>). For completeness, we remind ourselves that in the absence of matter, the pure dS solution in these coordinates is given by
ds^2 = -4 dUdV + (1+UV)^2 dΩ_d-2^2/(1- UV)^2 .
We want to study how the geometry changes when we insert matter in the form of a shockwave. The shockwave stress tensor that we will consider simply takes the form of a delta function in one of the null directions,
T_μν = (d-2)/4π G_Nα δ(U) δ^U_μδ^U_ν .
To satisfy the null energy condition, we require α > 0. Note that this shock is localised in the U-direction, but not in the compact directions, so it is basically a spherically symmetric shell of null matter with the size of the cosmological horizon. Given the divergence of the stress tensor at U=0, one might find it surprising that there exist analytic solutions sourced by a delta function stress tensor. However, shockwave solutions have been known for some time. In asymptotically flat spacetimes, exact shockwave solutions were first discussed in <cit.>. The generalisation to the case with positive cosmological constant was first discussed by Hotta and Tanaka <cit.>, and by Sfetsos <cit.>.
Shockwave solutions can be obtained by noting that away from U=0, the solution has to be a solution of the vacuum Einstein equations. For now, we will consider the case where both for U>0 and U<0, the spacetime is pure de Sitter. The matching should be such that the Einstein equations are satisfied at the junction <cit.>. The solution after the shock looks like a shift in the V-coordinate,
V_U>0 = V_U<0 - α .
Using the metric in (<ref>), it is possible to show that the shockwave metric is given by,
ds^2 = -4 dUdV + (1+U(V-αθ(U)))^2 dΩ_d-2^2/(1- U(V-αθ(U)))^2 ,
where θ is the Heaviside function. This is called the Rosen form of the metric. Sometimes it is more convenient to use discontinuous coordinates u = U, v = V - αθ(U), where the metric takes the more standard shockwave form,
ds^2 = -4/(1- uv)^2 dudv - 4 αδ(u) du^2 + (1+uv)^2/(1- uv)^2 dΩ_d-2^2 .
It is easy to check that this metric satisfies the Einstein equations sourced by stress tensor (<ref>).
Note that the sign in the term proportional to α in (<ref>) is negative, as opposed to the case of flat or AdS shockwaves – see box <ref> for the AdS solution. Because of this, a light particle passing near the shock will experience a Shapiro time advance, making it possible to access a region that was causally inaccessible before. See figure <ref>. This is again in clear contrast with shockwave solutions in flat or AdS spacetimes, where the particle would experience a time delay and would become even more disconnected from the other side. In fact, in those cases, the only way to make this wormhole traversable is by violating the null energy condition, setting α<0. An example of this kind in AdS is the so-called Gao-Jafferis-Wall wormhole <cit.>.
The solution (<ref>) has some interesting features. For instance, the solution is pure dS space (with the same dS length) on both sides of the shock. In between there is a positive energy, spherical shock with the same size as the dS horizon. This exists at the horizon from ℐ^- all the way to ℐ^+.
It would be interesting to find other shockwave solutions in dS. For instance, with shocks sent at a finite time. It would also be desirable to have dS analogues to the multiple shocks solutions in AdS <cit.>, or more localised shockwaves. Lower dimensional spacetimes might provide an interesting avenue to explore these generalisations.
Box: Shockwaves in AdS black holes
A very similar computation can be done for black holes in AdS_3 <cit.>. The solution to (<ref>) with stress tensor (<ref>) but now with negative cosmological constant is given by
ds^2 = - 4/(1+ uv)^2 dudv + 4 αδ(u) du^2 + (1-uv)^2/(1 + uv)^2 dφ^2 .
In this coordinate system, the AdS boundaries are at uv = -1 and the singularities at uv=1. Note that the sign in the term with α has flipped, generating the aforementioned Shapiro time delay.
We can also plot the Penrose diagram in the AdS black hole case that, as shown in figure <ref>, does not become taller as in the dS case, but wider, making it even harder for disconnected observers to communicate.
Finally, it is worth mentioning that the effect presented is not particular to shockwaves, but a more general statement about perturbations of dS. In fact, it is proven that, in dS, all perturbations obeying the null energy condition and the null generic condition, will make the dS Penrose diagram taller, allowing for previously disconnected regions of spacetime to become causally connected. See, for instance, <cit.>. The formal statement goes under the name of Gao-Wald theorem <cit.> and states that, for solutions obeying the null energy condition and null generic energy condition, a null geodesically complete and globally hyperbolic spacetime with compact Cauchy surfaces cannot exhibit a particle horizon. In this case, sufficiently late-time observers will see the full Cauchy slice in their past light cone.
Regarding holography, in the case of asymptotically AdS spaces, shockwave solutions have played an important role in diagnosing the quantum chaotic nature of gravity. In fact, they are an essential component in the computation of out-of-time-ordered correlators that led to maximal chaos in AdS black holes <cit.>. Given that shockwaves in dS behave in a different (almost opposite) way, it would be interesting to understand the consequences of this for a putative holographic theory for dS. We will discuss more about this in chapter <ref>.
§ TOWARDS HOLOGRAPHY IN DE SITTER SPACE
Having described some of the main (semi-)classical features of dS space, in this section we will review some recent advances made in understanding quantum features of dS space. As already noted, there is still not a unified framework as in the case of the AdS/CFT correspondence, but in this Chapter we discuss some of the recent attempts to define holography in dS.
We start with a short discussion in section <ref> about quantum field theory in a fixed dS background, which has gained renewed attention recently. When it comes to holography, the first question that appears is how to think about the boundary theory. In dS, the only asymptotic boundaries are at ℐ^±, that are spacelike. The initial proposals for holography in dS considered a Euclidean CFT living at those boundaries. This is the idea behind what is called the dS/CFT correspondence, which we also briefly review in <ref>.
Most of the discussion in this Chapter will be centered around new developments towards a more local version of dS holography, which we discuss in section <ref>. We discuss some static and dynamical features of the cosmological horizon and compare them with results in holographic studies of black holes. In this observer's approach to holography in dS, it seems that timelike boundaries might play an important role, so we finish this chapter by discussing some relevant aspects of timelike boundaries that depend on the number of spacetime dimensions under consideration.
§.§ QFT in de Sitter space and the dS/CFT correspondence
A first step towards a full quantum gravity theory in dS is to study quantum field theory in a fixed dS background. Given that in dS there is no global timelike Killing vector, even the notion of a vacuum state needs to be considered carefully, as, for instance, particle number is not conserved.
However, there is in dS one particular state that is called the Bunch-Davies or Euclidean state that has very interesting properties <cit.>. This state is invariant under the dS isometries and correlation functions exhibit no singularities except at coincident (or null separated) points, where the behaviour mimics the singularities in flat space. Moreover, it is the state that can be obtained from analytic continuation from the sphere, which is the reason behind its name.
As the simplest example, we will briefly study free massive scalar field theory on a fixed dS background. The action for this theory is given by
I_ϕ = 1/2∫ d^d x √(-g)( g^μν∂_μϕ∂_νϕ + m^2 ϕ^2 ) ,
where m is the mass of the scalar field ϕ and g_μν is the d-dimensional dS metric. This theory has been extensively reviewed in, for instance, <cit.>. In the Euclidean vacuum state |E⟩, the Wightman two-point correlator is known analytically and is given by
⟨ E | ϕ(X) ϕ(Y) | E ⟩ = Γ (Δ) Γ (Δ̅)/(4π)^d/2Γ(d/2) _2F_1 ( Δ, Δ̅; d/2; 1+P_X,Y/2) ,
where _2F_1 is the hypergeometric function and the correlator only depends on P_X,Y, that is the dS invariant length, see section <ref>. It is clear, then, that this correlator is invariant under the dS isometries. The Δ's are called scaling dimensions and are given in terms of the mass and the spacetime dimension as
Δ = ( d-1/2) + √(( d-1/2)^2 - m^2) , Δ̅ = d-1-Δ ,
where Δ̅ is the shadow dimension. Remember here the mass is given in units of the dS radius ℓ. These are very similar to the AdS scaling dimensions, but differ on the minus sign inside the root, which, in AdS, makes all the Δ≥ 0. On the contrary, in dS, we can distinguish, in principle, two types of fields: light fields with m < (d-1)/2 have real Δ, while heavy fields with m > (d-1)/2 have complex scaling dimensions.
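As a quick numerical illustration of these expressions (ours; it only assumes the formula above, with the mass in units of ℓ), the snippet below evaluates Δ and Δ̅ for a light and a heavy field in d=4.

```python
import numpy as np

# Scaling dimensions of a scalar of mass m (in dS units) in d spacetime dimensions.
def dims(m, d=4):
    nu = np.sqrt(complex(((d - 1) / 2) ** 2 - m ** 2))
    delta = (d - 1) / 2 + nu
    return delta, (d - 1) - delta   # Delta and its shadow Delta_bar

print(dims(0.5))   # light field, m < (d-1)/2 = 3/2: both dimensions real
print(dims(3.0))   # heavy field, m > 3/2: Delta = 3/2 + i nu, complex
```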
These complex scaling dimensions were first thought of as signals of non-unitarity in dS. However, we know they are part of the Unitary Irreducible Representations (UIRs) of the dS group SO(d,1). These have been studied a long time ago <cit.>, but in the last few years, there has been a renewed interest in the subject, see <cit.> for general d and <cit.> for d=2. We will not go into details of these in the present lecture notes, but refer the interested reader to <cit.> for a pedagogical introduction to the subject. In general, the UIRs depend on the spacetime dimension, Δ, and the spin. As an example, the spinless representations of the dS group in d=4 are as follows:[Here Δ is a label for the eigenvalue of the quadratic Casimir of the group, that in the spinless case is given by Δ (Δ-(d-1)).]
* Principal series: Δ = 3/2 + i ν, with ν∈ℝ, corresponding to heavy fields in dS.
* Complementary series: 0 < Δ < 3, corresponding to light fields in dS.
* There is also an exceptional series for integer values of Δ, that in d=4 coincides with the discrete series. The interpretation of these series in terms of particles in dS is not completely understood.
Recently, UIRs have been used as a powerful tool to compute cosmological correlators at late times in what is known as the cosmological bootstrap programme. The aim of this programme is to construct cosmological correlators using physical principles such as locality, unitarity and symmetries. See <cit.> for recent developments and references.
So far we have only considered QFT on a fixed dS background. A full holographic description of dS has to allow for spacetime fluctuations. As soon as the AdS/CFT correspondence was formulated, ideas on how to incorporate cosmology into that framework appeared. See, for instance, <cit.>. Eventually, they converged into what is now known as the dS/CFT correspondence <cit.>.
The basic idea is that in analogy to the AdS/CFT correspondence, there is a dual (Euclidean) conformal field theory that lives in the asymptotic future of dS. In this case, the partition function of the boundary CFT is dual to the wavefunction of the Universe Ψ_dS,
Ψ_dS [ φ_0 ] ↔ Z_CFT [φ_0 ] ,
where φ_0 is some asymptotic profile for bulk fields φ, including the metric, in the Bunch-Davies (or Hartle-Hawking <cit.>) state. We refer the reader to interesting progress in <cit.>.
A concrete model for this conjecture was found in <cit.>, where a particular higher spin theory on dS was conjectured to be dual to a simple conformal field theory that is just a theory of N free scalar fields that transform as a Sp(N) vector. The tower of higher spin fields in the bulk are then dual to the higher spin conserved currents of the boundary theory. Generalisations of this model can be found in <cit.>. The analogous version in two-dimensional JT gravity has been explored in <cit.>, while similar ideas in three dimensions are discussed in <cit.>. Another set of interesting ideas relating cosmology to the standard AdS/CFT correspondence by using RG flows and analyticity followed from work initiated in <cit.>.
§.§ Static patch holography and timelike boundaries
One drawback of the global approach is that it is harder to probe the cosmological horizon and its features (some of which we discussed in previous chapters). Given the success of the holographic framework in describing black hole horizons, we might hope that some of the tools used in that case might prove useful to characterise the cosmological horizon. Using holography, we came to the idea that black holes are highly-entropic, dissipative, strongly-coupled, maximally chaotic states in their quantum description. It would be desirable to understand which of these properties (if any) are still present in the cosmological horizon case.
For this, it is necessary to focus our holographic constructions inside the static patch region of dS. This might be, in principle, problematic since there is no asymptotic boundary inside the static patch. Then, the most natural places to put the holographic theory are either the observer's worldline or a stretched horizon, a timelike surface of constant radius close to the cosmological horizon. We will call these two approaches worldline holography and stretched horizon holography, respectively.
In any case, at the static level, the first important feature to understand is the quantum origin of the Gibbons-Hawking entropy for dS. Putting together the finite dS entropy and the fact that black holes cannot have arbitrarily large mass in dS, see section <ref>, it was suggested that the Hilbert space of a quantum dS theory should be finite dimensional <cit.>. In part to better understand this proposal, corrections to the Gibbons-Hawking entropy have been computed using Euclidean path integral methods. For example, it was shown <cit.> that in pure gravity in d=4,
log Z = S_GH - 5 log S_GH - 571/90logℓ/ℓ_ref + 𝒪(1) .
The leading term is, of course, the Gibbons-Hawking entropy. Then there are two types of logarithmic corrections. The difference between them is that the second one requires an effective theory scale ℓ_ref. Some of these contributions can be interpreted as entanglement entropy for gravitational edge modes <cit.>. Similar expressions have been studied in d=3, where there exists an all-loop calculation <cit.>, and in d=2, where a non-perturbative formula in timelike Liouville theory has been conjectured <cit.>.
The observation that subleading terms in the Gibbons-Hawking entropy look like entanglement entropy suggests that it might be interesting to consider entanglement entropy in dS holographically <cit.>, extending the Ryu-Takayanagi prescription (and its generalisations) <cit.>. It was noted that the bifurcation surface in dS is a minimax surface, instead of a maximin (as in AdS black holes) <cit.>. This suggests that in order to get similar results to those obtained in standard AdS holography, one should anchor extremal surfaces at the (stretched) horizon. See other recent proposals in <cit.>. Quantum extremal surfaces in the context of dS have also been actively studied recently <cit.>, but it is fair to say that compared to the black hole case (see <cit.> for current state of affairs), the case of the cosmological horizon is still pretty much under development.
A next step to characterise the horizon is to move slightly away from equilibrium. A natural probe is to consider quasinormal mode behaviour of the cosmological horizon. The scalar quasinormal frequencies are given by <cit.>,
i ω_n ℓ ∈{ l + 2n + Δ , l + 2n + Δ̅} , n ∈ℕ_0 ,
where Δ, Δ̅ are given in (<ref>). An interesting observation is that, as the mass of the perturbation increases, it is the real (oscillatory) part of the quasinormal frequency that grows. This is in contrast with the black hole case, where an increase in the mass translates into a more dissipative frequency with a larger negative imaginary part.
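This behaviour can be seen directly from the spectrum above; the short snippet below (ours, using the same conventions and d=4) evaluates the lowest frequencies for a few masses.

```python
import numpy as np

# Lowest scalar quasinormal frequencies of the dS_4 horizon from
# i omega_n l_dS = l + 2n + Delta (and its shadow counterpart).
def qnf(m, l=0, n=0, d=4):
    nu = np.sqrt(complex(((d - 1) / 2) ** 2 - m ** 2))
    delta, delta_bar = (d - 1) / 2 + nu, (d - 1) / 2 - nu
    return -1j * (l + 2 * n + delta), -1j * (l + 2 * n + delta_bar)

for m in (0.5, 2.0, 5.0):
    print(m, qnf(m))
# Light fields give purely damped (imaginary) frequencies; increasing the mass
# increases the real, oscillatory part while the damping stays at -(l + 2n + 3/2).
```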
Other out of equilibrium considerations, such as questions of quantum chaos and scrambling in dS have also been recently under debate. Historically, the existence of a horizon in de Sitter space (and the associated Rindler behaviour close to it) motivated the conjecture that quantum dS is a fast scrambler <cit.>, in the sense that the scrambling time t_* would scale as t_* ∼βlog S_GH, as happens in black holes.
However, this does not seem to be the prevailing interpretation these days. In quantum systems with a large number N of degrees of freedom, one diagnostic of quantum chaos is the four-point out-of-time-ordered correlator (OTOC), which in chaotic systems schematically behaves as
⟨ OTOC ⟩_β (t) = f_1 - f_2/N exp( λ_L t ) + ⋯ ,
where f_1,2 are order one positive numbers and λ_L is known as the Lyapunov exponent. A bound on quantum chaos implies that λ_L ≤ 2π/β <cit.>. A shockwave computation in an AdS black hole background shows that AdS black holes saturate this bound <cit.>. The analogous shockwave computation in dS gives f_2 <0, due to the Gao-Wald effect <cit.>. Moreover, the rapid approach of spacelike geodesics to future and past infinities in dS, see section <ref>, has led to the conjecture that actually the scrambling time in dS reduces to zero, so it has been called in <cit.> a hyperfast scrambler. There, it was also suggested that the double scaled Sachdev-Ye-Kitaev model <cit.> might have this feature. Several works have tried to make this relation more precise <cit.>; however, at the time of writing, the status of chaos and complexity in dS remains inconclusive <cit.>. A holographic computation of the OTOC in a geometry with a cosmological horizon has been done in two dimensions and gives a correlator that does not grow exponentially but oscillates in time <cit.>, pointing toward a more organised nature of the quantum theory.
Having at least simple toy models of quantum theories for dS might help us understand these macroscopic puzzles. So what do we know about the quantum theories?
§.§.§ The role of the observer
The role played by the observer in holography of the dS static patch has been emphasised in <cit.>, where it was shown that correlators along the observer's worldline were controlled by SL(2,ℝ) symmetries. This is compatible with the idea that there is a holographic large N, conformal worldline quantum mechanical dual.
From the point of view of algebraic quantum field theory, it has recently been noted that, in the presence of gravity, it might be convenient to describe a theory in terms of the algebra of observables along the timelike worldline of an observer. Moreover, in a gravitational system that is closed, like dS, it has been shown that it is essential to explicitly incorporate the observer in the analysis to obtain well-defined notions of entropy <cit.>.
Incorporating the observer makes it possible to define a trace (from which we can define entropies) and in dS, it generates a von Neumann algebra of Type II_1 (at least in the limit of G_N → 0). We will not discuss von Neumann algebras in these lectures, but we refer the reader to <cit.> for a pedagogical introduction. Instead, we will just point out a few properties that might resonate with some ideas already discussed in these notes.
This type of algebra has a state of maximum entropy. Indeed we have seen that, at least compared to other black hole solutions, empty dS space is the solution with maximum entropy, see section <ref>. All other black hole solutions will have less entropy than the empty dS one. This maximum entropy state will have a density matrix ρ that can be normalised so that ρ = 1. Then, all entanglement eigenvalues are equal, so it is said to have a “flat entanglement spectrum” or to be a “maximally mixed” state. These properties hold perturbatively in G_N.
A pressing question is how to incorporate these abstract ideas into a more standard holographic framework. It turns out that the answers might depend on the spacetime dimension under consideration. In the following, we will consider different approaches to static patch holography, starting with some ideas for higher dimensional gravity and then, discussing some particular proposals in d=3,2.
§.§.§ d≥4: conformal boundary conditions
An initial strategy to do holography inside the static patch could be to put an artificial boundary à la York and describe the system in terms of the now fixed boundary data. As mentioned, inside the static patch this problem was studied in <cit.>. Recently, it has been thoroughly extended (including SdS black holes) in <cit.>.
Quite remarkably, in parallel to these developments, there have been recent advances in the mathematical relativity community involving the Initial Boundary Value Problem (IBVP) in General Relativity <cit.>. Consider a d-dimensional manifold of the form ℳ = I × S, with I being a finite interval and S a spatial (d-1)-manifold with non-empty boundary ∂ S = Σ. The boundary of ℳ is denoted 𝒞 = I ×Σ. The IBVP then consists of finding metrics on ℳ that are solutions to the vacuum Einstein equations,
R_μν - 1/2 R g_μν + Λ g_μν = 0 ,
together with boundary conditions along 𝒞 and initial conditions along a Cauchy surface, say at some initial slice. See <cit.> for a review on the subject. Here, it is important to have dynamical gravity, so d≥ 4. The inclusion (or not) of a cosmological constant does not change the results we will be reviewing below.
The statement in <cit.> is that, with respect to either Dirichlet (fixing the induced metric on 𝒞) or Neumann (fixing the second fundamental form of 𝒞 in ℳ) boundary conditions, the IBVP for General Relativity is not well-posed.
This statement comes in the form of two theorems. First, it is proven that given a generic set of boundary data, the constraint of Einstein equations at the boundary will not be satisfied (this is a gauge-independent proposition). Second, given a consistent set of boundary data and a solution which solves the IBVP, there exist infinitely many other physically distinguishable solutions on ℳ (this is proven in the harmonic, or de Donder, gauge). The Euclidean analogue of this problem is reviewed in <cit.>, with similar conclusions.
Note this is unlike other IBVPs that are known to be well-posed. Examples include scalar, Maxwell or Yang-Mills theory. It is also contrary to the Initial Value (or Cauchy) Problem in General Relativity that is known to be well-posed since the 60's <cit.>.
One should find this problematic, especially if one wants to give a thermodynamic interpretation to the (Euclidean) gravitational path integral. Fortunately, there is another set of geometric data that is conjectured (not proven) to be well-posed. These are the so-called conformal boundary conditions, which are mixed conditions that involve fixing
Conformal boundary conditions: {[ h], K } ,
where [ h] is the conformal class of boundary metrics and K is the trace of the extrinsic curvature at the boundary. There is a proof of the IBVP with conformal boundary conditions being well-posed in Euclidean signature <cit.> and evidence for it in Lorentzian signature <cit.>.
It seems essential to understand the gravitational path integral in terms of conformal boundary data to make sense not only of static patch holography but also, of some other finite versions of holography, such as finite cutoff AdS <cit.>. Finally, it is interesting to note that these conformal boundary conditions have been anticipated in <cit.>, where obstructions found with standard Dirichlet boundary conditions were bypassed by imposing conformal boundary conditions instead.
§.§.§ d=3: TT̅ + Λ_2 deformations
The theorems discussed in the previous section require dynamical gravity in the bulk. In lower dimensions, there have also been interesting recent developments regarding timelike boundaries in dS. In particular, in d=3, there have been recent constructions involving certain solvable irrelevant deformations of CFT_2 that allow us to count the dS entropy, including its leading logarithmic correction, from certain AdS black hole microstates <cit.>.
The idea is to start from some seed CFT and deform it by two types of irrelevant deformations. At any step in the process, the Euclidean theory is defined through a differential equation,
∂/∂λlog Z(λ, g) = -2π∫ d^2 x √(g)⟨ T T̅⟩ + 1-η/2πλ^2∫ d^2 x √(g) .
Here Z(λ, g) is the partition function of the theory and depends on the background metric g and the deformation parameter λ. TT̅ is a quadratic combination of the stress tensor defined by TT̅≡1/8(T_abT^ab - (T_a^a)^2 ) <cit.>. The last term corresponds to changing the cosmological constant in two dimensions and is usually called a Λ_2 deformation.
The idea is to start the process with some (holographic) CFT, so the appropriate boundary condition will be Z_λ=0, η=1 = Z_CFT. We focus on the energy levels around Δ≈ c/6, with c ≫ 1 being the central charge of the CFT. Now the process is taken in steps. First, we deform the CFT with a pure TT̅ deformation (η = 1). At some finite value of λ = λ_0, we turn on the Λ_2 deformation by setting η = -1. This deformation is solvable and it is possible to compute the energy spectrum at each step of the process.
On the gravity side, the dual of this process is as follows. We start with a BTZ black hole. The TT̅ deformation brings the AdS boundary very close to the black hole horizon. At this point, the black hole horizon becomes indistinguishable from a cosmological horizon with the same radius. This is when the second TT̅ + Λ_2 deformation is turned on, building the static patch geometry in dS, from the horizon, up to a timelike boundary. This is called the cosmic horizon (CH) patch. See figure <ref>.
Through this process, the dressed black hole states can be shown to contribute to the dS entropy, obtaining that
S_TT̅+Λ_2 = S_GH - 3 log (S_GH) + ⋯ ,
which, remarkably, reproduces not only the leading Gibbons-Hawking entropy but also the logarithmic correction found in <cit.>.
Initially, this construction was proposed in the context of the dS/dS correspondence <cit.>, where the dS patch was reconstructed. See section <ref>. It is also possible to start with states around Δ = 0 and in that case, the construction does not reproduce the Gibbons-Hawking entropy. Instead, in the gravity picture, it describes a region with no horizon, that goes from the timelike boundary towards the observer's worldline. Maybe a combination of both approaches might provide a full global dS picture <cit.>. Moreover, recently the possibility of embedding this formalism in string theory has been explored <cit.>. The TT̅ deformation has also been used with potential applications to the dS/CFT correspondence <cit.>. Finally, it would be interesting to include matter in these deformed theories and try to probe some dynamical features of the cosmological horizon, such as the Gao-Wald effect, as a non-trivial check of this proposal.
§.§.§ d=2: open quantum mechanical duals
Two dimensions is yet another avenue to explore holographic features of dS, as they offer some extra advantages compared to higher dimensions. First, following the standard holographic dictionary, the dual theory should be a (maybe disordered) quantum mechanical theory, which is easier to deal with than the usual holographic quantum field theories. Moreover, from the gravity perspective, there are dilaton-gravity theories that admit solutions where the cosmological horizon is in causal contact with an AdS boundary, where the observer is under control. In higher dimensions, this would be forbidden by the null energy condition <cit.>.
The action for a generic dilaton-gravity (Euclidean) theory in two dimensions is given by
I_E = - 1/16π G_N∫_ℳ d^2x √(g)( Φ R + U(Φ) ) - 1/8π G_N∫_∂ℳ du √(h)Φ_b K ,
where Φ is the dilaton field, and Φ_b is the value of Φ at ∂ℳ. U(Φ) is usually called the dilaton potential. We have chosen a frame in which the action contains no Φ derivatives.[Other choices containing terms with derivatives of the dilaton that also admit solutions with dS interiors can be found in, for instance, <cit.>.] It is easy to see that solutions to this action obey R = -U'(Φ), so the usual JT gravity is the particular case where U(Φ) = 2Φ, and dS can be obtained whenever U(Φ) = -2Φ. Quasi-local thermodynamics in dS JT gravity have been recently explored in <cit.>.
Geometries that interpolate between a dS horizon and an AdS boundary can be obtained as solutions to smooth potentials that asymptotically behave as U(Φ) = 2|Φ|. These are usually called centaur geometries <cit.>, and allow us to probe the cosmological horizon using the standard tools of AdS holography. More generic forms of U(Φ) have been extensively studied in, for instance, <cit.>.
It is clear, then, that the problem of quantum dS in two dimensions becomes the problem of how to microscopically modify the dilaton potential U(Φ) away from AdS. One possibility is to use the equivalence of JT gravity with a matrix model <cit.>. The dilaton potential can be deformed away from linear with the inclusion of conical defects <cit.>. It seems plausible that this construction might be able to accommodate at least a finite piece of dS spacetime <cit.>.
Another model that has resemblance to JT gravity at low energies is the Sachdev-Ye-Kitaev (SYK) model <cit.>. Following the idea of holographic renormalisation group flow <cit.>, deformations away from AdS_2 in the gravity picture would correspond to relevant deformations in the quantum side. Interestingly, the SYK model has a large set of tractable relevant deformations whose Hamiltonian takes the form <cit.>,
H = H_SYK^q + ∑_i λ_i H_SYK^q̃_i .
Here the full Hamiltonian is the sum of multiple SYK Hamiltonians with a different number q, q̃_i of fermions in the all-to-all interactions. If q̃_i<q, then the second term acts as a relevant deformation that is controlled by the dimensionless couplings λ_i. Even though unitarity requires λ_i ∈ℝ, there is evidence that allowing for complex couplings might give rise to macroscopic features consistent with dS space <cit.>. If this is correct, it would imply that the microscopic theory for the static patch might be non-unitary and should be treated as a holographic open quantum system. We have seen during these lecture notes some behaviour compatible with this. See, for instance, the shockwave solution in chapter <ref>. Recent progress in understanding non-unitary generalisations of the SYK model can be found in <cit.>. As conformal symmetry in the SYK model is only emergent in the large N limit, it is tempting to think that dS unitarity might also be an emergent property from the microscopic theory.
Given all the recent developments and new ideas discussed in this Chapter, we would like to finish these notes by updating exercise 4 in <cit.> into a form that is still quite challenging but hopefully more achievable these days:
Independently of the number of dimensions, matter content, or type of holography,
find a microscopic quantum theory dual to, at least, a patch of de Sitter space.
I would like to thank all the organisers and participants of the XVIII Modave Summer School in Mathematical Physics for a very stimulating environment. I would also like to thank D. Anninos, S. Chapman, E. Harris, C. Maneerat, D. Pardo Santos, B. Pethybridge, A. Rios Fukelman, and S. Sheorey for interesting discussions and feedback on the manuscript. Special thanks to C. Maneerat for creating most of the plots presented in these notes. I acknowledge valuable discussions with participants from the workshop "Quantum de Sitter Universe" held at DAMTP, Cambridge and funded by the Gravity Theory Trust and the Centre for Theoretical Cosmology. My work is funded by UKRI Stephen Hawking Research Fellowship “Quantum Emergence of an Expanding Universe".
§ BLACK HOLE ENTROPY
In this appendix, we provide more details on the computation of the black hole entropy from the Euclidean path integral, see box <ref>. The reason to do so is twofold. On the one hand, it is to contrast the calculation of the entropy of a black hole with that of a cosmological horizon: in the latter, there is no need to include either boundary terms or regularisation to compute the path integral at the dS saddle point. On the other hand, we will show the use of the York timelike boundary in the computation as a way of properly defining the thermodynamical ensemble. Similar ideas might play a role in dS holography as described in section <ref>.
But let's go back to the calculation. We are in Euclidean signature in four dimensions and we want to evaluate the path integral,
Z = ∫ Dg exp (-I_E[g]) , I_E = - 1/16π G_N∫ d^4x √(g) R - 1/8π G_N∫_r=r_0 d^3x √(h) K ,
in the saddle-point approximation as G_N → 0. We already included a boundary term at a fixed r=r_0, from which we will specify the boundary data. One saddle point is the Euclidean black hole,
ds^2 = (1- 2M/r) dt_E^2 + dr^2/(1- 2M/r) + r^2 dΩ_2^2 ,
that, as discussed, has t_E ∼ t_E + 8π M. As in <cit.>, we will not directly identify 8π M with the inverse temperature. Instead we will specify our thermodynamical variables at the boundary. In this case, the boundary at r=r_0 is S^1 × S^2, so we will use the proper length of the S^1 and the area of the S^2 as our independent variables,
β (r_0) = ∫_0^8π M dt_E (1-2M/r_0)^1/2 = 8π M (1-2M/r_0)^1/2 ,
A(r_0) = ∫ dΩ_2 r_0^2 = 4π r_0^2 .
Now β(r_0) is the inverse of the local temperature that an observer feels at r_0. As r_0 →∞, we recover the usual statement that β_∞ = 8π M.
The bulk action is zero but the boundary term is non-vanishing. Evaluating the action on the solution (<ref>) gives the on-shell action,
I_E^on-shell = 12 π/G_N M^2 - 8π/G_N M r_0 ,
that diverges as r_0 →∞. The way of regulating the computation is by subtracting a boundary counterterm in the action. What York proposes <cit.> is to subtract the same action I_E, but now evaluated on a flat metric with the same S^1 × S^2 boundary as in the black hole metric,
ds^2_subtract = dτ^2 + dr^2 + r^2 dΩ_2^2 ,
with τ∼τ + β. Again the bulk action vanishes and using that K = -2/r_0 at the boundary, we obtain
I_E^subtract= - β (r_0) r_0/G_N = - 8π/G_N M r_0 (1-2M/r_0)^1/2 .
Then,
I_E^finite = I_E^on-shell - I_E^subtract = 12 π/G_N M^2 - 8π/G_N M r_0 ( 1- (1-2M/r_0)^1/2) ,
which is now finite as r_0→∞. For general metrics, this regularisation procedure is not completely understood, except in the context of asymptotically AdS spacetimes, where a more sophisticated version of counterterm subtraction goes under the name of holographic renormalisation <cit.>.
In order to compute thermodynamic quantities, we need to write I_E^finite not in terms of M, but in terms of β, while keeping the area A fixed. We identify I_E^finite as β F, the free energy of the system. Then, in order to compute the entropy one needs to do,
S = . (1- β∂_β)|_A (-I_E^finite) = 4π M^2/G_N = A_H/4G_N ,
which is exactly the Bekenstein-Hawking entropy for the black hole. Note, first, that as opposed to the Gibbons-Hawking calculation of the dS entropy, here the area term comes from a boundary contribution to the action. Second, even though we needed regularisation of the free energy, given that the counterterm was proportional to β, it does not contribute to the entropy.
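For readers who wish to verify the algebra, here is a short symbolic cross-check (ours, assuming SymPy is available) of this entropy, in which the β-derivative at fixed area is implemented through the chain rule in M at fixed r_0.

```python
import sympy as sp

# Symbolic cross-check of S = A_H/(4 G_N) from the regularised on-shell action.
M, r0, G = sp.symbols('M r_0 G_N', positive=True)
s = sp.sqrt(1 - 2*M/r0)
I_fin = 12*sp.pi*M**2/G - 8*sp.pi*M*r0*(1 - s)/G   # regularised action I_E^finite
beta = 8*sp.pi*M*s                                  # inverse boundary temperature

dI_dbeta = sp.diff(I_fin, M) / sp.diff(beta, M)     # d/dbeta at fixed r_0 via M
S = sp.simplify(-I_fin + beta*dI_dbeta)             # S = (1 - beta d/dbeta)(-I)
print(S)   # expected to simplify to 4*pi*M**2/G_N = A_H/(4 G_N)
```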
Finally, we can also compute the specific heat,
C = . ( β^2 ∂_β^2 ) |_A (-I_E^finite) = -8π M^2/G_N1-2M/r_0/1-3M/r_0 ,
which, interestingly, can be positive for 2M < r_0 < 3M, in contrast to the negative specific heat in the standard calculation when r_0 →∞. This is an interesting effect that appears in the presence of finite timelike boundaries. As discussed in section <ref>, it has been recently proven that fixing the induced metric on the boundary does not lead to a well-posed problem <cit.>, so it would be desirable to revisit this calculation in the light of the new conjectured boundary conditions that do lead to well-posedness, fixing the conformal class of the metric at the boundary and the trace of the extrinsic curvature.
Path Generation for Wheeled Robots Autonomous Navigation on Vegetated Terrain
Zhuozhu Jian, Zejia Liu, Haoyu Shao, Xueqian Wang, Xinlei Chen, Bin Liang
July 31, 2023
==========================================
Wheeled robot navigation has been widely used in urban environments, but little research has been conducted on navigation in wild vegetation. External sensors (LiDAR, camera, etc.) are often used to construct a point cloud map of the surrounding environment; however, the rigid supporting ground used for travelling cannot be detected due to occlusion by vegetation. This often leads to unsafe or non-smooth paths during the planning process. To address this drawback, we propose the PE-RRT* algorithm, which effectively combines a novel support plane estimation method with a sampling algorithm to generate feasible and safe paths in real time in vegetated environments. In order to accurately estimate the support plane, we combine external perception and proprioception, and use Multivariate Gaussian Process Regression (MV-GPR) to estimate the terrain at the sampling nodes. We build a physical experimental platform and conduct experiments in different outdoor environments. Experimental results show that our method has high safety, robustness and generalization.
The source code is released for the reference of the community[Code: <https://github.com/jianzhuozhuTHU/PE-RRTstar>.].
§ INTRODUCTION
Autonomous navigation technology for unmanned ground vehicles (UGVs) has developed rapidly in recent years, and its application scenarios are gradually expanding from indoors<cit.><cit.> to outdoors<cit.><cit.>. However, autonomous navigation on uneven vegetated terrain remains a very challenging task. Due to the presence of vegetation, the robot's perception of the environment becomes inaccurate and more time-consuming, and the generated paths can deviate from the true geometry of the surrounding terrain.
Existing autonomous navigation methods usually treat vegetation as an obstacle, but this is overly conservative, because wheeled robots are able to drive through short, penetrable vegetation. Traditional methods also usually need to build a prior traversability map <cit.><cit.> for navigation, which takes a lot of time, especially when accurate support-ground estimation is needed in vegetated environments. LiDAR is often used to generate a point cloud map of the surrounding environment<cit.><cit.>. There are thus two challenges: 1) the point cloud of vegetation does not correspond to the rigid geometry of the supporting ground; 2) generating an explicit traversability map that estimates the supporting ground of the surrounding environment is time-consuming.
This work presents a real-time and safe path generation method to support autonomous navigation on vegetated terrain for wheeled robots. To address challenge 1), we design a hybrid vegetated-terrain estimation method, which fuses proprioception and external perception to generate a support plane. The support plane describes the local geometrically rigid terrain. To address challenge 2), the support plane estimation is integrated into the sampling-based path planning algorithm, which reduces processing time since the traversability-map construction is skipped. In addition, an inflation radius is added to the sampling algorithm to enhance safety.
This work offers the following contributions:
* A novel approach to accurately estimate the support plane is proposed, in which Multivariate Gaussian Process Regression (MV-GPR) based proprioception and external perception are fused using uncertainty-based weighting.
* PE-RRT* (Plane Estimation RRT*), a sampling-based global path generation method, is proposed, which achieves feasible, collision-free, and asymptotically optimal path generation in vegetated environments without explicit map construction.
* We build the experimental platform and conduct real-world experiments. The effectiveness of our method is confirmed by comparison with existing methods.
§ RELATED WORK
In vegetated terrain, support surface is often invisible to external sensors.
Therefore, the accurate perception of the support terrain is the premise of path planning. Some devices are designed to sense directly the ground. In <cit.>, authors use an array of miniature capacitive tactile sensors to measure ground reaction forces (GRF) to distinguish among hard, slippery, grassy and granular terrain types. <cit.> produces a self-supervised mechanism to train the trafficability prediction model based on resistance coefficients determined from the current and force, which is used to estimate the trafficability of regions of dirt, light vegetation, and heavy brush. However, the coupling of the above methods with motion planners has not been completed. Moreover, as the length of the trajectory increases and the terrain becomes more varied, the algorithm quality begins to degrade.
Some methods attempt to traverse vegetation based on external sensors. <cit.> assumes that ground heights smoothly vary and terrain classes tend to cluster, and uses Markov random fields to infer the supporting ground surface for navigation based on LiDAR points. <cit.> defines a regression problem which estimates predicted error between the realized odometry readings and the predicted trajectory. And they utilize machine learning techniques to predict model error associated with an RGB image. However, this method lacks robustness to environmental changes and cannot ensure safety.
Combining proprioception and external perception to improve robustness is considered to be a common and effective approach. <cit.> provides robustness of hexapod locomotion in high grass by switching between two locomotion modes based on proprioceptive and exteroceptive variance estimates. In <cit.>, the authors propose an attention-based recurrent encoder integrating proprioceptive and exteroceptive input. This approach is applied to quadrupeds and validated experimentally. And in <cit.>, the authors apply Gaussian process regression (GPR) to estimate support surface including the height of the penetrable layer.
However, the above work has to build a prior map first and then analyze the traversability of each foothold, which incurs a large computational expense and costs a lot of time. Moreover, for the more commonly used wheeled robots, autonomous navigation pays more attention to the overall properties of the ground beneath the robot.
In our work, we propose the PE-RRT* algorithm, which avoids explicit maps by sampling, significantly reducing the computational expense. We describe the ground as a set of circular planes, and fuse the heights and slopes of the planes generated by MV-GPR<cit.>, taking the variance as the weight.
§ PROBLEM FORMULATION
Our objective is to generate a global path on the rigid geometric surface based on a point cloud representing the vegetated environment. In our work, we simplify the local geometric support terrain of a single point into a support plane (S-Plane) _S:={ x,y,z,r,p }, which contains the roll angle r∈ℝ, the pitch angle p∈ℝ and the 3D coordinates [x,y,z]^T∈ℝ ^3 of the plane center. We address the problem defined as follows: in unknown vegetated terrain, given the initial and target state projections x_start, x_goal∈ℝ ^2, search for a feasible and optimal global path consisting of W nodes ={( _S,i) _i=1:W}. Along the path, the wheeled robot can move from x_start to x_goal. The path should: 1) allow the robot to pass safely along it; 2) avoid collisions with obstacles; 3) reduce the time spent on the move; 4) minimize the risk of the robot being unable to maintain a stable posture.
The workflow of our entire system is shown in Fig.<ref>. Our navigation algorithm is a two-layer structure comprising a global and a local planner. The global planner generates a safe and feasible global path in real time, which is the main focus of our research. The global planner contains two parts: PE-RRT*, described in detail in Sec.<ref>, and S-Plane Estimation, described in detail in Sec.<ref>.
§ IMPLEMENTATION
Commonly used path planning frameworks require building an explicit map, either a priori or in real time. Traversability analysis of the map is performed before path planning, which costs too much time. To solve this problem, we propose PE-RRT*, a sampling-based path planning algorithm. In PE-RRT*, we sample and analyze directly on the point cloud, avoiding the construction of an explicit traversability map. The PE-RRT* algorithm is described in detail in <ref>. For each node, proprioception and external perception are performed as in subsections <ref> and <ref>, and the parameters are estimated in real time as in subsection <ref>. The fusion of proprioception and external perception to generate the S-Plane is described in subsection <ref>. For ease of understanding, we first introduce the relevant mathematical background in <ref>.
§.§ MV-GPR
Gaussian process (GP) regression has been proven to be effective in robot navigation<cit.><cit.>. However, the classical GP cannot deal with multi-response problems because it is defined on ℝ. As a result, the correlation between multiple tasks cannot be taken into consideration. To overcome this drawback, <cit.> proposes multivariate Gaussian process regression (MV-GPR) to perform multi-output prediction. Its precise definition based on Gaussian measures and the existence proof are introduced in <cit.>.
f represents a multivariate Gaussian process with mean function u:𝒳↦ℝ ^d, kernel k:𝒳×𝒳↦ℝ and positive semi-definite parameter matrix Ω∈ℝ ^d× d.
A multivariate Gaussian process can be denoted as f∼ℳ𝒢𝒫( u,k,Ω). For n pairs of observations {( x_i,y_i ) } _i=1^n,x_i∈ℝ ^p,y_i∈ℝ ^1× d, we assume the following model:
f∼ℳ𝒢𝒫( u,k',Ω)
Different from the conventional GPR method, MV-GPR adopts a noise-free regression model, thus y_i=f( x_i ) for i = 1,⋯ ,n, and the noise variance term σ _n^2 is added into the kernel: k^'=k( x_i,x_j ) +δ _ijσ _n^2, in which δ _ij=1 if i=j, otherwise δ _ij=0.
With matrix form [ f( x_1 ) ,⋯ ,f( x_n ) ] ^T∈ℝ ^n× d,
the joint matrix-variate Gaussian distribution <cit.> can be represented as:
[ f( x_1 )^T ,⋯ ,f( x_n )^T] ^T∼ℳ𝒩( M, Σ, Ω)
where the mean matrix M∈ℝ ^n× d, the column covariance matrix Σ∈ℝ ^n× n, the row covariance matrix Ω∈ℝ ^d× d, and X=[ x_1,⋯ ,x_n ] ^T represents the locations of the training set.
To predict variable f_*=[ f_*,1,⋯ ,f_*,m] ^T with the location X_*=[ x_n+1,⋯ ,x_n+m] ^T where m represents the test set number, the joint distribution of
the training observations Y=[ y_1^T,⋯ ,y_n^T] ^T and f_* is
[ Y; f_* ] ∼ℳ𝒩( 0, [ K^'( X,X ) , K^'( X_*,X ) ^T; K^'( X_*,X ) , K^'( X_*,X_* ) ] , Ω)
where K^' is the covariance matrix of which the (i, j)-th element [ K^'] _ij=k^'( x_i,x_j ). Based on marginalization and conditional distribution theorem<cit.><cit.>, the predictive distribution is derived as
p( f_*|X,Y,X_* ) =ℳ𝒩( M̂, Σ̂, Ω̂)
where
M̂=K^'( X_*,X ) ^TK^'( X,X ) ^-1Y ,
Σ̂ = K^'( X_*,X_* ) -K^'( X_*,X ) ^T K^'( X,X ) ^-1K^'( X_*,X ) ,
Ω̂ = Ω .
According to the above formulas, the expectation and covariance are respectively 𝔼[ f_* ] =M̂ and cov( vec( f_*^T) ) = Σ̂⊗Ω̂. When the dimension of the output variable is d=1 and the covariance matrix Ω =I, the process reduces from a multivariate to a univariate Gaussian process.
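To make the prediction formulas concrete, here is a compact NumPy sketch (ours, not the released implementation) of noise-free MV-GPR prediction with a squared exponential kernel; all function and variable names, as well as the toy data, are our own.

```python
import numpy as np

def se_kernel(A, B, s_f2=1.0, l2=0.25, sigma_n2=1e-4):
    """k'(x, x') = s_f^2 exp(-|x - x'|^2 / (2 l^2)) + sigma_n^2 delta_ij."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    K = s_f2 * np.exp(-d2 / (2.0 * l2))
    if A.shape == B.shape and np.allclose(A, B):
        K += sigma_n2 * np.eye(len(A))
    return K

def mv_gpr_predict(X, Y, X_star, Omega, **kw):
    """Return M_hat, Sigma_hat, Omega_hat, cf. the predictive equations above."""
    K = se_kernel(X, X, **kw)
    K_s = se_kernel(X_star, X, **kw)            # m x n cross-covariance
    K_ss = se_kernel(X_star, X_star, **kw)
    K_inv = np.linalg.inv(K)
    M_hat = K_s @ K_inv @ Y                     # m x d posterior mean
    Sigma_hat = K_ss - K_s @ K_inv @ K_s.T      # m x m column covariance
    return M_hat, Sigma_hat, Omega              # cov(vec(f_*^T)) = kron(Sigma_hat, Omega)

# Toy usage: predict [z, r, p] at a new 2D node from 20 previous poses.
X = np.random.rand(20, 2)                                    # previous (x, y) positions
Y = np.column_stack([np.sin(X[:, 0]),                        # fake z, r, p outputs
                     0.1 * X[:, 1], 0.05 * X.sum(1)])
M_hat, Sigma_hat, Omega = mv_gpr_predict(X, Y, np.array([[0.5, 0.5]]), Omega=np.eye(3))
print(M_hat, np.diag(Sigma_hat))
```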
§.§ PE-RRT*
Fig.<ref> shows the structure of PE-RRT*. When a 2D node containing the x and y coordinates is obtained by the "Sample" and "Steer" operations, the plane estimation module (shown in the blue box) tries to find its corresponding S-Plane.
In this module, the Surface Plane (Surf-Plane), the Proprioception Support Plane (Pro-Plane) and the External Perception Support Plane (EP-Plane) are introduced to calculate the S-Plane. In the above planes, "Height" means the z-coordinate of the plane center, while "Roll" and "Pitch" denote the orientation. The Surf-Plane is first fitted from the point cloud using the RANSAC method<cit.>.
Using the Prev-Trajectory traversed by the robot as the training set, MV-GPR is applied to generate the height and orientation of the Pro-Plane. The vegetation depth of a new node is generated by single-response MV-GPR, where the training outputs are obtained by subtracting the Prev-Trajectory height from the Surf-Plane height.
The EP-Plane height is obtained by subtracting the vegetation depth from the Surf-Plane height, and its orientation is taken directly from the Surf-Plane.
Combining the height, roll and pitch of the Pro-Plane and the EP-Plane with variable weights calculated from their uncertainties, we obtain the S-Plane, on which the traversability (including uncertainty, vegetation height and slope) can be evaluated. After the 'Obstacle' and 'Inflation' checks and the 'PurnBranch' operation, the 'Connect' and 'Optimize' operations are performed. Thus a new 3D node is obtained and the RRT tree can be expanded.
The PE-RRT* algorithm is based on the informed-RRT* algorithm <cit.>, which is widely used in the field of path planning, and efficiently integrates the S-Plane estimation process into the RRT tree expansion. In particular, we introduce an inflation radius during the sampling process to enhance safety. The flow of PE-RRT* is shown in Alg.<ref>. The new subfunctions in Alg.<ref> are described as follows, while the subfunctions common to the informed-RRT* algorithm can be found in <cit.><cit.>.
Surf-Plane _Surf, Pro-Plane _Pro, EP-Plane _EP and S-Plane _S all consist of a 3D plane center point, roll angle and pitch angle.
* Proprioception(ζ,x_new): Given the robot's Prev-Trajectory ζ and the node's 2D coordinate x_new, the Pro-Plane is returned. The implementation will be discussed in detail in part <ref>.
* ExPerception(_Surf,ζ,x_new): Given the robot's Prev-Trajectory ζ, the Surf-Plane _S and the node's 2D coordinate x_new, the EP-Plane is returned. The implementation will be discussed in detail in part <ref>.
* SupportFuse(_EP,_Pro): Given the Pro-Plane _Pro and EP-Plane _EP, the fused S-Plane is returned. The implementation will be discussed in detail in part <ref>.
* ObsCheck(_Surf, _S): Given the S-Plane _S and the Surf-Plane _Surf, we use the vegetation height h as the criterion for judging obstacles; it is obtained as h=z_Surf-z_S, where z_S represents the height of _S and z_Surf represents the height of _Surf. When the vegetation in an area is too high, there are usually rigid trees, which can cause collisions. We therefore define a threshold value h_crit: when h>h_crit, the node is considered an obstacle and the function returns "True", otherwise "False".
* InflationCheck( _S, _Obs): Given the S-Plane _S and the obstacle set _Obs, we define an inflation radius r as shown in Fig.<ref>. If every element of _Obs has a Euclidean distance in the 2D x-y plane from _S greater than r, the function returns "True", otherwise "False" (a simplified sketch of these two checks follows this list).
* PurnBranch(T, _S): Given the RRT tree T and S-Plane _S, for each node in T, if its Euclidean distance in 2D x-y space from _S is smaller than r, the node and its branch will be deleted.
* TraEvaluation(_S): Given the S-Plane _S, the traversability is obtained from the slope and the uncertainty of _S. The implementation will be discussed in detail in part <ref>.
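As a simplified illustration of the ObsCheck and InflationCheck subfunctions above, the following Python sketch (ours, with hypothetical data structures and illustrative thresholds, not the released code) shows the two safety tests.

```python
import numpy as np

H_CRIT = 0.6   # maximum allowed vegetation height [m] (illustrative value)
R_INFL = 0.25  # inflation radius [m]

def obs_check(surf_plane, s_plane, h_crit=H_CRIT):
    """Vegetation height = Surf-Plane height minus S-Plane height; too tall -> obstacle (True)."""
    h = surf_plane['z'] - s_plane['z']
    return h > h_crit

def inflation_check(s_plane, obstacles, r=R_INFL):
    """Node is safe (True) only if every known obstacle is farther than r in the x-y plane."""
    p = np.array([s_plane['x'], s_plane['y']])
    return all(np.hypot(*(p - np.array([o['x'], o['y']]))) > r for o in obstacles)

# Example: a candidate node next to one obstacle.
s_plane = {'x': 1.0, 'y': 2.0, 'z': 0.1}
surf_plane = {'x': 1.0, 'y': 2.0, 'z': 0.4}
obstacles = [{'x': 1.1, 'y': 2.1, 'z': 0.9}]
print(obs_check(surf_plane, s_plane), inflation_check(s_plane, obstacles))
```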
§.§ S-Plane Estimation
In order to generate S-Plane, we perform proprioception and external perception on the node to generate Pro-Plane and EP-Plane respectively.
§.§.§ Proprioception
Proprioception usually depends on the robot's own sensors (wheel odometry, IMU, etc.), but suffers from cumulative errors. In our experiments, FAST-LIO2.0<cit.> is adopted as the odometer, in which IMU and LiDAR information is fused to improve the positioning accuracy.
In this module, MV-GPR is used to estimate the Pro-Plane _Pro of a new node. To reduce the computational expense, the training size has to be limited<cit.>. We record the position { x_i,y_i,z_Pro,i} _i=1:N and pose { r_Pro,i} _i=1:N, { p_Pro,i} _i=1:N of the previous N steps of the robot. The training input data comprises the horizontal positions of the Prev-Trajectory X=[ [ x_1,y_1 ] ^T,⋯ ,[ x_N,y_N ] ^T] ^T∈ℝ ^N× 2, while the output data is defined as Y_Pro=[ [ z_Pro,1,r_Pro,1,p_Pro,1] ^T,⋯ ,[ z_Pro,N,r_Pro,N,p_Pro,N] ^T] ^T∈ℝ ^N× 3. Note that, in order to ensure that the yaw angle makes no difference to the slope, we extract roll r and pitch p from the rotation matrix R^i, which can be obtained from the odometer, so that Y_Pro,i=[ z_Pro,i,r_Pro,i,p_Pro,i] ^T for i=1⋯ N, where
p_Pro,i=atan 2( R_31,i,√(( R_32,i) ^2+( R_33,i) ^2)) ,
r_Pro,i=atan 2( -R_32,i/cos( p_Pro,i),R_33,i/cos( p_Pro,i)) .
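A small Python helper implementing this roll/pitch extraction could look as follows (ours; the indices simply translate the 1-based notation of the equations to 0-based arrays).

```python
import numpy as np

def roll_pitch(R):
    """Extract roll and pitch from a 3x3 rotation matrix, following the equations above."""
    p = np.arctan2(R[2, 0], np.hypot(R[2, 1], R[2, 2]))
    r = np.arctan2(-R[2, 1] / np.cos(p), R[2, 2] / np.cos(p))
    return r, p

# Identity rotation -> zero roll and pitch.
print(roll_pitch(np.eye(3)))   # (0.0, 0.0)
```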
Quantifying uncertainty is crucial for assessing the accuracy of the plane estimates, which will be described in detail in <ref>.
For proprioception, the uncertainty σ _n,Pro^2 of the training set comes from TF and is set to a constant value in our experiments. In MV-GPR, the covariance matrix depends on the inputs and the kernel function k. Compared to other kernel functions (such as linear, rational quadratic and Matern<cit.>), the squared exponential (SE) kernel is more commonly used due to its simple form and properties such as smoothness and integrability with other functions. The kernel is defined as:
k_SE( x,x^') =s_f^2exp( - x-x^' _2^2/2l^2)
where s_f^2 is overall variance and l is kernel length scale. Due to the properties of SE kernel, when the distance between inputs (Euclidean distance) is farther, the variable z, r and p variance becomes larger, which means that the Pro-Plane estimated by proprioception becomes more uncertain.
We take the 2D coordinates x_new=[x_*,y_*] of a single node as the input of the test set, and {X,Y_Pro} as the training set, according to formula <ref> <ref> <ref>, we can get the prediction of height ẑ_Pro,* and pose r̂_Pro,*, p̂_Pro,* of the node. Thus, we can get the estimation of Pro-Plane is _Pro={x_*,y_*,ẑ_Pro,* ,r̂_Pro,*,p̂_Pro,*}. The height variance σ _Pro,z,*^2, roll variance σ _Pro,r,*^2 and pitch variance σ _Pro,p,*^2 can be obtained from the Kronecker product of and .
§.§.§ Externel Perception
Compared with proprioception, external perception relies on point cloud map generated by LiDAR. To get a new EP-Plane _EP, we first fit the Surf-Plane _Surf corresponding to the 2D node. Compared to the SVD method used in PF-RRT*<cit.>, we adopt RANSAC method to fit a plane, which can avoid the influence of tall rigid obstacles (such as tall trees, large stones) on the slope of the fitted Surf-Plane.
For slope estimation of the EP-Plane, we consider the roll and pitch of the EP-Plane and the Surf-Plane to be the same: r_EP,*=r_Surf,*, p_EP,*=p_Surf,*, due to the assumption of uniformity and continuity of penetrable vegetation. The same holds for the corresponding variances: σ _EP,r,*^2= σ _Surf,r,*^2, σ _EP,p,*^2= σ _Surf,p,*^2. The variances σ _Surf,r,*^2 and σ _Surf,p,*^2 are obtained from the empirical formulas:
σ _Surf,r,*^2=κ _r ∑_k=1^K[ n·( x_ ,*^k-x_Surf,*) ] ^2/K-1
σ _Surf,p,*^2=κ _p ∑_k=1^K[ n·( x_ ,*^k-x_Surf,*) ] ^2/K-1
where the Surf-Plane envelops K points of the point cloud map, the k-th point's 3D coordinate is x_ ,*^k∈ℝ ^3, and the plane center is x_Surf,*∈ℝ ^3. n represents the normal vector of the Surf-Plane, and κ _r and κ _p are constant coefficients.
The estimation of z_EP is more involved. The vegetation depth H is introduced as an intermediate variable for estimating z_EP. We take the Prev-Trajectory X=[ [ x_1,y_1 ] ^T,⋯ ,[ x_N,y_N ] ^T] ^T as the inputs and the corresponding vegetation depths Y=[ H_1,⋯ ,H_N ] ^T as the outputs. The i-th vegetation depth H_i can be obtained as H_i=z_Surf,i-z_Pro,i, where z_Surf,i is the Surf-Plane height. Its uncertainty σ _H,i^2=σ _Pro,z,i^2+σ _Surf,z,i^2 contains the uncertainty σ _Pro,z,i^2 from TF and the uncertainty σ _Surf,z,i^2 from the Surf-Plane, due to the independence assumption. And σ _Surf,z,i^2 is defined as:
σ _Surf,z,i^2=∑_k=1^K( z_ ,i^k-z_Surf,i) ^2/K-1
where the height of the k-th point is z_ ,i,k, and the height of the plane center is z_Surf ,i for the i-th Surf-Plane.
Thus the vegetation depth Ĥ_* of a new node and its variance σ _H,*^2 can be obtained based on equations <ref> <ref> <ref>. For the new node, the height of the EP-Plane is ẑ_EP,*=z_Surf,*- Ĥ_*, and its variance σ _EP,*^2=σ _H,*^2+σ _Surf,z,*^2 consists of two parts: the covariance σ _H,*^2 generated by GPR and the covariance σ _Surf,z,*^2 of the Surf-Plane. Note that the calculation of σ _Surf,z,*^2 for the Surf-Plane is consistent with formula <ref>.
§.§.§ Parameter Estimation
As the robot moves forward, in order to ensure the accuracy of MV-GPR, it is necessary to estimate its parameters in real time. For proprioception, which involves a 3-variate Gaussian process, the estimated parameters include the kernel parameters s_f^2, l^2 and the covariance matrix Ω = ΦΦ ^T, where, for ψ _11,ψ _22,ψ _33,ϕ _31,ϕ _21,ϕ _32∈ℝ,
Φ =[ e^ψ _11 0 0; ϕ _21 e^ψ _22 0; ϕ _31 ϕ _32 e^ψ _33 ]
to ensure the positive definiteness of the matrix.
We use the maximum likelihood method to estimate the parameters. The negative log marginal likelihood is
ℒ = nd/2ln( 2π) + d/2ln| K+σ _n^2I | + n/2ln| Ω | + 1/2tr( ( K+σ _n^2I ) ^-1Y Ω ^-1Y^T) .
The derivatives of the negative log marginal likelihood with respect to the parameters s_f^2, l^2, ψ_ii and ϕ_ij can then be obtained; see <cit.> for the detailed derivation.
For external perception, which is a univariate Gaussian process, we only need to estimate the kernel parameters s_f^2 and l^2.
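As a rough illustration of this estimation step (not taken from the paper), the sketch below fits the SE-kernel hyperparameters of a univariate GP by minimizing the negative log marginal likelihood with SciPy; the training data, noise level and optimizer settings are placeholder assumptions, and the full MV-GPR case would additionally optimize the column-covariance parameters ψ_ii and ϕ_ij.

import numpy as np
from scipy.optimize import minimize

def se_kernel(A, B, s_f2, l):
    # Squared-exponential kernel on 2D inputs.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return s_f2 * np.exp(-0.5 * d2 / l**2)

def neg_log_marginal_likelihood(log_params, X, y, sigma_n2=1e-3):
    # Univariate GP negative log marginal likelihood, log-parameterized for positivity.
    s_f2, l = np.exp(log_params)
    K = se_kernel(X, X, s_f2, l) + sigma_n2 * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return 0.5 * y @ alpha + np.sum(np.log(np.diag(L))) + 0.5 * len(X) * np.log(2 * np.pi)

# Placeholder training data: 2D node positions and observed vegetation depths.
X_train = np.array([[0.0, 0.0], [0.4, 0.1], [0.9, 0.2], [1.3, 0.2]])
y_train = np.array([0.10, 0.12, 0.18, 0.15])

res = minimize(neg_log_marginal_likelihood, x0=np.log([1.0, 1.0]),
               args=(X_train, y_train), method="L-BFGS-B")
s_f2_hat, l_hat = np.exp(res.x)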
§.§.§ Plane Fusion
The vegetation height varies across environments. On grassland, the vegetation is usually short and the point cloud returned by the LiDAR is relatively smooth; in bushes, the vegetation is usually tall and uneven, and the point cloud is rougher; tall trees are considered impassable. As shown in Fig.<ref>, for the Pro-Plane the variance mainly stems from the Euclidean distance, so it estimates the terrain of nearby areas accurately but estimates far terrain poorly; for the EP-Plane, the variance stems from both the distance and the surface condition of the point cloud. In order to accurately estimate the supporting ground in different environments, the variance is used as a weight to fuse the Pro-Plane and the EP-Plane. We define the weight as follows:
w_[·]=σ _EP,[·],*^2/( σ _EP,[·],*^2+σ _Pro,[·],*^2)
where the symbol [·] refers to z, r and p to simplify the formula. Thus the estimate of the S-Plane _S,*={ x_*,y_*,ẑ_S,*,r̂_S,*,p̂_S,*} can be obtained as:
[̂·̂]̂_S,*=w_[·][̂·̂]̂_Pro,*+( 1-w_[·]) [̂·̂]̂_EP,*
When the point cloud in the area where the robot is driving is relatively cluttered, w_z, w_r and w_p will become larger, and the robot will trust proprioception more; otherwise, the robot will trust external perception more, as shown in Fig.<ref>.
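The fusion itself is a convex combination per quantity, weighted by the variances; a minimal sketch with made-up numbers, purely for illustration, reads:

def fuse_planes(pro, ep, var_pro, var_ep):
    # Variance-weighted fusion of Pro-Plane and EP-Plane estimates of (z, r, p).
    fused = {}
    for key in ('z', 'r', 'p'):
        w = var_ep[key] / (var_ep[key] + var_pro[key])   # w -> 1 when the EP estimate is uncertain
        fused[key] = w * pro[key] + (1.0 - w) * ep[key]
    return fused

# Cluttered point cloud => large EP variances => the robot trusts proprioception more.
pro     = {'z': 0.05, 'r': 0.02, 'p': 0.01}
ep      = {'z': 0.20, 'r': 0.10, 'p': 0.05}
var_pro = {'z': 0.01, 'r': 0.01, 'p': 0.01}
var_ep  = {'z': 0.50, 'r': 0.40, 'p': 0.30}
s_plane = fuse_planes(pro, ep, var_pro, var_ep)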
Note that when the vegetation height exceeds the threshold h_crit, it is considered as an obstacle, and the RRT tree will delete the node and nearby nodes to ensure the safety of the robot during driving.
In the process of RRT tree generation, a proper introduction of traversability can improve the safety and stability of the path. When the vehicle is driving, we pay less attention to the road conditions in small areas (pebbles, clods, etc.) and are more concerned with the slope s, the uncertainty ε and the vegetation height h. A robot travelling on terrain with shallow vegetation and a small slope is less likely to slip. The slope s can be obtained from the roll and pitch of the S-Plane:
s=arccos( cos( r̂_S,*) cos( p̂_S,*) )
And ε can be obtained from σ _S,z,*^2, σ _S,r,*^2,σ _S,p,*^2:
ε =σ _S,z,*^2+μ( σ _S,r,*^2+σ _S,p,*^2)
where μ is a constant coefficient. Thus, the traversability τ can be described as:
τ =α_1s/s_crit+α_2ε/ε_crit+α_3h/h_crit
where α_1, α_2, and α_3 are weights which sum to 1. s_crit, ε_crit, and h_crit, which represent the maximum allowable slope, uncertainty, and vegetation height respectively, are critical values beyond which collision or rollover may occur. In PE-RRT*, the cost combines the Euclidean distance d from the parent node with the traversability: Cost=d/( 1-τ). When the RRT tree is expanded, nodes with lower cost are selected first. As the number of sampled points increases, the generated path gradually tends towards the optimum.
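A compact sketch of this cost computation is given below; the weights, critical values and the coefficient μ are placeholder assumptions rather than the values used in the experiments.

import numpy as np

def traversability(r, p, var_z, var_r, var_p, h,
                   mu=0.1, alphas=(0.4, 0.3, 0.3),
                   s_crit=np.radians(30.0), eps_crit=1.0, h_crit=0.3):
    # Traversability tau from the fused S-Plane attitude, its uncertainty and the vegetation height.
    s = np.arccos(np.cos(r) * np.cos(p))       # slope angle of the S-Plane
    eps = var_z + mu * (var_r + var_p)         # aggregated uncertainty
    a1, a2, a3 = alphas                        # weights summing to 1
    return a1 * s / s_crit + a2 * eps / eps_crit + a3 * h / h_crit

def node_cost(dist_to_parent, tau):
    # PE-RRT* cost: Euclidean distance inflated by traversability (nodes with tau >= 1 are pruned).
    return dist_to_parent / (1.0 - tau)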
§ EXPERIMENTS
In real scenarios, we conduct experiments to verify the effectiveness of our work utilizing the physical platform illustrated in Fig.<ref>. Our algorithm runs under the ROS Melodic operating system, generating the global path at 2 Hz and the local path at 10 Hz with the NMPC method using CasADI. The resolution of the global map is set to 2 cm, the radius for plane estimation is 15 cm, and the inflation radius is set to 25 cm. Note that the starting point is the origin of the map, i.e., x_start=[ 0,0 ] ^T. Experiments are conducted in three different scenarios: an untended hillside, a lushly planted garden and a regularly maintained park.
In the first scenario, the target point is set to x_goal=[ 11.5,2.7 ] ^T. The robot traverses an inclined section populated with weeds, with grass heights varying from 0.1 m to 0.2 m, before proceeding to a relatively flat crest area. Finally, the robot climbs uphill and reaches the designated target point. In this scenario, the S-Plane exhibits significant variation in both height and slope. Close to the robot, the uncertainty of proprioception is relatively small, resulting in a higher weight. In regions farther from the robot, the uncertainty of proprioception increases substantially due to the MV-GPR, resulting in a lower weight and a higher reliance on external perception.
Screenshots of our algorithm in the main scenario are shown in Fig.<ref>. The robot chooses to generate the path where the vegetation height is smaller, as shown in Fig.<ref>(a). If the robot detects an obstacle (long red bar), it navigates around the obstacle and continues moving forward, as depicted in Fig.<ref>(b). Once the robot enters a safe area with little grass cover, it engages in longer-range global path planning, preferring gentle slopes of the supporting plane, as illustrated in Fig.<ref>(c). In the end, the robot reaches the target point and stops, as depicted in Fig.<ref>(d).
In order to evaluate the proposed method, we compare it with three baseline approaches:
* PF-RRT*<cit.>: RRT* in which each node fits the plane directly on the point cloud map.
* Pro-RRT*: RRT* in which each node estimates the S-Plane directly based on the Prev-trajectory.
* RRT*+PrevMap: Estimate the S-Plane based on Gaussian Process Regression to generate the previous traversability map. Based on the map, RRT* is used to obtain the global path.
The trajectories generated by each algorithm are shown in different colors in Fig.<ref>. PF-RRT* cannot generate globally optimal paths with respect to traversability because the difference between the S-Plane and the Surf-Plane makes its prediction of the height and slope of the sampled points inaccurate. Pro-RRT* fails and collides with the tree since it cannot use the point cloud information to avoid obstacles. RRT*+PrevMap suffers from several lags because it has to build an explicit traversability map, which is time consuming. Ours efficiently and accurately estimates the height and slope of each node, ensuring the asymptotic optimality of global path generation and smooth obstacle avoidance. Thanks to the precise estimation, our algorithm avoids densely distributed contour lines, which indicate steep slopes, and chooses gentler ones, which is not possible with the other algorithms.
To intuitively compare the performance of different algorithms, we adopt the following indicators to compare the four algorithms:
∙ Path len: length of the path from the start to the end.
∙ Safety deg: minimum distance to the obstacle.
∙ Cons time: time consumed travelling from start to goal.
∙ Comp time: computation time to generate a global path.
∙ Speed dev: speed deviation of the robot, reflecting the stability of the robot during navigation.
And the results of the evaluation are presented in Table <ref>.
It shows that our algorithm can efficiently provide a feasible and safe path for global planning, avoiding collisions as well as regions with high vegetation height. Owing to the combined sampling algorithm, PE-RRT* saves a lot of time compared to the RRT*+PrevMap method.
In order to demonstrate the generalizability of our algorithm, we conduct experiments in other scenarios, as shown in Fig.<ref>.
In the garden scenario, the robot sets off from bare ground, avoids the bushes outside the inflation radius, and finally reaches the target point. In the park scenario, the robot prefers paths on which the grass is shorter while avoiding trees. Here, the height and slope of the support plane are almost constant with occasional small changes, resulting in low uncertainty and a high weight for proprioception. More details are available at [Video: <https://youtu.be/EeZ-JXaiXuw>.].
§ CONCLUSION
This paper proposes a novel path planning method (PE-RRT*) on vegetated terrain for wheeled robots based on a sampling tree and support plane estimation. An inflation radius is also added to the RRT tree to avoid collisions. Proprioception and external perception are fused to generate the support plane, in which MV-GPR is used to predict the roll, pitch and height of the plane. We integrate the PE-RRT*, NMPC and SLAM modules into a complete system for safe autonomous navigation. In addition, we compare our method with three baselines (PF-RRT*, Pro-RRT* and RRT*+PrevMap) in real scenarios and conduct experiments in different scenarios. The experimental results show that our method is safer and more efficient than the other methods in global path planning.
IEEEtran
|
http://arxiv.org/abs/2306.06989v1
|
20230612093449
|
Please, not \textit{another} note about Generalized Inverses
|
[
"Philipp Wacker"
] |
math.CA
|
[
"math.CA",
"math.FA",
"math.PR"
] |
Please, not another note about Generalized Inverses
Philipp Wacker
====================================================
We prove some statements of left- and right-continuous variants of generalized inverses of non-decreasing real functions.
§ INTRODUCTION
This short manuscript fills a few theoretical gaps in the recorded knowledge about generalized inverses (also called quantile functions in the context of probability theory), and corrects a few inaccuracies in the existing literature. While there is a certain overlap with parts of <cit.>, none of these give the full picture of generalized inverses, and there are persistent errors that need rectification. Finally, this note[this is a popular title for communicating results about generalized inverses, see the references section.] presents some (as far as the author is aware) new insights about the exact form of T∘ T^-1 and T^-1∘ T, where T^-1 is a generalized inverse.
Notation: In the following, for a function f:ℝ→ℝ we write f(x+) = lim_ε↘ 0f(x+ε) and f(x-) = lim_ε↘ 0f(x-ε). Similarly, f(-∞) = lim_x→ -∞f(x) and f(∞) = lim_x→ +∞f(x). A non-decreasing function is a map f:ℝ→ℝ such that x<y implies f(x)≤ f(y). We denote by ℝ̄ = ℝ∪{-∞,∞} the set of the extended real numbers.
We start by defining generalized inverses.
Let T:→ be a non-decreasing function where we set T(-∞) = lim_x→-∞T(x) and T(∞)=lim_x→∞T(x). Then the generalized inverses T^+:→ and T^-:→ of T are defined by
T^+(y) = inf{x∈: T(x) > y}
T^-(y) = inf{x∈: T(x) ≥ y}.
with the convention that inf∅ = ∞ and infℝ = -∞.
<cit.> proved that we can equivalently write T^+(y) = sup{x∈ℝ: T(x) ≤ y} and T^-(y) = sup{x∈ℝ: T(x) <y}, as long as we make sure that the domain of T is the whole of ℝ.
T^+ and T^- are the right- and left-continuous generalized inverses of T, in the sense outlined by lemma <ref><ref> below.
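For readers who prefer a computational picture, the following small Python sketch (not part of the manuscript) approximates T^+ and T^- on a finite grid for a step function with a plateau; the example function and grid resolution are arbitrary choices, and the grid infimum only approximates the true one.

import numpy as np

def generalized_inverses(T, y, x_grid):
    # Grid approximations of T^+(y) = inf{x : T(x) > y} and T^-(y) = inf{x : T(x) >= y}
    # for a nondecreasing T evaluated on a fine, sorted grid (inf of the empty set -> +inf).
    Tx = T(x_grid)
    above = x_grid[Tx > y]
    at_or_above = x_grid[Tx >= y]
    T_plus = above[0] if above.size else np.inf
    T_minus = at_or_above[0] if at_or_above.size else np.inf
    return T_plus, T_minus

# A step function with a plateau at height 1 on [0, 1): T^-(1) = 0 while T^+(1) = 1.
T = lambda x: np.where(x < 0, 0.0, np.where(x < 1, 1.0, 2.0))
x_grid = np.linspace(-2.0, 3.0, 500001)
print(generalized_inverses(T, 1.0, x_grid))   # approximately (1.0, 0.0)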
§ USEFUL STATEMENTS FOR WORKING WITH GENERALIZED INVERSES
We follow up with a list of elementary properties of T^+ and T^-. Parts <ref>–<ref> are a generalization of <cit.> to the case of both T^+ and T^-. Part <ref> is proven in <cit.>. Part <ref> is similar to <cit.>; we prove and sharpen all results to cover both the case of T^+ and of T^-. <ref> and <ref> are new and show what we can say if T is left- or right-continuous. Parts <ref> and <ref> correct a mistake in <cit.> (see the remark below), and generalize the statement to handle T^- as well.
Let T:→ be a nondecreasing map.
*
* T^+(y) = -∞ if and only if T(x) > y for all x∈.
* T^+(y) = ∞ if and only if T(x)≤ y for all x∈.
* T^-(y) = -∞ if and only if T(x) ≥ y for all x∈.
* T^-(y) = ∞ if and only if T(x)< y for all x∈.
* T^+ and T^- are nondecreasing.
* T^+ is right-continuous, and T^- is left-continuous. For all y∈,
* T^+(y-) = T^-(y-) = T^-(y)
* T^-(y+) = T^+(y+) = T^+(y)
* T^+ is continuous at y if and only if T^- is continuous at y.
* T^-(y)≤ T^+(y)
* T^-(y) = T^+(y) if and only if Card (T^-1({y})) ≤ 1.
* The following relations hold:
* If y ≤ T(x), then T^-(y) ≤ x. Equivalently,
if x < T^-(y), then T(x) < y.
* If y < T(x), then T^+(y)≤ x. Equivalently,
if x < T^+(y), then T(x) ≤ y.
* T^-(T(x))≤ x
* If y > T(x), then T^-(y) ≥ x. Equivalently,
if x > T^-(y), then T(x)≥ y.
* If y ≥ T(x), then T^+(y) ≥ x. Equivalently,
if x > T^+(y), then T(x) > y.
* T^+(T(x))≥ x.
* If T^+(T(x)) = T^-(T(x)), then T^+(T(x)) = T^-(T(x)) = x
* T(T^+(y)-) ≤ y
* Let T be right-continuous at x. Then the following relations hold:
* If y>T(x), then T^-(y) > x. Equivalently,
if x≥ T^-(y), then T(x)≥ y.
* If y>T(x), then T^+(y) > x. Equivalently,
if x≥ T^+(y), then T(x)≥ y.
* y ≤ T(x) if and only if T^-(y)≤ x.
* If T is right-continuous at x=T^+(y), then T(T^+(y)) ≥ y. If T is right-continuous at x=T^-(y), then T(T^-(y))≥ y.
* Let T be left-continuous at x. Then the following relations hold:
* If y<T(x), then T^-(y) < x. Equivalently,
if x≤ T^-(y), then T(x)≤ y.
* If y<T(x), then T^+(y) < x. Equivalently,
if x≤ T^+(y), then T(x)≤ y.
* y ≥ T(x) if and only if T^+(y)≥ x.
* If T is left-continuous at x=T^+(y), then T(T^+(y)) ≤ y. If T is left-continuous at x=T^-(y), then T(T^-(y))≤ y.
* If T is continuous at T^+(y), then T(T^+(y)) = y. If T is continuous at T^-(y), then T(T^-(y)) = y.
* If T is constant on an interval I=(x_1, x_2), then for all x∈ I we have T^+(T(x)) > x > T^-(T(x)).
* We define the left-continuous and right-continuous versions T_l(x):=T(x-) and T_r(x):=T(x+) of T. Then T_l^+ = T_r^+ as well as T_l^-=T_r^-.
<ref>, <ref>, <ref>, <ref> and <ref> follow immediately from elementary properties of the infimum as well as the monotonicity of T. We just prove part of <ref> for illustration: Let A_0 = {x:T(x)>y} and A_ε = {x:T(x)>y+ε}. Then A_0 = ⋃_ε>0A_ε and thus
T^+(y) = inf A_0 = inf_ε>0inf A_ε = inf_ε>0T^+(y+ε) = lim_ε↘ 0T^+(y+ε).
Regarding <ref>:
<ref> Follows directly from definition and the infimum: Let T(x) ≥ y, then x∈ A:= {ξ∈: T(ξ) ≥ y}, i.e. T^-(y) = inf A ≤ x.
<ref> Follows directly from definition and the infimum: Let T(x) > y, then x ∈ A:= {ξ∈: T(ξ) > y}, i.e. T^+(y) = inf A ≤ x
<ref> This follows from <ref>, by setting y=T(x).
<ref> We assume that y > T(x). Thus for any ξ∈ with the property that T(ξ)≥ y, we have T(ξ)>T(x), i.e. ξ > x by monotonicity of T. Since A⊂ B implies inf A ≥inf B, this shows T^-(y) = inf{ξ∈: T(ξ) ≥ y}≥inf{ξ∈: ξ > x} = x.
<ref> We assume that y≥ T(x). Thus for any ξ∈ with the property that T(ξ)>y, we have T(ξ)>T(x), i.e. ξ > x by monotonicity of T. Since A⊂ B implies inf A ≥inf B, this shows T^+(y) = inf{ξ∈: T(ξ) > y}≥inf{ξ∈: ξ > x} = x.
<ref> This follows from <ref>, by setting y=T(x)
<ref> Follows from <ref> and <ref> .
<ref> Clearly, x:=T^+(y)-ε < T^+(y) for any ε>0, thus by <ref>, T(x) = T(T^+(y)-ε) ≤ y, which shows the statement via ε→ 0.
Regarding <ref>:
<ref> We prove the equivalent characterization: Let x≥ T^-(y). We choose a sequence ξ_n↘ x, with ξ_n > x, hence ξ_n > T^-(y), and thus T(ξ_n) ≥ y (by <ref>). Using right continuity of T, we see that T(x) = lim_n T(ξ_n)≥ y.
<ref> This follows from <ref> and <ref>
<ref> is a direct implication of <ref> and <ref>
<ref> follows from <ref> and <ref> by setting x=T^-(y) and T^+(y), respectively.
<ref> is proven quite similarly to <ref>.
<ref> is proven quite similarly to <ref>.
Regarding <ref>: Assuming continuity of T at T^+(y) or T^-(y), respectively, the statement follows from an application of <Ref> and <Ref>.
<ref> is proven as follows. Since T(x)=y for all x∈ (x_1,x_2), <ref> and <ref> imply that T^-(y) ≤ x for all x∈ I as well as T^+(y) ≥ x for all x∈ I, i.e. by taking the limits x→ x_1 and x→ x_2, we have T^-(y)≤ x_1 and T^+(y) ≥ x_2. Thus, for any x∈ I, we get T^+(T(x)) = T^+(y) ≥ x_2 > x > x_1 ≥ T^-(y) = T^-(T(x)).
Lastly we prove <ref>. Let T be an arbitrary non-decreasing function, and let T_l, T_r be as above. We fix an arbitrary y and set M_l={x:T(x-) > y} and M_r={x: T(x+)>y}, and thus T_l^+(y) = inf M_l and T_r^+(y) = inf M_r. By elementary inclusion M_l⊆ M_r, i.e. T_l^+(y)≥ T_r^+(y). It remains to show the opposite inequality. If M_l = M_r, then the statement follows directly. Otherwise, let x^⋆∈ M_r∖ M_l, which means that T(x^⋆-) ≤ y and T(x^⋆+)>y. We will now show that x^⋆ is a lower bound for M_r. Indeed, let x < x^⋆; then T(x+)≤ T(x^⋆-)≤ y, i.e. x∉M_r. Since M_l⊆ M_r, this also shows that x^⋆ is a lower bound for M_l. Next we show that x^⋆ is a limit point of M_l. Indeed, for any ε>0 we have T((x^⋆ + ε)-)≥ T(x^⋆+)> y, i.e. x_ε := x^⋆ + ε∈ M_l for all ε>0, and x_ε↘ x^⋆. By inclusion, x^⋆ is a limit point of M_r as well. x^⋆ being a lower bound and a limit point proves that x^⋆ = inf M_l and x^⋆ = inf M_r, i.e. T_l^+(y) = x^⋆ = T_r^+(y). The version with T^- is proven analogously.
<cit.> comments on the fact that there are errors in previous work on generalized inverses and constructs a series of four statements in published manuscripts with counterexamples for why they are wrong, but without providing a way of resolving these contradictions. This manuscript does: <ref>, <ref>, and <ref> give the exact conditions for what we can say about T∘ T^+ as well as T∘ T^-. In particular, the counterexamples from <cit.> do not apply here, since ∞ is not a point of (right- or left-)continuity of T (it does not even make sense to think about this).
There is one particular application that is especially interesting in practice, which is the inverse sampling method for univariate random variables. This is a well-known fact, but it is an elementary direct result of the previous lemma.
Let (Ω,ℬ, ℙ) be a probability space and X:Ω→ a random variable with F_X being its cumulative distribution function. Then the push-forward of a uniform random variable U([0,1]) under the generalized inverse F_X^- is the law of X. This means that we can generate independent samples x_i∼ U([0,1]), plug them into F_X^-, and the {F_X^-(x_i)} will be samples from X.
Since F_X is right-continuous, we know (from <Ref><ref>) that y ≤ F_X(x) if and only if F_X^-(y) ≤ x. We compute the cumulative distribution function of F_X^-(U):
ℙ(F_X^-(U)≤λ) = ℙ(U≤ F_X(λ)) = F_X(λ),
since the cumulative distribution function of U is given by ℙ(U≤ r) = r (for r∈ [0,1]). This means that the law of F_X^-(U) is identical to the law of X.
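As a purely numerical illustration of this proposition (not contained in the manuscript), one can sample from an exponential distribution by pushing uniform samples through the generalized inverse of its cumulative distribution function:

import numpy as np

rng = np.random.default_rng(0)

# Exponential(1): F_X(x) = 1 - exp(-x) for x >= 0, with generalized inverse
# F_X^-(u) = inf{x : F_X(x) >= u} = -log(1 - u) for u in (0, 1).
u = rng.uniform(size=100_000)
samples = -np.log1p(-u)

# Both the empirical mean and variance should be close to 1.
print(samples.mean(), samples.var())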
§ JUMPS AND PLATEAUS
The following two lemmata are an adaptation of <cit.>, and can also be found in <cit.>, but because the former is concerned with T^- instead of T^+, discusses “third order terms” like T(T^+(T(x))) > T(x) instead of “second order terms” like T^+(T(x)) > x, and does not prove maximality of the half-open intervals involved, and the latter has a typo (and refers to the proof to the former instead of providing a direct proof), we give a proof of this statement for completeness' sake. Additionally, we fix an error in the literature (see remark <ref> below).
The following statements relate plateaus and jumps of T and T^± to one another. For a visualization of these connections, see <cit.>.
Let T be nondecreasing.
* If
T^+(y) > T^-(y)
then for any x∈ (T^-(y),T^+(y)) = I, both
T(x) = y and
T^+(T(x)) > x > T^-(T(x))
and there is no greater interval than I of the same type such that (<ref>) holds.
* Conversely, if either
* there is a proper interval I = (x_1,x_2) such that for all x_0∈ I, T(x_0) = y or
* for some x_0, we have T^+(T(x_0)) > x_0, then with y := T(x_0), or
* for some x_0, we have T^-(T(x_0)) < x_0, then with y := T(x_0),
then
T^+(y) > T^-(y)
* For any given x, the following two statements are equivalent:
* T≡ y on a proper interval (x_1,x_2).
* T^+(y) > T^-(y).
We start by proving <ref>. Let x∈ (T^-(y),T^+(y)), then T^-(y)< x < T^+(y), i.e. by an application of <Ref><ref> and <ref>, y ≤ T(x) ≤ y, which shows that T is indeed constant on the interval (T^-(y),T^+(y)). Now we show maximality. If x > T^+(y), then T(x) > y by <Ref><ref>. Similarly, if x < T^-(y), then T(x) < y by <ref>. This shows that there is no larger half-open interval [a,b) on which T≡ y. The relation (<ref>) is a direct implication of (<ref>) (which we just proved to be true) and lemma <Ref><ref>.
Regarding <ref>: We prove that <ref> implies (<ref>). This is a direct consequence of <Ref><ref>:
Now <ref> also implies (<ref>): By <Ref><ref>, T^-(T(x_0)) ≤ x_0 < T^+(T(x_0)). Similarly for <ref> (via <Ref><ref>).
<ref> follows from a combination from the two other statements.
Let T be nondecreasing.
* If
T(x+) > T(x-)
then for any y∈ (T(x-),T(x+)) = I, both
T^+(y) = x = T^-(y)
and there is no greater interval than I of the same type such that either equality in (<ref>) holds.
* Conversely, if there is a proper interval I = (y_1,y_2) such that either
* for all y∈ I, T^+(y) = x, or
* for all y∈ I, T^-(y) = x, then
T(x+) > T(x-)
* For any given x, the following two statements are equivalent:
* T(x+) > T(x-).
* T^+≡ y ≡ T^- on a proper interval (y_1,y_2).
We start by proving <ref>. Let T(x+) > T(x-). Then for any y∈ (T(x-),T(x+)) we have, for any ε> 0, T(x-ε) < y < T(x+ε), i.e. (using again the relevant statements in <ref>) x-ε≤ T^-(y) and T^+(y)≤ x+ε. By letting ε→ 0, we obtain the statement. Maximality is proven similarly: Take y > T(x+); then there exists ε>0 such that T(x+ε) < y, and thus T^-(y) ≥ x+ε, which shows that y is not an element of the set on which T^-≡ x. Since T^+ ≥ T^-, this also proves that y is not an element of the set on which T^+≡ x. Maximality from below is proven in the same way.
We now prove <ref>, assuming <ref>, i.e. T^+(y) ≡ x on (y_1,y_2). For any ε> 0, T^+(y) < x+ε, i.e. T(x+ε) > y for all y∈ (y_1,y_2). Thus, T(x+ε) ≥ y_2 for every ε>0, and hence T(x+) ≥ y_2. In the same way we prove T(x-) ≤ y_1: For any ε> 0, x-ε < T^-(y), i.e. T(x-ε) < y for all y∈ (y_1,y_2). Thus, T(x-ε)≤ y_1 for every ε>0, and hence T(x-)≤ y_1. All in all, this proves the statement since T(x-) ≤ y_1 < y_2≤ T(x+). The statement <ref> ⇒ (<ref>) is proven in a similar fashion.
<ref> follows from a combination from the two other statements.
<Ref><ref> and <Ref><ref> are a correction of <cit.>. In fact, it is not true that T(T^+(y)) > T(T^+(y)-) implies T(T^+(y)) > y or that T^+(T(x)) > T^+(T(x)-) implies T^+(T(x)) > x, as Figure <ref> shows: We set x=x_2. Then T(x)=y, and T^+(T(x)) = T^+(y) = x_2 = x, i.e. the first condition in <cit.> holds. But, T(x)=y∈ H(T), since y is a plateau of T. This is in contradiction to the statement of <cit.>.
§ INVERSION STATEMENTS
The remaining statements classify exactly what can be said about T∘ T^± and T^±∘ T under suitable continuity assumptions.
Let X = {x_i} be the (ordered) list of all discontinuities of T, where we denote y_i^+ = T(x_i+) and y_i^- = T(x_i-). Then
T(T^+(y)) = T(x_i), for y∈(y_i^-,y_i^+)
y, for y ∉⋃_i [y_i^-,y_i^+]
T(T^-(y)) = T(x_i), for y∈(y_i^-,y_i^+)
y, for y ∉⋃_i [y_i^-,y_i^+]
Let Y = {y_i} be the (ordered) list of all discontinuities of T^±, where we denote x_i^+ = T^+(y_i) and x_i^- = T^-(y_i). Then
T^+(T(x)) = x_i^+=T^+(y_i), for x∈(x_i^-,x_i^+)
x, for x ∉⋃_i [x_i^-,x_i^+]
T^-(T(x)) = x_i^-=T^-(y_i), for x∈(x_i^-,x_i^+)
x, for x ∉⋃_i [x_i^-,x_i^+]
We show the characterization for T∘ T^+: Let first y∈ (y_i^-,y_i^+). Then <Ref><Ref> proves T^+(y) = x_i = T^-(y), i.e. T(T^+(y)) = T(x_i) = T(T^-(y)). If y∉⋃_i [y_i^-,y_i^+], then T^+(y)∉X, or otherwise T^+(y) = x_j for some j, which would be in contradiction to the maximality of the set (y_j^-, y_j^+) in <Ref><Ref>. Similarly, T^-(y)∉X. Thus T is continuous at T^+(y) and at T^-(y), i.e. T(T^±(y)) = y by virtue of <Ref><Ref>.
Regarding T^+∘ T: If i < j, then y_i < y_j and thus x_i^+ = T^+(y_i) ≤ T^-(y_j) = x_j^-, so the intervals (x_i^-,x_i^+) are disjoint from one another. Let x∈(x_i^-,x_i^+) = (T^-(y_i),T^+(y_i)). Then by <Ref><ref>, T^+(T(x))=T^+(y_i) = x_i^+ and T^-(T(x)) = T^-(y_i) = x_i^-. On the other hand, let x∉⋃_i [x_i^-,x_i^+]. Then T(x)∉Y, because otherwise T(x)=y_j for some j, and then (x_j^-,x_j^+)= (T^-(y_j),T^+(y_j)) would not be the greatest interval possible, in contradiction to <Ref><ref>. This means that T^+ is continuous at T(x), and thus (because T^- and T^+ are left- and right-continuous versions of one another, see <Ref><ref>) T^+(T(x)) = T^-(T(x)). By <Ref><ref>, T^+(T(x)) = x = T^-(T(x)).
Note that there is no statement about the edge cases T(T^±(y_i^±)) and T^±(T(x_i^±)). In particular, while T∘ T^+ = T∘ T^- on the two sets considered in the statement of <Ref>, it is entirely possible that, e.g., T(T^+(y_i^+))≠ T(T^-(y_i^+)). The remaining values depend on the type of continuity of T at those edge points. Assuming global (left- or right-)continuity of T allows us to precisely characterize the invertibility interaction between T and T^±, and close the gaps in Lemma <ref>.
Let T be nondecreasing and continuous from the right. We denote by X = {x_i} the (ordered) list of all discontinuities of T, i.e. y_i^+ := T(x_i)>T(x_i-) =: y_i^- and T(x) = T(x-) for x∉X. We denote by Y = {y_i} the (ordered) list of plateaus of T^±, i.e. for each y_i there exists a proper (maximal in the set of half-open intervals) interval I_i = [x_i^-, x_i^+) such that T(x)≡ y_i for all x∈ I_i. Then
T(T^+(y)) = y_i^+, for y∈[y_i^-,y_i^+)
y, else
T^+(T(x)) = x_i^+, for x∈[x_i^-,x_i^+)
x, else
Let T be nondecreasing and continuous from the left. We denote by X = {x_i} the (ordered) list of all discontinuities of T, i.e. y_i^+ := T(x_i+)>T(x_i) =: y_i^- and T(x+) = T(x) for x∉X. We denote by Y = {y_i} the (ordered) list of plateaus of T^±, i.e. for each y_i there exists a proper (maximal in the set of half-open intervals) interval I_i = (x_i^-, x_i^+] such that T(x)≡ y_i for all x∈ I_i. Then
T(T^-(y)) = y_i^-, for y∈(y_i^-,y_i^+]
y, else
T^-(T(x)) = x_i^-, for x∈(x_i^-,x_i^+]
x, else
This follows directly from the fact that the concatenation of right-continuous, nondecreasing functions is again right-continuous and non-decreasing (and similarly for left-continuous functions), so we can fill the gaps in our knowledge of, say, T∘ T^+ by taking limits from the right, etc.
§ CONCLUSION
This manuscript tries to unify, organize, generalize, and correct some statements about generalized inverses. Since this is just the latest work in a long succession of notes claiming to do just that, we close with only cautious optimism of having done so successfully.
|
http://arxiv.org/abs/2306.12476v1
|
20230621180002
|
Islands and dynamics at the interface
|
[
"Mir Afrasiar",
"Debarshi Basu",
"Ashish Chandra",
"Vinayak Raj",
"Gautam Sengupta"
] |
hep-th
|
[
"hep-th"
] |
Department of Physics,
Indian Institute of Technology,
Kanpur 208 016, India
[email protected]
[email protected]
[email protected]
[email protected]
[email protected]
We investigate a family of models described by two holographic CFT_2s coupled along a shared interface. The bulk dual geometry consists of two AdS_3 spacetimes truncated by a shared Karch-Randall end-of-the-world (EOW) brane. A lower dimensional effective model comprising of JT gravity coupled to two flat CFT_2 baths is subsequently realized by considering small fluctuations on the EOW brane and implementing a partial Randall-Sundrum reduction where the transverse fluctuations of the EOW brane are identified as the dilaton field. We compute the generalized entanglement entropy for bipartite states through the island prescription in the effective lower dimensional picture and obtain precise agreement in the limit of large brane tension with the corresponding doubly holographic computations in the bulk geometry. Furthermore, we obtain the corresponding Page curves for the Hawking radiation in this JT braneworld.
Islands and dynamics at the interface
Mir Afrasiar, Debarshi Basu, Ashish Chandra, Vinayak Raj and Gautam Sengupta
July 31, 2023
=====================================
§ INTRODUCTION
In recent years the remarkable progress towards a possible resolution of the black hole information loss paradox in toy models has garnered intense research focus. This development involved the inclusion of bulk regions termed “islands” in the entanglement wedge of subsystems in radiation baths at late times <cit.>. The appearance of these islands ensures that the von Neumann entropy of the Hawking radiation follows the Page curve <cit.>. The crucial ingredient in this island formalism is the incorporation of replica wormhole saddles, dominant at late times, in the gravitational path integral for the Rényi entanglement entropy. A more natural way to understand this island formalism was provided through the doubly holographic description <cit.> in which the radiation baths are described by holographic CFTs dual to a bulk geometry. The island formula then emerges from the standard holographic characterization of the entanglement entropy through the (H)RT prescription in the corresponding higher dimensional bulk geometry.
The above doubly holographic interpretation of the island formula was explored in <cit.> in the context of an extension of the AdS_3/BCFT_2 <cit.> duality through the inclusion of additional defect conformal matter on the end-of-the-world (EOW) brane. In this defect AdS_3/BCFT_2 framework the equivalence of the quantum corrected RT formula, termed as the defect extremal surface formula, in the 3d bulk geometry with the corresponding island formula in the lower dimensional effective 2d description could be demonstrated. This doubly holographic description in the defect AdS_3/BCFT_2 framework was further investigated for different mixed state entanglement measures <cit.>.
In relation to the above discussion, the Jackiw-Teitelboim (JT) gravity <cit.> coupled to a radiation bath in two dimensions has proved to be an interesting solvable model to study the application of the island formula <cit.>. Recently in <cit.>, the authors have realised this setup through a dimensional reduction of a defect AdS_3 bulk with small transverse fluctuations on the EOW brane.[The authors in <cit.> had also obtained the JT gravity from Karch-Randall branes in the context of wedge holography, through a similar prescription.] [Note that, the authors in <cit.> had investigated a related prescription to obtain JT black holes through a similar partial dimensional reduction starting from AdS_3 geometries.] In particular, they have derived the full JT gravity action through a partial dimensional reduction of the 3d bulk wedge sandwiched between a virtual zero tension brane and the finite tension EOW brane, by identifying the transverse fluctuations with the dilaton field in the 2d effective description. Usual AdS_3/CFT_2 prescription has been utilized in the remaining part of the bulk to obtain the bath CFT_2 on the asymptotic boundary of the AdS_3 geometries. This has provided a 3d holographic dual for the JT gravity coupled to a CFT_2 bath.
On a separate note, the doubly holographic description of the island formula has been further investigated in <cit.> for an interface CFT (ICFT_2) where two CFT_2s on half lines with different central charges were considered to be communicating through a common quantum dot. The holographic bulk for such a field theoretic configuration is described by two truncated AdS_3 geometries with different length scales, sewed together along the constant tension EOW brane. In such a configuration, the equivalence between the island formula and the holographic entanglement entropy has been illustrated for certain bipartite states.
In this context, lifting the constraint of rigidity of the EOW brane in the ICFT_2 setup could lead to interesting physics. In the present article we investigate this configuration by introducing transverse fluctuations on the EOW brane to obtain JT gravity through a partial dimensional reduction. In the lower dimensional 2d effective perspective, this configuration is described by a JT black hole coupled to two CFT_2 baths termed CFT^I and CFT^II. In particular, we have two separate CFT_2s in the JT background, interacting through gravity, whereas they remain decoupled on the remaining half lines with fixed geometry. An alternative description of this configuration involves the consideration of the ICFT_2 as a holographic dual of the 3d bulk where the interface degrees of freedom may now be interpreted as an SYK quantum dot. Naturally the three perspectives described above constitute a doubly holographic description for this model of JT gravity coupled to two CFT baths.
We investigate the entanglement entropy of various bipartite states in this model, described by subsystems in the two CFT baths coupled to JT gravity at zero and finite temperatures. Interestingly, for our model we encounter certain novel island configurations absent in the earlier analyses with a single bath <cit.>. Specifically, we demonstrate that there are island contributions from both CFT^I and CFT^II even when the subsystem in question involves only bath degrees of freedom from CFT^II, which leads to a modification of the standard island formula involving these induced islands. Our results for the entanglement entropy obtained through the above modified island formula in the 2d effective picture in the large central charge limit exactly reproduce the 3d bulk computations in the doubly holographic perspective.
As a significant consistency check, we perform a replica wormhole computation for one of the configurations considered above to obtain the position of the conical singularity situated at the boundary of the island region. To this end, we employ the well known conformal welding problem <cit.> to define a coordinate system consistently spanning the complete hybrid manifold consisting of a gravitational part and two non-gravitating baths. Solving this welding problem reproduces the location of the quantum extremal surface obtained through the extremization of the generalized entropy. It is worth emphasising here that the recovery of the quantum extremal surface from the solution of this welding problem does not assume any holography providing a significant non-trivial consistency check of the island formula for this setup.
The rest of the article is organized as follows. In <ref>, we review the basic ingredients of our model, namely, the mechanism of partial dimensional reduction for a defect AdS_3 bulk to obtain the JT gravity coupled to a radiation bath, and the salient features of the ICFT_2 and its holographic dual. Subsequently, in <ref> we derive the JT gravity coupled to two radiation baths through a partial dimensional reduction of two truncated AdS_3 geometries sewed together along a fluctuating EOW brane. Furthermore, we provide a prescription for the modified island formula in such CFT models. In <ref>, we perform the computation for the entanglement entropy for certain configurations in the 2d effective description at zero temperature involving extremal JT black holes. Subsequently, in <ref>, the computation of the entanglement entropy for subsystems at finite temperature which involve eternal JT black holes is performed. In <ref>, we provide the replica wormhole computation for a simple configuration considered earlier and show perfect matching with the island result. Finally in <ref> we summarize our results and present conclusions.
§ REVIEW OF EARLIER LITERATURE
§.§ JT gravity through dimensional reduction
In this subsection we review the mechanism to obtain JT gravity through the dimensional reduction of an AdS_3 geometry truncated by a fluctuating EOW brane <cit.>. For this purpose, consider the defect AdS_3/BCFT_2 scenario where additional degrees-of-freedom are incorporated at the boundary of the BCFT_2 which results in the introduction of defect conformal matter on the EOW brane truncating the AdS_3 spacetime. The gravitational action on the dual bulk manifold 𝒩 to such a defect BCFT_2 defined on the half line x ≥ 0 is given by <cit.>
I = 1/16 π G_N∫_𝒩d^3x √(-g) (R - 2 Λ) + 1/8 π G_N∫_ℚd^2 y√(-h) K + I^CFT_ℚ ,
where h_ab is the induced metric and K is the trace of the extrinsic curvature K_ab of the EOW brane denoted as ℚ. The Neumann boundary condition describing the embedding of the EOW brane ℚ with the defect conformal matter is given by
K_ab - h_ab K = 8 π G_N T_ab ,
where T_ab = - 2/√(-h)δ I^CFT_ℚ/δ h^ab is the stress energy tensor for the defect CFT_2. The authors in <cit.> considered the matter action to be of the specific form given by
I^CFT_ℚ = - 1/8 π G_N∫_ℚd^2 y √(-h) T ,
where T denotes the brane tension.
A convenient set of coordinates to describe the 3d bulk geometry are (t, ρ, y) for which the AdS_3 spacetime is foliated by AdS_2 slices and the metric is given by
ds^2 = dρ^2 + L^2 cosh^2 ( ρ/L) -d t^2 + d y^2/y^2 ,
where L is the AdS_3 radius. The constant tension T of the brane in these coordinates may then be obtained to be
T = tanhρ_0/L/L
where ρ_0 is the location of the brane ℚ.
The EOW brane ℚ is now made dynamical by introducing a coordinate dependent perturbation of the form <cit.>
ρ = ρ_0 + ρ̃ ,
where ρ̃ is a small fluctuation such that ρ̃/ρ_0≪ 1. For the specific form of the metric in <ref>, it is possible to integrate out the ρ direction in the bulk for the wedge region 𝒩_1 + Ñ as shown in <ref>. Dimensional reduction for the region 𝒩_2 in the ρ direction will give the original CFT_2 on the asymptotic boundary of the AdS_3 bulk through the usual AdS/CFT correspondence.
Now performing this partial dimensional reduction for the bulk gravitational action given in <ref> for the wedge region 𝒩_1 + Ñ, one may obtain the action for the 2d effective theory as follows <cit.>
I_2d = ρ_0/16 π G_N∫_ℚd^2 y √(-g^(2)) R^(2) + ρ_0/16 π G_N∫_ℚd^2 y √(-g^(2))ρ̃/ρ_0( R^(2) + 2/L^2 cosh^2 ( ρ_0/L) ) + … ,
where g_ab^(2) describes the AdS_2 metric with the length scale L cosh( ρ_0/L), R^(2) is the scalar curvature corresponding to g_ab^(2) and ellipsis denote 𝒪( ρ̃^2/ρ_0^2) terms in the perturbative expansion. The 3d Newton's constant G_N is related to that in the 2d effective theory G_N^(2) as follows <cit.>
1/G_N^(2) = ρ_0/G_N .
It should be noted here that in <ref>, the tension of the EOW brane is considered to be the same as in <ref> as the fluctuation (<ref>) only changes the tension up to 𝒪( ρ̃^3/ρ_0^3). Remarkably, <ref> describes the JT gravity action modulo certain boundary terms[For the recovery of the complete JT action (including the boundary term), see <cit.>.] on identification of ρ̃/ρ_0 with the dilaton field. This provides us with a mechanism for obtaining JT gravity as a 2d effective theory through the partial dimensional reduction of an AdS_3 geometry with a fluctuating EOW brane.
§.§ Interface CFT
In this subsection, we review a class of interface CFT_2s (ICFT_2s) introduced in <cit.>. Their construction involves two CFT_2s defined on half lines coupled through a quantum dot. The bulk dual for such a theory is described by two locally AdS_3 geometries separated by a permeable EOW brane. The two CFT_2s located at the asymptotic boundary of the AdS_3 geometries are labelled as CFT^I and CFT^II with central charges c_I and c_II respectively, and the corresponding dual bulk locally AdS_3 geometries are labelled as AdS^I and AdS^II with length scales L_I and L_II respectively. In the semi-classical approximation, there is also an intermediate 2d effective perspective to describe this configuration, which may be obtained by integrating out bulk degrees of freedom. This results in the brane being characterized by a weakly gravitating system coupled to the original CFT^I,IIs. This 2d effective perspective will be discussed in detail in <ref> in the context of the JT gravity on the EOW brane.
The action for the dual bulk geometry describing the above configuration is given by <cit.>
I = 1/16 π G_N[ ∫_ℬ_Id^3x √(-g_I)(R_I + 2/L_I^2) + ∫_ℬ_IId^3x √(-g_II)(R_II + 2/L_II^2) ]
+ 1/8 π G_N[ ∫_Σd^2 y √(-h) ( K_I - K_II ) -2 T ∫_Σd^2 y √(-h)] ,
where h_ab is the induced metric and T is the tension of the EOW brane Σ. The relative minus sign between the two extrinsic curvatures K_I,II is due to the fact that the outward normal is always taken to be pointing from the AdS^I to the AdS^II geometry. The properties of the EOW brane is fixed by requiring it to satisfy certain junction conditions. The first of these demands that the induced metric h_ab on the brane be the same as viewed from either of the two AdS_3^I,II geometries. The second is the Israel junction condition for the brane with the two AdS_3^I,II geometries on either side which may be expressed as <cit.>
( K_I,ab - K_II,ab) - h_ab( K_I - K_II) = -T h_ab .
Solving these junction conditions will require us to specify the coordinate system describing the 3d geometry. To this end, the AdS_2 foliation of the AdS_3 geometry is chosen again on each patch of the spacetime ℬ_I,II as follows[Here ρ is a hyperbolic angular coordinate which can be related to the usual angle χ as follows
tanh(ρ_k/L_k)≡sinχ_k .
In the rest of the article, the location of the brane at ρ_k^0 in this coordinate is represented by χ_k = ψ_k.]
ds_ℬ_k^2 = dρ_k^2 + L_k^2 cosh^2 ( ρ_k/L_k) h̃_ab d y^a d y^b
≡dρ_k^2 + L_k^2 cosh^2 ( ρ_k/L_k) -d t_k^2 + d y_k^2/y_k^2 , k = I,II .
Here h̃_ab describes the usual Poincaré AdS_2 metric with unit radius. In these coordinates, the EOW brane is considered to be located at ρ_k = ρ_k^0 for k = I,II. The first junction condition thus implies the identification of y_k and t_k for both the coordinate patches. Additionally it also enforces the two AdS_2 radii to be the same i.e.,
L_I^2 cosh^2 ( ρ_I^0/L_I) = L_II^2 cosh^2 ( ρ_II^0/L_II) .
The solution to the second junction condition (<ref>) fixes the position of the EOW brane as follows <cit.>
tanh( ρ^0_I/L_I) = L_I/2 T( T^2 + 1/L_I^2 - 1/L_II^2) , tanh( ρ^0_II/L_II) = L_II/2 T( T^2 - 1/L_I^2 + 1/L_II^2) .
Notice from the above that the tension T of the brane has an upper as well as a lower bound. In the large tension limit described by
T → T_max = 1/L_I + 1/L_II ,
the EOW brane approaches the extended asymptotic boundary of both the AdS_k patches. In this limit, integrating out the bulk degrees of freedom on either side results in the two CFT_2s interacting through the weakly gravitating brane. This is the intermediate 2d effective scenario mentioned earlier which will be discussed in detail in the following section in the context of the JT gravity on the EOW brane.
§ REALISING JT GRAVITY AT THE INTERFACE OF TWO SPACETIMES
In this section, we employ a combination of a partial Randall-Sundrum reduction and the usual AdS/CFT correspondence <cit.> to the AdS/ICFT setup described in the preceding subsection, while allowing for small transverse fluctuations of the EOW brane Σ. This procedure results in a two dimensional effective theory comprising of the JT gravity on the EOW brane Σ coupled to two non-gravitating bath CFT_2s. The gravity theory on the brane is obtained by integrating out the bulk AdS_3 geometry near the brane and may be thought of as the “bulk dual” of the interface degrees of freedom. On introducing the transverse fluctuations the locations of the EOW brane is described as follows
Σ : ρ_I=ρ^0_I-ρ̃_I(y)
ρ_II=ρ^0_II+ρ̃_II(y) .
The schematics of the setup is depicted in <ref>. In the above equation, ρ̃_k(y)≪ρ^0_k are the small transverse fluctuations away from the brane angle ρ^0_k. Note that the fluctuation modes are functions of the braneworld coordinates y and are treated as fields on the braneworld, as described in <cit.>. As depicted in <ref>, we may divide the two AdS_3 geometries on either side of the EOW brane, into the wedges W_k^(1) and W_k^(2). With the fluctuations of the brane turned on, the wedge W_II^(1) is extended further to include the small wedge region[Note that the fluctuations of the EOW brane are completely arbitrary in this setting and may as well excise a portion of the wedge W_II^(1) from the AdS_3^II instead.] W̃ which is excised out of the wedge region W_I^(1). Note that, in this setup the AdS_3 spacetimes on either sides of the brane are composed of several wedges as follows
ℬ_I =W_I^(1)+W_I^(2)-W̃ ,
ℬ_II =W_II^(1)+W_II^(2)+W̃ .
We now employ the partial dimensional reduction in the wedge regions W_I^(1)-W̃ in the AdS_3^I and W_II^(1)+W̃ in the AdS_3^II geometries by integrating out the bulk AdS_3 degrees of freedom in the ρ_I,II direction(s). On the other hand, in the wedges W_k^(2), we utilize the standard AdS_3/CFT_2 correspondence which leads to flat non-gravitating CFT_2s on the half lines stretching out from the interface. It is important to note here that in order to perform a perturbative analysis in (ρ̃_k/ρ_k^0), it is required to keep ρ_k^0 large. This restricts us to the large tension regime T→ T_max of the AdS/ICFT setup as advocated in <cit.>. In this limit, the EOW brane Σ is pushed towards the asymptotic boundaries of each AdS_3 geometry and hence the AdS_3 isometries are reminiscent of the conformal transformations on the brane. Therefore, it is natural to expect a gravitational theory coupled to the two bath CFT_2s to emerge in the lower dimensional effective description obtained from the partial dimensional reduction. In the following, we investigate the nature of this gravitational theory by explicitly integrating out the bulk AdS_3 geometries.
The three-dimensional bulk Ricci scalars are related to the 2d Ricci scalar R^(2) on the brane Σ as follows
√(-g_k)R_k=√(-g^(2))[R^(2)-2(3 cosh^2(ρ_k/L_k)-1)/L_k^2cosh^2(ρ_k^0/L_k)] ,
where we have utilized the metric in <ref> with
g^(2)_ab = L_k^2 cosh^2 ( ρ_k/L_k) h̃_ab .
Integrating the 3d bulk Einstein-Hilbert actions (cf. <ref>) of the AdS_3^I,II regions inside the wedges W_I^(1)-W̃ and W_II^(1)+W̃ leads to
1/16π G_N∫_W_I^(1)-W̃d^3x√(-g_I)(R_I + 2/L_I^2)+1/16π G_N∫_W_II^(1)+W̃d^3x√(-g_II)(R_II + 2/L_II^2)
=1/16π G_N∫_Σd^2 y√(-g^(2))[(ρ^0_I-ρ̃_I(y))R^(2)-sinh(2ρ^0_I-2ρ̃_I(y)/L_I)/L_Icosh^2(ρ_I^0/L_I)]
+1/16π G_N∫_Σd^2 y√(-g^(2))[(ρ^0_II+ρ̃_II(y))R^(2)-sinh(2ρ^0_II+2ρ̃_II(y)/L_II)/L_IIcosh^2(ρ_II^0/L_II)] .
Next, we focus on the Gibbons-Hawking boundary terms and the tension term in <ref>. The extrinsic curvatures K_I,II may be computed using the outward normal vector pointing to I→II as follows
K_I,ab=1/L_Itanh[ρ^0_I-ρ̃_I(y)/L_I] h_ab , K_II,ab=-1/L_IItanh[ρ^0_II+ρ̃_II(y)/L_II] h_ab ,
where h_ab is the induced metric on the brane
h_ab=L_k^2 cosh^2 ( ρ_k/L_k)h̃_ab ,
and h̃_ab is as defined in <ref>. We keep the tension of the fluctuating brane constant as given in <ref>, perturbatively in ρ̃_k. This may be interpreted as the tension of the brane remaining intact under small transverse fluctuations. Hence, the Gibbons-Hawking boundary term together with the brane tension term leads to
1/8 π G_N[ ∫_Σd^2 y √(-h) ( K_I - K_II ) -2 T ∫_Σd^2 y √(-h)]
=1/8 π G_N ∫_Σd^2 y√(-g^(2))[sinh(2ρ^0_I-2ρ̃_I(y)/L_II)/L_Icosh^2(ρ_I^0/L_I)-tanh(ρ_I^0/L_I)cosh^2(ρ^0_I-ρ̃_I(y)/L_II)/L_Icosh^2(ρ_I^0/L_I)]
+1/8 π G_N ∫_Σd^2 y√(-g^(2))[sinh(2ρ^0_II+2ρ̃_II(y)/L_II)/L_IIcosh^2(ρ_II^0/L_II)-tanh(ρ_II^0/L_II)cosh^2(ρ^0_II+ρ̃_II(y)/L_II)/L_IIcosh^2(ρ_II^0/L_II)] .
Adding the contributions from <ref> and expanding perturbatively in small (ρ̃_k/ρ_k^0) the total bulk action for the lower dimensional effective gravitational theory on the brane Σ, upon partial dimensional reduction on the wedges W_I^(1)-W̃ and W_II^(1)+W̃, becomes
I_total = ρ^0_I + ρ^0_II/16 π G_N∫_Σd^2 y √(-g^(2)) R^(2) - 1/16 π G_N∫_Σd^2 y √(-g^(2)) ρ̃_I(y) [ R^(2) + 2/L_I^2 cosh^2( ρ_I^0 /L_I)]
+ 1/16 π G_N∫_Σd^2 y √(-g^(2)) ρ̃_II(y) [ R^(2) + 2/L_II^2 cosh^2( ρ_II^0 /L_II)] ,
where we have neglected terms of order (ρ̃_k/ρ_k^0)^2. Utilizing <ref>, the above action may be rewritten in the instructive form
I_total = 1/16 π G_N^(2)[∫_Σd^2 y √(-g^(2)) R^(2)+∫_Σd^2 y √(-g^(2)) Φ(y)(R^(2)+2/ℓ_eff^2)] ,
where we have defined the two dimensional Newton's constant G_N^(2) and the curvature scale ℓ_eff on the brane Σ as follows
1/G_N^(2)=ρ^0_I + ρ^0_II/G_N , ℓ_eff=L_Icosh( ρ_I^0/L_I) = L_IIcosh( ρ_II^0/L_II) .
Furthermore, in <ref>, we have identified the dilaton field Φ(y) on the brane with the fluctuations of the brane angles ρ̃_k(y) as follows
Φ(y)=ρ̃_II(y)-ρ̃_I(y)/ρ^0_I+ρ^0_II .
With these identifications, the 2d bulk action in <ref> precisely takes the form of the action for JT gravity modulo certain boundary terms, with the topological part of the dilaton field Φ_0 set equal to unity. Furthermore, variation of the action with respect to the dilaton field Φ(y) leads to the Ricci scalar as
R^(2)=-2/ℓ_eff^2=-2/L_k^2 cosh^2 ( ρ_k^0/L_k) , k=I,II
which correctly conforms to the fact that the brane is situated at a particular AdS_2 slice as seen from either of the bulk AdS_3 spacetimes.
At this point we recall that, in the limit of large ρ_k^0 the EOW brane Σ is pushed towards the asymptotic boundary of each AdS_3 spacetime[In this limit the tension of the brane is also large, T→ T_max, as described in <cit.>.]. As described in <cit.>, in this limit one obtains a non-local action <cit.> instead of the first term in <ref> as follows
I_non-local=∑_k=I , IIL_k/32π G_N ∫_Σd^2 y √(-h̃) [R^(2)-R^(2)log(-L^2_k/2R^(2))] .
By introducing two auxiliary scalar fields φ_k (k=I , II), the above mentioned non-local action may be rewritten in a local form in terms of the usual Polyakov action[Note that a similar Polyakov action was obtained via covariantization of the induced Liouville action for the gravity theory in the Island/BCFT correspondence described in <cit.>.] as discussed in <cit.>
I_Poly=∑_k=I , II L_k/32π G_N∫_Σd^2 y √(-h̃)[-1/2h̃^ab∇_aφ_k∇_bφ_k+φ_kR^(2)-2/L_k^2 e^-φ_k] .
We may interpret the above Polyakov action as two CFT_2s[As explained in <cit.>, the nature of the bulk quantum matter on the brane becomes conformal in the large ρ_k^0 limit.] with central charges c_I and c_II located on the AdS_2 brane <cit.>. The JT gravity on the brane is coupled to these CFT_2s, which are also identical to the two bath CFT_2s on the two half lines obtained via the standard AdS_3/CFT_2 dictionary on the bulk wedges W_k^(2). In other words, we have two CFT_2s defined on the whole real line. On one half of the line the CFT_2s live on a curved AdS_2 manifold, namely the brane, and are coupled to each other via the JT gravity on this curved manifold. On the other half, the CFT_2s live on two flat non-gravitating manifolds and hence are decoupled. The schematics of this 2d effective scenario is sketched in <ref>.
To illustrate the emergence of the two CFT_2s on the brane via the Polyakov action in <ref>, we note that the zero-dimensional analogue of the transverse area of a codimension two surface 𝒳 on the brane is given, for the action <ref>, by <cit.>
𝒜(𝒳)=Φ(𝒳)/4G_N^(2)+1/8G_N∑_k=I , IIL_k φ_k(𝒳) .
For the brane Σ situated at the AdS_2 slice described by <ref>, the auxiliary scalar fields φ_k may be obtained as <cit.>
φ_k=log[-2/L_k^2R^(2)]=2 log[cosh(ρ_k^0/L_k)] ,
and hence the area term in <ref> is given by
𝒜(𝒳)=Φ(𝒳)/4G_N^(2)+c_I/6log(1/cosψ_I)+c_II/6log(1/cosψ_II) ,
where we have utilized <ref>.
To conclude we have obtained an effective intermediate braneworld description involving the JT gravity on a dynamical manifold coupled with two bath CFT_2s through a dimensional reduction of a 3d bulk which could be understood as a doubly holographic description for the effective 2d theory. Recall that this 3d bulk has a holographic dual described by an interface CFT where the interface degrees of freedom may be interpreted as an SYK quantum dot.
§.§ Generalized entropy
Consider a QFT coupled to a gravitational theory on an hybrid manifold ℳ=Σ ∪ℳ^I∪ℳ^II, where Σ corresponds to the dynamical EOW brane in the doubly holographic 3d description which smoothly joins with the two non-gravitating flat baths[Note that ℳ^I,II forms part of the asymptotic boundary of the 3d bulk spacetime, ∂ℬ^I,II≡Σ∪ℳ^I,II.] ℳ^I,II. Transparent boundary conditions are imposed at the common boundary of Σ and ℳ^I,II such that the quantum matter fields freely propagate across this boundary. The generalized Rényi entropy for a subsystem A on this hybrid manifold could be obtained through a path integral on the replicated geometry ℳ_n=Σ_n∪ℳ_n^I∪ℳ_n^II with branch cuts at the endpoints of A as follows
(1-n)S^(n)_gen(A)=logTrρ^n_A=logℤ[ℳ_n]/(ℤ[ℳ_1])^n ,
where ρ_A is the reduced density matrix for A in the full quantum theory and ℤ [ℳ_n] corresponds to the partition function of the manifold ℳ_n. Under the semiclassical approximation, the gravitational path integral could be approximated near its saddle point to obtain the partition function on the replicated manifold ℳ_n as follows
𝐙[ℳ_n] ≈ e^-I_grav[Σ_n] 𝐙_mat[ℳ_n] ,
where 𝐙_mat[ℳ_n] is the matter partition function on the entire replicated hybrid manifold ℳ_n while I_grav[Σ_n] is the classical gravitational action on the dynamical manifold Σ_n.
If the replica symmetry for the bulk saddle point configuration in the semiclassical approximation remains intact, the orbifold ℳ̃_n≡ℳ_n/ℤ_n obtained by quotienting via the replica symmetry ℤ_n contains conical defects with deficit angle Δϕ_n = 2 π (1-1/n) along the replica fixed points in the bulk geometry. This is the so-called replica wormhole saddle discussed in the literature <cit.>. The region enclosed between these conical singularities in the bulk constitute the island Is (A) for the subsystem A.
In the semiclassical description, the (normalized) matter partition function 𝐙_mat computes the effective Rényi entropy of the quantum matter fields inside the entanglement wedge of A ∪Is (A) as follows
𝐙_mat[ℳ_n]/(𝐙_mat[ℳ_1])^n = e^(1-n) logTrρ_A ∪Is (A)^n ,
where ρ_A ∪Is (A) is the effective reduced density matrix in the semiclassical description.
Unlike the earlier works where JT gravity was coupled to a single radiation bath, in the current scenario, the presence of two baths modifies the structure of the dominant replica wormhole saddle to provide two independent mechanisms for the origin of the island region in the semiclassical description:
* For a subsystem A=A^I∪ A^II with A^I,II⊂ℳ^I,II in the radiation baths, both A^I and A^II are responsible for the conical singularities appearing in the gravitating manifold Σ. In this situation, the corresponding island region Is(A) manifested in Σ depends upon the degrees of freedom for both the CFT baths. In other words, if we denote the islands corresponding to the individual baths as Is^I,II(A), for the present configuration we have Is^I(A)=Is^II(A)≡Is(A) and the density matrix in the effective theory factorizes in the following way
ρ_A ∪Is (A)∼ρ_A^I∪Is(A)⊗ρ_A^II∪Is (A) .
This could also be understood through the doubly holographic formalism where we have gravitational regions on either side of the fluctuating EOW brane. Recall that in the doubly holographic description, the island region in this scenario is described by the region on the EOW brane between the two RT surfaces crossing from AdS^I to AdS^II. For the present configuration the bulk RT surface homologous to the subsystem A is composed of two geodesics connecting the endpoints of A^I and A^II, each of which crosses the EOW brane only once, as depicted in <ref>. This corresponds to the conventional origin of the island region as described in <cit.>.
* On the other hand, consider a subsystem A residing entirely in the bath ℳ^II. If the central charge of the CFT^II is larger than that of the CFT^I, depending upon the size of the subsystem A, conical singularities in the gravitating region Σ may appear solely due to the presence of A in the bath ℳ^II. Since the bulk region Σ is common between the bath CFTs, the CFT^I degrees of freedom present in Σ sense the same conical singularities and conceive an induced island which we denote by Is^(I\II)(A) to indicate that we obtain an island region in CFT^I given some subsystem A in CFT^II. In this case, the density matrix in the effective theory reduces to
ρ_A ∪Is (A)∼ρ_Is^(I\II)(A)⊗ρ_A^II∪Is (A) .
From the doubly holographic perspective, this corresponds to a double-crossing geodesic where the minimal curve penetrates into AdS^I and returns to AdS^II in order to satisfy the homology condition which is depicted in <ref>. Note that such an island region is a novelty of the present model where a gravitational theory is coupled to two flat baths.
Assuming that the backreactions from the conical defects are small, the replica wormhole saddles are still solutions to Einstein's equations. In such a case, the classical gravitational action I_grav [Σ_n] in <ref> for the replica wormhole saddle <cit.> may be expressed for n ∼ 1 as
I_grav[Σ_n] ≈ n I_grav[Σ̃_1] + n-1/4 G_N^(2)𝒜(∂Is(A)) ,
where Σ̃_n ≡Σ_n/ℤ_n is the orbifold for the replicated bulk geometry Σ_n and 𝒜(∂Is(A)) is the area of the boundary of the island region Is(A), namely the quantum extremal surface for the subsystem A. Utilizing <ref>, we may now obtain the generalized entropy corresponding to the reduced density matrix ρ_A for the subsystem A as follows
S_gen(ρ_A) =𝒜(∂Is(A))/4G_N^(2)+S_eff(ρ_A ∪Is (A))_ℳ^I∪ℳ^II∪Σ
=𝒜(∂Is(A))/4G_N^(2)+S_eff^I(ρ_A^I∪Is(A))_ℳ^I∪Σ+S_eff^II(ρ_A^II∪Is (A))_ℳ^II∪Σ ,
for the conventional island. Note that the subscripts ℳ^I,II∪Σ denote that the reduced density matrices ρ_A^I,II∪Is(A) in the effective theory have support on corresponding manifolds. On the other hand, for the configuration where we observe the induced island, the generalized entropy modifies to
S_gen(ρ_A)=𝒜(∂Is(A))/4G_N^(2)+S_eff^I(ρ_Is^(I\II)(A))_Σ+S_eff^II(ρ_A^II∪Is (A))_ℳ^II∪Σ .
Note that the area terms in <ref> for the generalized entropies are as given in <ref>.
§ ISLANDS IN EXTREMAL JT BLACK HOLES
In this section, we will compute the entanglement entropies of various subsystems at zero temperature in the CFT_2^I and CFT_2^II baths in the braneworld setup discussed above. In particular, we will compute the entanglement entropy for the corresponding subsystems in the intermediate picture using the island formula. Subsequently, we will substantiate these field theory results through the bulk computation of the RT surfaces corresponding to the subsystems using double holography in the large tension limit, in which gravity on the brane is weakly coupled.
As described in <cit.>, the dilaton profile may be obtained from the equation of motion which arises from the JT action in <ref> by varying it with respect to the metric, for the case of an extremal black hole, as follows
ds^2=4 dζ_k dζ_k/(ζ_k+ζ_k)^2 , Φ=Φ_0-2Φ_r/ζ_k+ζ_k ,
where ζ = x + i t_E are the planar coordinates and Φ_0 is the topological contribution to the dilaton given in <ref>.
§.§ Semi-infinite subsystem
We consider the case where a subsystem A is comprised of a semi-infinite interval in each bath CFT_2s as A ≡ [σ_1,∞]_I∪ [σ_2,∞]_II. We describe the computation of the entanglement entropy of the subsystem A using the island prescription in the effective 2d description discussed in <ref>. Later we utilize the Ryu-Takayanagi (RT) prescription <cit.> to compute the entanglement entropy of the corresponding interval in the doubly holographic framework.
§.§ Effective 2d description
For this configuration involving a semi-infinite subsystem, only the conventional island appears. Consider the QES to be located at -a on the EOW brane. Note that both the CFT_2^I and CFT_2^II are located on the JT brane, thus as discussed in <ref>, the conical singularity at the QES -a is present in both CFT_2^I and CFT_2^II. As can be inferred from <ref>, the UV cutoff on the JT brane has position dependence as ϵ(-a)=a. Hence, utilizing <ref>, the generalized entanglement entropy for subsystem A may be obtained as[In the following we will set 4 G_N^(2)=1 for brevity.]
S_gen=Φ_r/a+c_I/6log[1/cosψ_I]+c_II/6log[1/cosψ_II]+c_I/6log[(σ _1+a)^2/ϵ a]+c_II/6log[(σ _2+a)^2/ϵ a] ,
where we have used <ref> for the area of the quantum extremal surface located on the JT brane. The entanglement entropy may now be obtained through the extremization of the above generalized entropy over the position of the island surface. The extremization for arbitrary σ_1 and σ_2, however leads to complicated expressions. Thus for simplicity, we assume the symmetric case σ_1=σ_2=σ, for which the extremization equation is given by
∂_a S_gen=0 ⇒ a (c_I+c_II) (a-σ )-6 (a+σ ) Φ_r =0 .
Finally, the location of the island region a^* may be obtained from the above quadratic equation as follows
a^*=(c_I+c_II) σ +6 Φ_r+√(((c_I+c_II) σ +6 Φ_r)^2+24 (c_I+c_II) σΦ_r)/2 (c_I+c_II) ,
where we have disregarded the unphysical solution of the QES. The fine-grained entropy for the subsystem A may finally be obtained by substituting the above extremal value in <ref>. In order to compare this result with the doubly holographic computation in the following subsection, we need to consider the large tension limit of the JT brane for which the brane angles ψ_I,II may be expanded as <cit.>
ψ_I=π/2-L_I/L_I+L_IIδ , ψ_II=π/2-L_II/L_I+L_IIδ , with δ→ 0 ,
where the finite but small δ describes the deviation of the JT brane from the extended conformal boundary of the AdS_3^I,II.
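As a quick consistency check of the extremal solution <ref>, in the limit Φ_r→ 0 the extremization condition <ref> reduces to a (c_I+c_II)(a-σ )=0, so that the QES approaches the reflection of the subsystem endpoint, a^*→σ, while for Φ_r≫ (c_I+c_II)σ the QES is pushed deeper into the gravitating region, a^*≈ 6Φ_r/(c_I+c_II).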
§.§ Doubly holographic description
In this subsection, we substantiate the above island results in the effective field theory from a doubly holographic perspective. To this end, the metric described in <ref> may be mapped to the Poincaré AdS_3 geometry through the following coordinate transformations
z_k= y cosχ_k , x_k= y sinχ_k .
The radial direction in the xz-plane in the Poincaré coordinates is described by the coordinate y. At the asymptotic boundary described by χ_k=±π/2, y now serves as a boundary coordinate. Furthermore, the length of a geodesic between points (t,x,z) and (t',x',z') in the Poincaré coordinates, is obtained through
d=L cosh^-1[-(t-t')^2+(x-x')^2+z^2+z'^2/2 z z'] .
Note that for the present configuration, the RT surface homologous to subsystem A consists of two semi-circular geodesic segments in each of the AdS_3^I,II geometries which are smoothly joined at the EOW brane as depicted in <ref>. As a consequence of the Israel junction condition, we may choose the common point on the EOW brane to be parametrized by a single variable y. The total length of the RT surface may then be expressed as
d= L_Ilog[(σ _1+y sinψ _I)^2+( y cosψ_I)^2/ϵ y cosψ_ I] + L_IIlog[(σ _2+y sinψ_II)^2+(y cosψ _II)^2/ϵ y cosψ _II] ,
Subsequently, we perturb the EOW brane by introducing a small fluctuation in the brane angles ψ _I,II as follows
ψ _k (y)→sin^-1[tanh( ρ_k^0+(-1)^kρ̃_k(y)/L_k)] ,
where ρ̃_I,II/ρ_I,II^0≪ 1. Utilizing the above relation, <ref> reduces to
d =L_Ilog[ 2 σ _1 y tanh(ρ_I^0/L_I)+σ _1^2+y^2/ϵ y (ρ_I^0/L_I)]+L_IIlog[ 2 σ _2 y tanh(ρ_II^0/L_II)+σ _2^2+y^2/ϵ y (ρ_II^0/L_II)]
-ρ̃_ I(σ _1^2 tanh(ρ_I^0/L_I)+y^2 tanh(ρ_I^0/L_I)+2 σ _1 y)/2 σ _1 y tanh(ρ_I^0/L_I)+σ _1^2+y^2 +ρ̃_ II(σ _2^2 tanh(ρ_II^0/L_II)+y^2 tanh(ρ_II^0/L_II)+2 σ _2 y)/2 σ _2 y tanh(ρ_II^0/L_II)+σ _2^2+y^2 .
where we have considered terms up to the first order in ρ̃_ I,II. Note that the perturbative parameters ρ̃_ I,II are functions of the island location y. Therefore, in the perturbative terms of the above expression, we replace y with its zeroth order solution in ρ̃_I,II such that the geodesic length in <ref> contains terms strictly up to the first order in ρ̃_I,II. Subsequently, on identifying the dilaton as given in <ref>, the candidate entanglement entropy may be obtained as follows
S_ single(σ,y) = Φ_r/y + L_I/4 G_Nlog[σ^2+y^2+2 σ y sinψ_ I/ϵ y cosψ_ I]+ L_II/4 G_Nlog[σ^2+y^2+2 σ y sinψ_ II/ϵ y cosψ_ II],
where we have considered σ_1=σ_2=σ for simplicity. Now, to obtain the holographic entanglement entropy, we extremize the above with respect to y to obtain
L_II y (y-σ ) (y+σ ) [(y^2+σ ^2) (cosψ _I secψ _II+1)+2 σ y sin (ψ _I+ψ _II) secψ _II]
-4 G_N Φ _r (y^2+σ ^2+2 σ y sinψ _I) (y^2+σ ^2+2 σ y sinψ _II)=0 .
The entanglement entropy for the semi-infinite subsystem in question may be obtained by substituting the physical solution for y in <ref>. Finally, in the large tension limit described in <ref>, this matches exactly with the corresponding entanglement entropy computed in the effective 2d theory on utilization of the Brown-Henneaux formula c_I,II=3 L_I,II/2 G_N <cit.>.
§.§ Finite subsystem
In this subsection, we obtain the entanglement entropy for a finite sized subsystem A ≡ [σ_1,σ_2]_I∪ [σ_1,σ_2]_II located in the baths CFT_2 ^I and CFT_2 ^II. Here, we observe three non-trivial phases for the generalized entanglement entropy depending upon the sizes of the subsystem A as depicted in <ref>. In this context, we first utilize the effective 2d prescription to compute the generalized entanglement entropy for the corresponding subsystem in these scenarios. Subsequently, we provide a doubly holographic characterization of the entanglement entropy for the three cases using the RT prescription which substantiates the corresponding field theory results.
§.§.§ Phase - I
§.§ Effective 2d description
We begin with the computation of the generalized entanglement entropy for the phase where the intervals [σ_1,σ_2]_I and [σ_1,σ_2]_II are small such that no island region is observed as depicted in <ref>. For this configuration the area term in generalized entropy vanishes and the expression for the entanglement entropy may trivially be obtained to be
S_A = (c_I+c_II)/3log[σ_2-σ_1/ϵ] .
§.§ Doubly holographic description
From the doubly holographic perspective, it may be observed that the RT surfaces for the intervals [σ_1,σ_2]_I,II in CFT_2^I,II are described by the usual dome-shaped geodesics each in the dual bulk AdS_3^I,II geometries as depicted in <ref>. The entanglement entropy for this configuration may then be obtained to be
S_ dome = (L_I+L_II)/4G_Nlog[σ_2-σ_1/ϵ] ,
which matches identically with the corresponding expression obtained in the effective 2d description in <ref> through the utilization of the Brown-Henneaux formula.
§.§.§ Phase - II
§.§ Effective 2d description
We now discuss the next phase where the sizes of the intervals in the CFT_I,II are increased such that we now observe an island region on the JT brane described by [-a_2,-a_1]_I,II. Note that this configuration corresponds to the conventional origin of the island as discussed in <ref>. The effective terms of the generalized entanglement entropy given in <ref> for this case may be obtained through the four-point twist correlators in CFT_2^I,II which factorize in the large-c limit in the following way
<𝒯_n(σ_1) 𝒯̅_n(σ_2)𝒯_n(-a_2) 𝒯̅_n(-a_1)>_CFT^k_2 = <𝒯_n(σ_1) 𝒯̅_n(-a_1)>_CFT^k_2<𝒯̅_n(σ_2) 𝒯_n(-a_2)>_CFT^k_2 .
Subsequently, we may express the generalized entropy in the large-tension limit as follows,
S_gen= Φ_r/a_1+c_I/6log[(L_I+L_II/L_I δ)(σ _1+a_1)^2/a_1]+c_II/6log[(L_I+L_II/L_II δ)(σ_1+a_1)^2/a_1]
+Φ_r/a_2+c_I/6log[(L_I+L_II/L_I δ)(σ _2+a_2)^2/a_2]+c_II/6log[(L_I+L_II/L_II δ)(σ_2+a_2)^2/a_2] .
Similar to <ref>, the above may be extremized over the positions of the QES at a_1 and a_2 to obtain expressions similar to <ref> with σ replaced by σ_1 and σ_2 respectively. Finally, the entanglement entropy for this configuration may be obtained by substituting these extremal values of the island locations in the above generalized entropy.
§.§ Doubly holographic description
For this phase, owing to the conventional island, the RT surface homologous to subsystem A is composed of two single-crossing geodesics of the type discussed in <ref> and as depicted in <ref>. Consequently, the candidate entanglement entropy for the present configuration may be expressed as
S_bulk=S_single (σ_1,y_1) + S_single (σ_2,y_2) .
As earlier, to obtain the locations of the common points y_is on the EOW brane, we extremize the above with respect to y_1 and y_2 to obtain equations analogous to <ref> whose physical solutions will lead to the holographic entanglement entropy. Subsequently in the large tension limit described by <ref>, this agrees with the corresponding result in the effective 2d description.
§.§.§ Phase - III
§.§ Effective 2d description
We now proceed to the final phase depicted in <ref> which involves the novel induced islands discussed in <ref>. The location of this island region is described by [a_1,a_2]_I,II on the JT brane for the given subsystem. For this phase, we utilize the generalized entanglement entropy formula in <ref> for the corresponding subsystem A. The presence of induced island results in the factorization of the four-point twist correlators in the effective terms of the generalized entanglement entropy in the following way
<𝒯_n(σ_1) 𝒯̅_n(σ_2)𝒯_n(-a_2) 𝒯̅_n(-a_1)>_CFT^I_2 = <𝒯_n(σ_1) 𝒯̅_n(σ_2)>_CFT^I_2<𝒯_n(-a_1) 𝒯̅_n(-a_2)>_CFT^I_2
<𝒯_n(σ_1) 𝒯̅_n(σ_2)𝒯_n(-a_2) 𝒯̅_n(-a_1)>_CFT^II_2 = <𝒯_n(σ_1) 𝒯̅_n(-a_1)>_CFT^II_2<𝒯_n(σ_2) 𝒯̅_n(-a_2)>_CFT^II_2 .
On utilization of the area term given in <ref> and the position dependent cut-off ϵ(y) on the JT brane, the generalized entanglement entropy in this case may then be expressed as
S_gen=Φ_r/a_1+Φ_r/a_2+c_I/3log[1/cosψ_I]+c_II/3log[1/cosψ_II]+c_I/6log[(σ_2-σ_1)/ϵ]
+c_I/6log[(a_1-a_2)^2/a_1 a_2] +c_II/6log[(σ_1+a_1)^2/ϵ a_1]+c_II/6log[(σ_2+a_2)^2/ϵ a_2]
.
We now introduce a parameter Θ=σ_2/σ_1 which is motivated from the analysis[Note that the authors <cit.> only described the bulk computation of the entanglement entropy for finite interval located in the CFT_2 ^II.] described in <cit.>. Moreover, one can also establish a similar relation between the QES a_1 and a_2 located on the JT brane as a_2 = κ Θ a_1 where κ is now one of the parameters whose extremal value will minimize the entanglement entropy. In the case of non-perturbed EOW brane where we obtain the usual ICFT_2 setup, it was shown in <cit.> that the current phase is only possible above a certain value of the parameter Θ depending upon the configuration of the EOW brane. Thus in our computations we assume Θ to be large. Finally, we introduce Θ and κ in <ref>, and extremize over the parameters a_1 and κ in the large tension limit to obtain the following relations,
∂_κ S_gen=0 ⇒ a_1 (c_I+c_II) κ+(c_I-c_II) σ _1=0
∂_a_1 S_gen=0 ⇒ a_1 σ _1 (c_IIσ _1+3 (κ+1) Φ_r) - a_1^3 κ c_II +3 a_1^2 κ Φ_r+3 σ _1^2 Φ_r=0 .
Solving the above, the extremal values of the QES are obtained to be
κ ^*= (c_II-c_I) σ_1/(c_I+c_II) a_1^*
a_1^*= (c_I+c_II) σ _1+6 Φ_r+√(((c_I+c_II) σ _1+6 Φ_r)^2+24 (c_II-c_I) σ _1 Φ_r)/2 (c_II-c_I)
where we have only considered the physical solutions of the island surfaces. The fine grained entropy for the subsystem A may now be obtained by substituting the above extremal value in the generalized entropy in <ref>.
§.§ Doubly holographic description
In this subsection we obtain the length of the RT surfaces supported by the finite-sized subsystem [σ_1, σ_2]_ I,II located in the dual CFT_2^ I,II as depicted in <ref>. The interval in CFT_2^I supports the usual boundary anchored dome-shaped geodesic. However, for the interval in CFT_2^II, the extremal curve is composed of three circular segments forming a double-crossing RT saddle as discussed in <ref>. This double-crossing geodesic intersects the EOW brane at y_1 and y_2 which form the boundary of the island region in the effective 2d description.
Utilizing <ref>, the length of the double-crossing geodesic may be obtained as[Note that the transverse fluctuation of the EOW brane requires the brane angles ψ_ I,II (y_i) to be position dependent. ]
d= L_Icosh ^-1[(y_2-y_1)^2 sinψ_I(y_1)sinψ_I(y_2)+(y_1^2+y_2^2) cosψ_I(y_1)cosψ_I(y_2)/2 y_1 y_2 cosψ_I(y_1)cosψ_I(y_2)]
+L_IIlog[(σ _1+y_1 sinψ_II(y_1))^2+(y_1 cosψ_II(y_1))^2/y_2 cosψ_II(y_1)]
+L_IIlog[(σ _2+y_2 sinψ_II(y_2))^2+(y_2 cosψ_II(y_2))^2/y_2 cosψ_II(y_2)] .
We may now introduce the variables Θ=σ_2/σ_1 and y_2 = κ̂ Θ y_1, similar to the effective 2d perspective considered earlier. As advocated in <cit.> in the context of AdS_3/ICFT_2, such double-crossing geodesics are only permissible for large Θs. Consequently, in the large Θ limit of the above length, we obtain
d ≈ L_Ilog[Θ κ̂/cosψ _ I(y_1) cosψ _ I(y_2)] +L_IIlog[σ_1^2+2 y_1 σ_1 sinψ_ II(y_1)+y_1^2/y_1 cosψ _ II(y_1)]
+L_IIlog[Θ (σ_1^2+2 κ̂y_1 σ_1 sinψ_ II(y_2)+κ̂^2 y_1^2)/κ̂ y_1 cosψ _ II(y_2)] .
Next we implement the position dependence of the brane angles ψ_ I,II (y_i) explicitly in the following way
ψ_ I(y_i) →sin ^-1[tanh(ρ^0_ I-ρ̃_ I(y_i)/L_ I)] ,
ψ_ II(y_i) →sin ^-1[tanh(ρ^0_ II+ρ̃_ II(y_i)/L_ II)] .
Expanding <ref> upto the leading order in ρ̃_I,II and identifying the dilaton as in <ref>, we may obtain the corresponding contribution from the double-crossing geodesic as follows
S_double (y_1,y_2)= Φ_r/y_2+Φ_r/y_1+L_I/4 G_Nlog[(y_2/y_1) sec ^2(ψ _ I)]
+L_II/4 G_Nlog[(σ _2^2+2 y_2 σ _2 sinψ _ II+y_2^2) (σ _1^2+2 y_1 σ _1 sinψ _ II+y_1^2)/y_1 y_2 cos ^2ψ _ II] ,
where we have restored the original variables σ_2 and y_2.
Finally the candidate entanglement entropy for finite sized subsystem under consideration may be obtained by including the contribution from the dome-shaped geodesic as follows
S_bulk = S_double (y_1,y_2)+L_I/4 G_Nlog[(σ_2-σ_1)/ϵ ] .
The above may be extremized over the undetermined parameters y_1 and y_2 to obtain
2 y_1 σ _1 sinψ _ II(4 G_N Φ_r sinψ _ I+L_ I y_1) +σ _1^2 (4 G_N Φ_r sin(ψ _ I)+(L_ I+L_ II) y_1)
-y_1^2 ((L_ II-L_ I) y_1-4 G_N Φ_r sinψ _ I)=0 ,
2 y_2 σ _2 sinψ _ II(4 G_N Φ_r sinψ _ I-L_ I y_2) +σ _2^2 (4 G_N Φ_r sinψ _ I-(L_ I-L_ II) y_2)
-y_2^2 ((L_ I+L_ II) y_2-4 G_N Φ_r sinψ _ I)=0 .
Solving the above equations for y_1 and y_2 and substituting the extremal values in <ref> will finally result in the holographic entanglement entropy for the given finite subsystem. Once again, in the large tension limit described in <ref>, we observe that the corresponding entanglement entropy obtained through the effective 2d description is reproduced.
§.§.§ Page curve
We now plot the entanglement entropy for the finite subsystem A under consideration in the dual CFT_2 ^I,IIs with respect to the subsystem size in <ref>. For the given values of the parameters, we observe transitions between the three phases discussed above as the subsystem size is increased. Initially, when the subsystem is small, [sec:finite-size-I]phase-I has the minimum entanglement entropy and is dominant. As the subsystem size increases, [sec:finite-size-III]phase-III starts dominating, since crossing over to the region with the smaller AdS radius, AdS_3^I, is more economical for the geodesic. If the subsystem size is increased further, this advantage of the double-crossing vanishes, as the length of the dome-shaped RT surface supported by the interval in CFT_2^I keeps increasing, and ultimately [sec:finite-size-II]phase-II becomes dominant.
§ ISLANDS OUTSIDE ETERNAL JT BLACK HOLES
In this section, we consider semi-infinite subsystems in thermal CFT_2 baths coupled to an eternal JT black hole. The thermofield double (TFD) state in this case may be constructed through the Euclidean path integral on half of an infinite cylinder <cit.>. The corresponding cylinder geometry may be obtained by applying a series of transformations on the planar ICFT_2 setup described by ζ = x + i t_E. We begin by mapping the flat interface to a circle of length ℓ through the following SL(2,ℝ) transformation
p = 4 ℓ^2 /(2 ℓ - ζ) - ℓ ,
where p=x̃+it̃_E. The corresponding bulk transformations may be obtained through the Bañados formalism <cit.> as follows
x̃=(x -(x^2+z^2 -t^2) /2ℓ)/(1-x/ℓ+(x^2 +z^2 -t^2) /4ℓ^2)+ℓ , z̃= z/(1-x/ℓ +(x^2 +z^2 -t^2) /4ℓ^2) , t̃= t/(1-x/ℓ +(x^2 +z^2 -t^2) /4ℓ^2) .
We further obtain the cylinder geometry via the usual exponential map given by
p=ℓ e^2 π/β q ,
where the coordinate q=u+iv_E describes the cylinder with circumference β. The interface is now mapped to a circle ℜ𝔢 (q) = 0 with the two CFT_2s mapped on either side. The dual bulk theory for the TFD state on this cylinder is then described by an eternal black string spanning two AdS_3 geometries separated by a thin AdS_2 brane. The horizon of the black string crosses the brane and induces a horizon on it. A similar partial dimensional reduction as described in <ref> may now be performed for this 3d bulk to obtain an effective 2d description comprising of two thermal CFT_2 baths coupled to an eternal JT black hole.
In the cylinder coordinates, the metric and the dilaton profile for the eternal JT black hole in AdS_2 are given as follows <cit.>
ds_grav^2=4π^2/β^2dq dq̅/sinh^2(π(q+q̅)/β) ,
Φ=Φ_0-2 πΦ_r /βcoth(π(q+q̅)/β) ,
where Φ_0 is the topological contribution to the dilaton given in <ref>. On the other hand, the metric for the CFT_2 baths may be expressed as
ds_bath^2=1/ϵ^2dq dq̅ .
However, in the following we will employ the planar coordinates p in which the field theory remains in the ground state and the corresponding stress tensor vanishes. The corresponding metrics and the dilaton are given as follows
ds_grav^2=4/(1-|p|^2)^2dp dp̅ , ds_bath^2=β^2/4π^2ϵ^2dp dp̅/|p|^2 ,
Φ=Φ_0+2 πΦ_r /β1+|p|^2/1-|p|^2 .
We will now obtain the fine-grained entanglement entropy for a semi-infinite subsystem in bath CFT_2^I,IIs coupled to an eternal JT black hole. For this case, we observe two phases for the entanglement entropy as depicted in <ref>. Specifically, for the first phase, we do not observe islands and obtain a steadily rising entanglement entropy as the black hole evolves. For the second phase, QES are observed outside the eternal JT black holes, indicating the presence of islands which saturates the entanglement entropy.
§.§ Phase - I
§.§ Effective 2d description
Now we describe the computation of the generalized entanglement entropy in the first phase for the subsystem composed of semi-infinite intervals [P,∞]_I∪[R,∞]_I considered in CFT_2^I bath and [Q,∞]_II∪[S,∞]_II in CFT_2^II bath as depicted in the <ref>. Here the points P, Q have coordinates as (u_0,v)_I,II in the cylinder coordinates and the points R, S are their corresponding TFD copies with coordinates (u_0,-v+iβ/2)_I,II. Note that in this case, the area term in the generalized entanglement entropy formula is vanishing as no island region is observed for this phase. The generalized entropy then involves only the effective term described by two two-point twist correlators and may be obtained to be
S_A= (c_I+c_II)/3log[βcosh(2 π v/β)/π ϵ] .
§.§ Doubly holographic description
The double holographic description for this phase corresponds to the RT surfaces being composed of two Hartman-Maldacena (HM) surfaces stretched between the endpoints of the semi-infinite intervals on the asymptotic boundaries as depicted in <ref>. The endpoints of the intervals are specified in the planer coordinates as x̃_1(q_1^ I) and x̃_1(q_1^ II) which may be obtained via the conformal maps <ref>. Consequently, in this phase the entanglement entropy corresponding to the HM surfaces may be obtained as
S_ bulk = L_I/2G_Nlog[ 2 x̃_1(q_1^ I)/ϵ̃] + L_II/2G_Nlog[ 2 x̃_1(q_1^ II)/ϵ̃]
= (L_I+L_II)/2 G_Nlog[βcosh(2 π v/β)/πϵ] .
Note that the UV cut-offs between the two coordinates are related by ϵ̃ (u,v) = ϵ 2 πℓ/β e^2 π u/β. The above expression matches identically with the result obtained in the effective 2d description.
§.§ Phase - II
§.§ Effective 2d description
Now we describe the second phase for the generalized entropy of semi-infinite subsystems [P,∞]_I∪[R,∞]_I in CFT_2 ^I bath and [Q,∞]_II∪[S,∞]_II in CFT_2 ^II bath. Note that, as earlier, the points P, Q are located at (u_0,v)_I,II in the cylinder coordinates and the points R, S are their corresponding TFD copies. This phase involves a conventional island region bounded by the QES M ≡ (-a,v^_a)_I,II and N ≡ (-a,-v^_a+iβ/2)_I,II on the JT brane leading to area terms in the generalized entanglement entropy. The effective terms in <ref> now involve four two-point twist correlators. The generalized entanglement entropy may then be obtained as
S_gen= 4 πΦ_r /βcoth(2 π a/β)+c_I/3log[1/cosψ_I]+c_II/3log[1/cosψ_II]
+(c_I+c_II)/3log[( e^2 π (-a-v^_a)/β - e^2 π (u_0-v)/β) ( e^2 π (-a+v^_a)/β - e^2 π (u_0+v)/β)/π ϵ/β e^2 π u/β( 1 - e^- 4 π a/β)] .
We first extremize the above over the time v^_a of the QES to obtain the extremal value as
∂_v^_a S_gen=0 ⇒ v^*_a = v .
Subsequently, the extremization of the generalized entropy is performed over the location a of the QES to obtain the following equation
∂_a S_gen=0 ⇒ sinh(π (a-u)/β)/sinh(π (a+u)/β)=12 πΦ_r /β(c_I+c_II)csch(2 π a/β) .
where we have implemented v^*_a = v. The fine grained entanglement entropy for this configuration may be obtained by solving the above for the extremal value a^* and substituting it in <ref>.
§.§ Doubly holographic description
This subsection describes the doubly holographic computation of the entanglement entropy for the semi-infinite intervals in the dual CFT_2 ^I,IIs at a finite temperature as depicted in <ref>. In particular, we compute the lengths of the RT surfaces homologous to the semi-infinite intervals. Similar to the previous case, we perform the computation in the planar coordinates[Note that it is convenient to work with the (x,t) coordinates in this case where we have a planar brane profile.] where the endpoints of the intervals are described by (x_0, t_0)_I,II (and similarly for the TFD copies) in the dual CFT_2 ^I,IIs, whereas the island point on the EOW brane is located at (y, t_y). The length of the RT surface may now be obtained to be
d = 2L_Ilog[(x_0+y sinψ _I)^2+(t_0-t_y)^2+(y cosψ _I)^2/ϵ y cosψ _I]
+2L_IIlog[(x_0+y sinψ _II)^2+(t_0-t_y )^2+(y cosψ _II)^2/ϵ y cosψ _II],
where the factor 2 arises from the symmetry of the TFD state. After introducing transverse fluctuations on the EOW brane and identifying the dilaton, the entanglement entropy may be expressed as
S_ bulk = 2Φ_r/y + L_I/2G_Nlog( x_0^2+2 x_0 y sinψ_I+y^2/ϵ y cosψ_I)
+ L_II/2G_Nlog( x_0^2+2 x_0 y sinψ_II+y^2/ϵ y cosψ_II) ,
where extremization over t_y has been performed to set t_y=t_0. The location of the island y may now be obtained by extremizing the above to obtain <ref> which may be transformed using the maps in <ref> to obtain the corresponding extremization condition for the present scenario. Finally it may be observed that in the large tension limit the corresponding results match in the effective 2d theory.
§.§ Page curve
We now plot the Page curve for the entanglement entropy of the semi-infinite subsystem under consideration in the dual CFT_2 ^I,IIs at a finite temperature in <ref>. We observe that, similar to the conventional scenarios with a single CFT_2 bath, initially [sec:FT-I]phase-I is dominant with a monotonically increasing entanglement entropy, and finally the island saddle of [sec:FT-II]phase-II takes over when the entropy saturates to a constant value. This is expected, as the presence of the additional bath does not affect the radiation process of the JT black hole; it merely provides an additional reservoir in which the Hawking radiation is collected.
§ ISLANDS AND REPLICA WORMHOLES : GRAVITY COUPLED WITH TWO BATHS
In this section, we investigate the replica wormhole saddle for the gravitational path integral and reproduce the location of the conical singularity and the entanglement entropy. We first perform the analysis for the effective lower dimensional model obtained from the AdS/ICFT setup by integrating out the bulk degrees of freedom, namely the “brane+bath” picture with topological gravity on the AdS_2 brane. Later on, we will include JT gravity on the brane and obtain the location of the island and the corresponding fine-grained entropy.
The procedure for obtaining the replica wormhole solutions from the boundary curve in two-dimensional gravity coupled to flat bath requires solving the so called conformal welding problem <cit.>. The schematics of the welding problem is sketched in <ref>. Essentially, the problem consists in finding a new Riemann surface out of two regions inside and outside of a disk which are described by different coordinate patches. Consider the regions parametrized by |w|<1 and |v|>1 which are glued together along their boundaries at |v|=|w|=1, where the complex coordinates are described by
v=e^y=e^σ+iτ , w=e^γ+iθ .
It is, in general, impossible to extend the coordinates w or v holomorphically beyond the respective boundary circles. However, by virtue of the Riemann mapping theorem, one can find two holomorphic functions F and G to establish another coordinate system z on a new Riemann surface such that the regions |w|<1 and |v|>1 are holomorphically mapped to the coordinate z. In other words, one requires
z=G(w) , for |w|<1
z=F(v) , for |v|>1
G(e^iθ(τ))=F(e^iτ) , for |v|=|w|=1 .
The problem of finding holomorphic F(v) and G(w) given the boundary mode θ(τ) is termed the conformal welding problem. In the case of two dimensional gravity on a AdS_2 manifold coupled to a flat CFT_2 bath such a welding issue arises naturally <cit.>. In the presence of dynamical gravity, the entanglement entropy for a subsystem is computed through the Lewkowycz-Maldacena procedure by considering an n-fold cover of the original manifold ℳ <cit.>. For a replica symmetric saddle ℳ_n to the gravitational path integral, it is convenient to quotient by the ℤ_n replica symmetry and consider a single manifold ℳ̃_n=ℳ_n/ℤ_n. The orbifold ℳ̃_n essentially describes a disk with conical singularities at which twist operators for the conformal matter theory are inserted. The metric on the interior manifold ℳ̃_n may be described by a complex coordinate w as follows:
ds^2=e^2ρ(w,w̅)dwdw̅ , for |w|<1 .
In a finite temperature configuration with τ∼τ+2π,
in order to join the metric of the quotient manifold of the gravitating region to the flat space outside described by the exterior coordinates v=e^y, it is required to solve the conformal welding problem discussed above. In this case, the boundary mode θ(τ) plays the role of the reparametrization mode in two-dimensional gravity <cit.>.
§.§ Replica wormholes from AdS/ICFT
In this subsection, we focus on the replica wormhole solutions in the framework of AdS/ICFT discussed in <cit.> and briefly reviewed in <ref>. In the effective lower dimensional scenario obtained from integrating out the bulk spacetimes on either side of the brane σ, we have two flat baths attached to the gravitational region on the EOW brane Σ, which has a weakly gravitating metric in the large tension limit. There are two CFTs along the flat half lines which extend to the gravitating region, where they interact via the weakly fluctuating metric. The schematics of the setup is sketched in <ref>.
As discussed earlier, for a quantum field theory coupled to dynamical gravity on a hybrid manifold, the replica trick to compute the entanglement entropy for a subsystem involves a replication of the original manifold in the replica index n. The normalized partition function 𝐙_n on this replica manifold then computes the entanglement entropy as follows <cit.>
S=-∂_n(log𝐙_n/n)|_n=1 .
The partition function for the gravity region concerns a gravitational path integral which may be solved in the saddle-point approximation in the semi-classical regime by specifying appropriate boundary conditions. These saddles may be characterized by the nature of gluing of the individual replica copies. In particular, two specific choices will be of importance for our purposes, namely the Hawking saddle where the n-copies of the bath(s) are glued cyclically while gravity is filled in each copy individually, and the replica wormhole saddle in which along with the copies of the bath, gravitational regions are dynamically glued together. In these replica wormhole saddles, upon quotienting via the replica symmetry ℤ_n, additional conical singularities dynamically appear at the fixed points of the replica symmetry in the orbifold theory.
The gravitational action on the orbifold Σ̃_n obtained by quotienting the replicated EOW brane Σ_n is given by
-1/nI_grav[Σ̃_n]=∑_k=I , IIL_k/32π G_N∫_Σd^2 y √(-h̃) [R^(2)-R^(2) log(-L^2_k/2R^(2))]
- (1-1/n)∑_i S(w_i) ,
where S(w_i) denotes the contributions from the dynamical conical singularities. In our case, this is just a constant given in <ref> with a vanishing dilaton term.
We choose the complex coordinate w to describe the gravity region inside the disk |w|=1. Furthermore, the baths outside the disk are described by the complex coordinates v_k ,(k=I,II), in the spirit of <ref>. Then the conformal welding problem sketched in <ref> is reduced to the determination of the appropriate boundary mode θ(τ). We consider two semi-infinite intervals in the bath CFT_2^I,IIs as [σ_1,∞]_I and [σ_2,∞]_II and in the replica manifold twist operators are placed at the locations v_I=e^σ_1 and v_II=e^σ_2. Note that for the replica wormhole saddle, a dynamical conical singularity also appears at w=e^-a.
To proceed, we now require the energy flux equation at the interface of the gravitational region and the bath CFTs. The variation of the gravitational action with respect to the boundary mode is vanishing
-1/n δ I_grav=0 .
On the other hand, the variation of the matter partition function 𝐙_mat with respect to the boundary mode leads to the following expression <cit.>
δlog𝐙_mat=i∫dτ∑_k=I , II(T^(k)_yy-T^(k)_y̅y̅)δθ(τ)/θ'(τ)
Utilizing the above equations, the energy flux condition at the boundary may be expressed as follows
i[T^(I)_yy(iτ)-T^(I)_y̅y̅(-iτ)]+i[T^(II)_yy(iτ)-T^(II)_y̅y̅(-iτ)]=0 .
Under the conformal map y→ z=F_k(v_k), the energy momentum tensor transforms as
T^(k)_yy(iτ)→ e^2y[(dF_k(e^y)/dv_k)^2 T^(k)_zz-c_k/24π{F_k(e^y),v_k}] , k=I , II
In the replicated geometry, the uniformization map for the conical singularities is given by z→z̃=z^1/n such that T^(k)_z̃z̃=0 and the energy-momentum tensors for the two CFTs in the z-plane is given by
T^(k)_zz=-c_k/24π(1-1/n^2)1/z^2 .
Therefore, the energy-flux condition in <ref> reduces to
0=∑_k=I , IIi e^2iτc_k[1/2(1-1/n^2)(F_k'(e^iτ)/F_k(e^iτ))^2+{F_k(e^iτ),e^iτ}]+c.c. .
Since the maps F_k depend on the gluing function θ(τ), the above equation is in general hard to solve. However, one may solve it near n=1 as described below.
For n=1, the first term in the parenthesis of <ref> vanishes and the welding is trivial. Therefore, we may conclude that the maps F_k are well approximated near n=1 by Möbius transformations of the form
z=F_k(v_k)=(v_k-𝒜)/(ℬ_k-v_k) , 𝒜 = e^-a , ℬ_I = e^σ_1 , ℬ_II = e^σ_2
It is straightforward to verify that these functions indeed map the branch points at -a and σ_1,2 to z=0 and z=∞ respectively. Therefore, the energy flux condition becomes
c_Iℱ_I+c_IIℱ_II=0 ,
where
ℱ_k=i e^2iτ(F_k'(e^iτ)/F_k(e^iτ))^2+c.c.=i e^2iτ(𝒜-ℬ_k)^2/(e^-iτ-𝒜)^2(e^iτ-ℬ_k)^2+c.c.
Now, performing a Fourier transformation in the above equation (restoring the temperature β), the expression for the k=1 mode reads
0 =∫_0^β dτ e^-2π iτ/β(c_Iℱ_I+c_IIℱ_II)
=c_I(sinh[π(a-σ_1)/β]/sinh[π(a+σ_1)/β])+c_II(sinh[π(a-σ_2)/β]/sinh[π(a+σ_2)/β])
In order to compare with the quantum extremal surface condition at zero temperature, we now take the β→∞ limit to obtain
0=c_I(a-σ_1/a+σ_1)+c_II(a-σ_2/a+σ_2) ,
which on solving for a gives
a^*=(c_I-c_II)(σ_1-σ _2)+√(4 (c_I+c_II)^2σ_1 σ_2+(c_I-c_II)^2 (σ_1-σ_2)^2)/2 (c_I+c_II) .
The above expression is identical to the position of the quantum extremal surface obtained through extremizing the generalized entropy in <cit.>.
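As a simple check, for equal central charges c_I=c_II the condition <ref> reduces to (a-σ_1)(a+σ_2)+(a-σ_2)(a+σ_1)=0, i.e. a^*=√(σ_1 σ_2), the geometric mean of the two endpoints, which is consistent with <ref>.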
§.§ Replica wormholes with JT gravity coupled to two baths
With JT gravity on the EOW brane Σ, the energy flux condition at the boundary of the replicated geometry is modified and the conformal welding problem is a bit more involved. The variation of the gravitational action with respect to the boundary mode θ(τ) no longer vanishes since in the case of JT gravity θ(τ) serves as the “boundary graviton” <cit.>. The energy flux condition in the presence of JT gravity on the brane is then modified to <cit.>
∂_τM= i[T^(I)_yy(iτ)-T^(I)_y̅y̅(-iτ)]+i[T^(II)_yy(iτ)-T^(II)_y̅y̅(-iτ)]
where M corresponds to the ADM mass of the gravitational theory which is related to the Schwarzian boundary action.
For the two single intervals [σ_1,∞]_I and [σ_2,∞]_II on CFT_2^I,II baths, a conical singularity appears inside the gravity region on the orbifold theory Σ_n/ℤ_n at a point -a and we need to consider the subsystems [-∞,-a]_I∪ [σ_1,∞]_I and [-∞,-a]_II∪ [σ_2,∞]_II[Note that the local geometry at the point -a on the replica manifold Σ_n is completely smooth.]. Once again, we will work with a finite temperature configuration with β=2π. We may now uniformize the interior conical singularity at w=𝒜=e^-a by utilizing the map
w̃=(w-𝒜/1-𝒜 w)^1/n .
In the w̃ coordinates, the gravity region has the usual hyperbolic disk metric <cit.>
ds^2_in=4dw̃dw̅̃̅/(1-|w̃|^2)^2 .
In these coordinates we may set w̃=e^iθ̃ at the boundary. Now using the Schwarzian composition rules, we may obtain the ADM mass of the spacetime to be <cit.>
Φ_r/4π{e^iθ̃,τ}=Φ_r/4π[{e^iθ,τ}+1/2(1-1/n^2)R(θ)] ,
where the function R(θ) contains the information about the branch point -a as follows
R(θ)=-(1-𝒜^2)^2(∂_τθ)^2/|1-𝒜 e^iθ|^4 .
Now, utilizing <ref>, the energy flux condition at the boundary becomes
-12Φ_r/c ∂_τ[{e^iθ,τ}+1/2(1-1/n^2)R(θ)]
=∑_k=I , IIi e^2iτc_k[1/2(1-1/n^2)(F_k'(e^iτ)/F_k(e^iτ))^2+{F_k(e^iτ),e^iτ}]+c.c.
The above relation is quite complicated as the map F depends implicitly on the gluing function θ(τ). Nevertheless, as earlier, we may solve it near n∼ 1 as follows.
Near n∼ 1, we may expand the boundary mode θ(τ) as follows <cit.>
e^iθ(τ)=e^iτ[1+iδθ(τ)] ,
where δθ(τ) is of order (n-1). Next we use the following relation <cit.>
e^2iτ{F_k,e^iτ}=-1/2(1+H)(δθ”'+δθ') ,
where H is the Hilbert transform[The Hilbert transform is defined through the action
H · e^imτ=-sgn(m)e^imτ , H· 1=0 .
] which projects out the negative frequency modes of δθ. Note that, except for in the Schwarzian term {F_k,e^iτ}, the functions F_k appear with a factor of (n-1) in <ref> and we may keep only up to the zeroth order solutions given in <ref>.
Restoring the temperature dependence utilizing the scaling Φ_r→2πΦ_r/β and Fourier transforming to the k basis, the k=1 mode requires
∫_0^βdτ e^-2π iτ/β[c_Iℱ_I+c_IIℱ_II-24πΦ_r/β∂_τR(τ)]=0
which leads to the condition
c_I(sinh[π(a-σ_1)/β]/sinh[π(a+σ_1)/β])+c_II(sinh[π(a-σ_2)/β]/sinh[π(a+σ_2)/β])=12πΦ_r/βcsch(2π a/β)
In the β→∞ limit, this reduces to a cubic equation in a as follows
c_I(a-σ_1/a+σ_1)+c_II(a-σ_2/a+σ_2)=6Φ_r/a
The above equation is easily solved for a but the solutions are not quite illuminating. Instead, we take the simplifying limit σ_1=σ_2=σ to get the quadratic equation
a (a-σ )-6 Φ_r/(c_I+c_II) (a+σ)=0 .
Solving for the position of the conical singularity a, we obtain
a^*=(c_I+c_II) σ +6 Φ_r+√(((c_I+c_II) σ +6 Φ_r)^2+24 (c_I+c_II) σΦ_r)/2 (c_I+c_II)
which is identical to the position of the QES obtained in <ref>.
§ SUMMARY AND DISCUSSION
In this article, we have investigated the entanglement structure of various bipartite states in a hybrid manifold where JT gravity is coupled to two non-gravitating CFT_2 baths. To this end, we first construct this hybrid theory through a dimensional reduction of a 3d geometry. The 3d geometry is composed of a fluctuating EOW brane acting as an interface between two distinct AdS_3 geometries. Performing a partial Randall-Sundrum reduction in the neighbourhood of the fluctuating brane results in JT gravity on the EOW brane. Furthermore, utilizing the usual AdS/CFT correspondence on the remaining wedges of the two AdS_3 geometries leads to two non-gravitating CFT baths on two half lines. In the limit of large brane tension, we obtain the 2d effective theory of JT gravity coupled to conformal matter on the hybrid “brane+baths” manifold.
Furthermore, we have provided a prescription for computing the generalized Rényi entropy for a subsystem in this hybrid manifold. In particular, for this scenario where JT gravity is coupled to two CFT_2 baths, the dominant replica wormhole saddle is modified to provide two independent mechanisms for obtaining an island region. Apart from the conventional origin of the island region, where the degrees of freedom of the CFT in the gravitational region are shared by both bath CFTs, we also observe cases where an island region is obtained for CFT^I even though no CFT^I bath degrees of freedom are considered. We have called such regions induced islands, as a subsystem purely in CFT^II induces an island region even for CFT^I.
In the doubly holographic perspective this phenomenon corresponds to the double-crossing geodesic, where the RT surface crosses from AdS^II to AdS^I and returns to AdS^II.
Subsequently, we obtain the entanglement entropy for subsystems comprised of semi-infinite and finite intervals in CFT_2^I,II coupled to extremal as well as eternal JT black holes. We perform computations from the effective 2d perspective using the generalized entanglement entropy formula and find agreement with the doubly holographic computation in the large tension limit of the EOW brane for all the cases. We also plot Page curves for the different configurations of the subsystems and observe transitions between different phases of the entanglement entropy.
We have also solved the so-called conformal welding problem for the replica wormhole saddle in the effective “brane+bath” scenario and obtained the location of the island for semi-infinite subsystems in the baths. To this end, we begin with the lower dimensional effective picture obtained from the AdS/ICFT setup discussed in <cit.> and reproduce the QES result. Subsequently, this is extended to the case with JT gravity on the EOW brane, which substantiates the island computations for the corresponding configuration.
There are several future directions to explore. For finite intervals in the baths coupled to JT gravity, the location of the islands may be obtained through the conformal welding problem with the replica wormhole by extending the analysis in <cit.>. It will be interesting to explore the nature of mixed-state entanglement in Hawking radiation from the JT black hole via different entanglement and correlation measures such as the reflected entropy <cit.>, the entanglement negativity <cit.>, the entanglement of purification <cit.> and the balanced partial entanglement <cit.>. Furthermore, our setup can be extended to include holographic models of interface CFTs which involve two interface branes separating three bulk regions <cit.>. A partial dimensional reduction on different bulk wedges would result in two fluctuating JT branes with black holes which interact through the CFT_2 baths on a hybrid seagull-like geometry. This provides yet another exotic model of Hawking radiation which may lead to new insights into the information loss problem.
The work of GS is partially supported by the Dr Jagmohan Garg Chair Professor position at the Indian Institute of Technology, Kanpur.
§.§ Appendix: Finite interval ICFT setup
S_gen=L_II/4 G_3 (log(sec(ψ _2) (σ _P+y_P)^2/y_P)+log(sec(ψ _2) (σ _Q+y_Q)^2/y_Q))
+ L_I/4 G_3log( sec^2(ψ _1) (y_P-y_Q)^2/y_P y_Q) .
In the last two terms, we have used the Brown-Henneaux formula
L_I/4 G_3[ log((L_I+L_II)^2 (θ k-1)^2/δ ^2 θ k
L_I^2)]+L_II/4 G_3[log(θ(L_I+L_II) (k y_Q+σ _Q)^2/δ k L_II
y_Q)+ log((L_I+L_II) (σ _Q+y_Q)^2/δ L_II
y_Q)]
L_I/4 G_3[ log(θ k (L_I+L_II)^2/δ ^2 L_I^2)]+L_II/4 G_3[log(θ(L_I+L_II) (k y_Q+σ _Q)^2/δ k L_II y_Q)+ log((L_I+L_II) (σ _Q+y_Q)^2/δ L_II y_Q)]
∂_k S_gen=0 ⇒ k (L_I+L_II) y_Q+(L_I-L_II) σ _Q=0
∂_y_Q S_gen=0 ⇒ L_II(k y_Q^2-σ _Q^2)=0
y_Q=-(L_I+L_II) σ _Q/L_I-L_II, y_P=(L_II-L_I) σ _P/L_I+L_II
L_I/4 G_3[log((L_I-L_II)^2 σ _P/δ ^2 L_I^2 σ _Q)]+L_II/4 G_3[log(-4 L_IIσ _P/δ L_I-δ L_II)+log(-4 L_IIσ
_Q/δ L_I-δ L_II)]
|
http://arxiv.org/abs/2306.04668v2
|
20230607145302
|
SMRVIS: Point cloud extraction from 3-D ultrasound for non-destructive testing
|
[
"Lisa Y. W. Tang"
] |
eess.IV
|
[
"eess.IV",
"cs.CV",
"cs.LG"
] |
SMRVIS: Point cloud extraction from 3-D ultrasound for non-destructive testing
Lisa Y. W. Tang
July 31, 2023
=================================================================================================
We propose to formulate point cloud extraction from ultrasound volumes as an image segmentation problem. Through this convenient formulation, a quick prototype exploring various variants of the Residual Network, U-Net, and the Squeeze-and-Excitation Network was developed and evaluated. This report documents the experimental results, compiled in a two-week period using a training dataset of five labeled ultrasound volumes and 84 unlabeled volumes, as part of a submission to the open challenge “3D Surface Mesh Estimation for CVPR workshop on Deep Learning in Ultrasound Image Analysis”. Based on the external evaluation performed by the challenge's organizers, the framework ranked first place on the challenge's https://www.cvpr2023-dl-ultrasound.com/Leaderboard. Source code is shared with the research community at a public repository: https://github.com/lisatwyw/smrvis.
Keywords:
Ultrasound volumes; Non-Destructive Testing; Pipe; Manufacture Defects; Attention U-Net; Recurrent-Residual U-Net; Squeeze-Excitation U-Net; U-Net++; W-Net
§ INTRODUCTION
As part of a submission to an open challenge entitled “3D Surface Mesh Estimation for Computer Vision and Pattern Recognition workshop on Deep Learning in Ultrasound Image Analysis”, this report presents experimental results compiled using a training dataset of five labeled ultrasound volumes and 85 unlabeled volumes. Due to resource and time constraints <cit.>, we focus on finding a workable solution that can be quickly prototyped and tested. Accordingly, we propose to formulate three-dimensional mesh estimation from ultrasound volumes as an image segmentation problem. Source code is shared with the research community at
<https://github.com/lisatwyw/smrvis>.
We nickname the proposed framework SMRVIS (Surface Mesh Reconstruction Via Image Segmentation), partly to draw an analogy to an older framework called SMRFI <cit.> that formulated shape matching as a feature image registration problem. Unlike this prior work, which embedded 2D shapes with feature values and assigned the computed feature values to the nearest voxel of the embedding space, the present framework simply encodes the positions of the reference mesh vertices with values in the range of [0, 1].
Rather than performing segmentation in 3D, our current approach performs a series of segmentations of thin sections by treating overlapping consecutive slices as 3-channel inputs (see the sketch below). Our mesh-embedding scheme does not limit us to the deployment of two-dimensional models only. However, as the number of labeled volumes in this challenge dataset is relatively small (n=5), the choice of two-dimensional models allowed us to adopt a patch-based training approach, which has been used successfully in previous open challenges <cit.>. To this end, we have explored and evaluated W-Net, R2 U-Net, SE U-Net, Attention U-Net, and U-Net++, which are variants of the U-Net <cit.>.
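As an illustration of this slice-stacking step, the snippet below is a minimal sketch rather than an excerpt of the released codebase; the array layout, the stride of one slice, and the edge replication at the volume borders are assumptions made for the example only.
import numpy as np
def volume_to_slabs(vol):
    # vol: (n_slices, H, W) intensities in [0, 1]; returns (n_slices, H, W, 3) inputs
    n = vol.shape[0]
    idx = np.clip(np.arange(n)[:, None] + np.array([-1, 0, 1]), 0, n - 1)  # replicate edge slices at the borders
    return vol[idx].transpose(0, 2, 3, 1)
# example: a down-sampled volume of 1281 slices at 256 x 256
slabs = volume_to_slabs(np.random.rand(1281, 256, 256).astype('float32'))
print(slabs.shape)  # (1281, 256, 256, 3)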
§.§ Background
Nowadays, a vast majority of segmentation frameworks employ an encoder-decoder neural network structure that is popularized by the U-Net architecture <cit.>. In this architecture, network components called skip connections allow data from down-sampling layers to be rerouted back to the up-sampling layers.
Since its initial success, researchers have proposed countless variants of the U-Net to enhance its performance <cit.>. For instance, Xia et al. developed the W-Net in 2017, which joins two fully convolutional neural network (CNN) branches into an autoencoder such that the first branch encodes the input data into a fuzzy segmentation while the second branch reconstructs the input data from the fuzzy segmentation. Since its proposal, the W-Net has been shown to have numerous successful applications such as retinal vessel segmentation <cit.> and generation of Chinese characters <cit.>.
In 2017, Hu et al. further enhanced U-Nets by incorporating squeeze-and-excitation blocks at the end of each convolutional block in order to strengthen the inter-dependencies between channels. In 2018, Oktay et al. proposed to incorporate an attention module within each skip connection that drives the overall model to focus on input regions of greater importance. Around the same time, Zhou et al. proposed the U-Net++, which aggregates features across different scales via its specially designed skip connections, while Alom et al. <cit.> proposed the recurrent-residual U-Net (R2-U-Net), which combines the strengths of recurrent connections and residual networks and was shown to improve the quality of the produced feature representations <cit.>.
In 2022, Kugelman et al. <cit.> conducted a comparative study to benchmark some of the aforementioned U-Net variants in the context of retinal tissue extraction from optical coherence tomography and recommended the adoption of R2-U-Nets.
In this work, we adapted their open-source implementation <cit.>, which employs the same deconvolutional block for all U-Net variants. Code listing <ref> provides an abstraction of a basic U-Net implementation. An accompanying notebook is available at <https://github.com/lisatwyw/unet_variants/blob/main/tf_U_Net.ipynb>.
[label=lst:unet,caption=Schematics of basic U-Net,language=python,basicstyle=]
from tensorflow.keras.layers import BatchNormalization, Add, Multiply, Concatenate, Activation, Dense
from tensorflow.keras.layers import Input, Conv2D, Conv2DTranspose
from tensorflow.keras.layers import GlobalAveragePooling2D, MaxPooling2D
from tensorflow.keras.models import Model
# 2D layers are imported here so the listing runs as-is; swap in the 3D counterparts (Conv3D, MaxPooling3D, ...) for volumetric variants
def stack_bn_act( x, NF, KS ):
for i, ks in enumerate( KS ):
x = Conv2D( NF, ks, padding='same' )(x)
x = BatchNormalization()(x)
x = Activation('relu')(x)
return x
def conv_block( x, NF, KS ):
o = Conv2D( NF, kernel_size= KS[0], padding='same' )(x)
o = BatchNormalization()(o)
o = Activation( 'relu' )(o)
o = stack_bn_act(o, NF, KS[1:]) # defined above
return o
def deconv_block( x, NF, KS=2, strides=(2,2) ): # strided transposed convolution up-samples the feature map
o = Conv2DTranspose( NF, KS, strides=strides, padding='same' )(x)
o = BatchNormalization()(o)
o = Activation( 'relu' )(o)
return o
# ————– hyperparameters tested in ablation study
AC, BS, NF = 'sigmoid', 8, 16
NX, NY, NDIM, n_slices = 224, 224, 2, 3
ks_s = [(3,3)]*2 # encoder kernel sizes (assumed 3x3; unspecified in the original listing)
ks = [(1,1)]*2 # decoder kernel sizes
inp = Input( (NX, NY, 3) )
o1 = conv_block( inp, NF, ks_s ); p1 = MaxPooling2D()( o1 )
o2 = conv_block( p1, NF*2, ks_s ); p2 = MaxPooling2D()( o2 )
o3 = conv_block( p2, NF*4, ks_s ); p3 = MaxPooling2D()( o3 )
o4 = conv_block( p3, NF*8, ks_s ); p4 = MaxPooling2D()( o4 )
o5 = conv_block( p4, NF*16, ks_s ); p5 = MaxPooling2D()( o5 )
o6 = conv_block( Concatenate()( [deconv_block( o5, NF*8, strides=(2,2) ), o4]), NF*8, ks)
o7 = conv_block( Concatenate()( [deconv_block( o6, NF*4, strides=(2,2) ), o3]), NF*4, ks)
o8 = conv_block( Concatenate()( [deconv_block( o7, NF*2, strides=(2,2) ), o2]), NF*2, ks)
o9 = conv_block( Concatenate()( [deconv_block( o8, NF*1, strides=(2,2) ), o1]), NF*1, ks)
out = Activation( AC )( Conv2D( n_slices, 1 )( o9 ) )
model = Model(inp, outputs=out )
[label=lst:att,caption=Select network components highlighted in the main text of Section 1.1,language=python,basicstyle=]
def attention_block( input_block, gate, ks=(1,1) ):
x = Conv2D( NF, ks )(input_block)
x = BatchNormalization()(x)
g = Conv2D( NF, ks )(gate)
g = BatchNormalization()(g)
att_map = Add()( [g, x] )
att_map = Activation( 'relu' )(att_map)
att_map = Conv2D( 1, ks )(att_map)
att_map = Activation( 'sigmoid')(att_map)
x = Multiply()( [input_block, att_map ] )
return x
... # identical to Code Listing 1
o5 = conv_block( p4, NF*16, ks_s ); p5 = MaxPooling2D()( o5 )
# decoder: attention-gate each skip connection, then fuse (interleaved so each o_i is defined before it is used)
c6 = attention_block( deconv_block( o5, NF*8, strides=(2,2) ), o4 )
o6 = conv_block( Concatenate()( [c6, o4] ), NF*8, ks )
c7 = attention_block( deconv_block( o6, NF*4, strides=(2,2) ), o3 )
o7 = conv_block( Concatenate()( [c7, o3] ), NF*4, ks )
c8 = attention_block( deconv_block( o7, NF*2, strides=(2,2) ), o2 )
o8 = conv_block( Concatenate()( [c8, o2] ), NF*2, ks )
c9 = attention_block( deconv_block( o8, NF*1, strides=(2,2) ), o1 )
o9 = conv_block( Concatenate()( [c9, o1] ), NF*1, ks )
out = Activation( AC )( Conv2D( n_slices, 1 )( o9 ) )
att_model = Model(inp, outputs=out )
[label=lst:resunet,caption=Select network components highlighted in the main text of Section 1.1,language=python,basicstyle=]
def residual_block( x, NF, KS, strides=(1,1) ):
x = Conv2D( NF, KS[0], padding='same', strides=strides )(x)
x = BatchNormalization()(x)
x = Activation( 'relu' )(x)
o = stack_bn_act( x, NF, KS[1:] ) # one fewer conv; the first conv is applied above
x = Add()( [o, x] )
return x
NC=2
ks = [3]*NC
o1 = residual_block( inp, NF*1, ks ); p1= MaxPooling2D( (2,2) )( o1 )
o2 = residual_block( p1, NF*2, ks ); p2= MaxPooling2D( (2,2) )( o2 )
o3 = residual_block( p2, NF*4, ks ); p3= MaxPooling2D( (2,2) )( o3 )
o4 = residual_block( p3, NF*8, ks ); p4= MaxPooling2D( (2,2) )( o4 )
o5 = residual_block( p4, NF*16, ks );
o6 = Concatenate()( [deconv_block( o5, NF*8, 2, strides=(2,2) ), o4])
o6 = residual_block( o6, NF*8, ks )
o7 = Concatenate()( [deconv_block( o6, NF*4, 2, strides=(2,2) ), o3])
o7 = residual_block( o7, NF*4, ks )
o8 = Concatenate()( [deconv_block( o7, NF*2, 2, strides=(2,2) ), o2])
o8 = residual_block( o8, NF*2, ks )
o9 = Concatenate()( [deconv_block( o8, NF*1, 2, strides=(2,2) ), o1])
o9 = residual_block( o9, NF*1, ks )
... # same as Code Listing 1
[label=lst:nn,caption=Select network components highlighted in the main text of Section 1.1,language=python,basicstyle=]
def SE_block( input_block, ratio=2 ):
n_channels = input_block.shape[-1]
x = GlobalAveragePooling2D()(input_block) # squeeze step
x = Dense( n_channels//ratio, activation='relu')(x)
x = Dense( n_channels, activation='sigmoid')(x)
x = Multiply()( [input_block, x ] ) # excite step: channel-wise reweighting
return x
§ METHODS
§.§ Materials
A training set of 90 ultrasound volumes was provided by the challenge organizers. Each scan captures piece(s) of a steel pipe, potentially containing artifacts inside these pipes. Corresponding surface meshes of the pipe (pieces) were created by an “experienced data analyst” <cit.>. Five of these surface meshes (corresponding to volumes 1 to 5) were provided to the challenge participants and are herein referred to as reference masks. Figures 1-5 show examples of the surface renderings of these reference meshes. As the reference labels for the remaining 85 volumes were not provided at the time of the challenge, these volumes were mainly used in this study as test samples.
§.§ Preprocessing
Each of the reference meshes was first encoded into an image representation. To do so, the vertices of each mesh were read into memory. Next, a binary mask was created to encode the mesh vertices, taking into account the voxel spacing of the ultrasound volumes. To facilitate learning, the point cloud mask was dilated so that the edges of the binary mask soften, as sketched below. An alternative approach would be to apply a Gaussian blur <cit.>, but we found this simpler approach sufficient and computationally more efficient.
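The following is a minimal sketch of this encoding step; the axis ordering, the physical-to-voxel conversion, and the number of dilation iterations are illustrative assumptions, not the exact settings used in our experiments.
import numpy as np
from scipy.ndimage import binary_dilation
def encode_vertices(verts, vol_shape, spacing, n_dilations=2):
    # verts: (N, 3) vertex coordinates in physical units; spacing: voxel spacing per axis
    mask = np.zeros(vol_shape, dtype=bool)
    ijk = np.round(np.asarray(verts) / np.asarray(spacing)).astype(int)
    ijk = np.clip(ijk, 0, np.asarray(vol_shape) - 1)
    mask[ijk[:, 0], ijk[:, 1], ijk[:, 2]] = True
    mask = binary_dilation(mask, iterations=n_dilations)  # soften/thicken the point cloud
    return mask.astype('float32')  # values in [0, 1], used as the segmentation target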
To read in the corresponding ultrasound volume, a meta file was created for each of the raw ultrasound files (an example is given in Sec. <ref>; the script is also provided under <https://github.com/lisatwyw/smrvis/blob/main/utils/write_meta.sh>) and the Python package SimpleITK was used (an example code snippet follows). To ensure that the fitted models would be robust to noise, we preprocessed the input ultrasound data with only two steps: down-sampling the image resolution and rescaling the intensity values to [0,1].
[label=lst:sitk,caption=Code listing for reading data using SimpleITK and Plydata,language=python,basicstyle=]
import SimpleITK as sitk
vols = {}
i = 1 # sample_id
filename='train_data/training/volumes/scan_
header=sitk.ReadImage(filename)
vols[i]=sitk.GetArrayFromImage(header)
from plyfile import PlyData
plydata, verts, faces = {}, {}, {}
filename='train_data/training/meshes/scan_
plydata[i] = PlyData.read( filename ) # read the reference mesh (the filename string is truncated in the original listing)
vx = plydata[i]['vertex']['x']
vy = plydata[i]['vertex']['y']
vz = plydata[i]['vertex']['z']
verts[i] = [ (vx[d],vy[d],vz[d]) for d in range(len(vx)) ]
num_faces= plydata[i]['face'].count
faces[i] = [ plydata[i]['face'][d][0] for d in range(num_faces) ]
[language=python,label=lst:pyvista,basicstyle=,caption=Code listing for reading in mesh data and generating their screenshots with Pyvista.]
import pyvista
# we need to start the frame buffer even if plotting offline
pyvista.start_xvfb()
plotter = pyvista.Plotter(off_screen=True)
mesh = pyvista.read(ply_filename)
plotter.add_mesh( mesh, opacity=.3, color='grey' )
plotter.add_title('Estimated mesh for volume#
plotter.show( screenshot = out + '_screenshot.png')
§.§ Models
Based on observations and evidence from prior studies <cit.>, we elected to explore the following variants of the U-Net: W-Net (Wnet), the recurrent-residual U-Net (R2-Unet), Squeeze and Excitation U-Net (SE-Unet), U-Net++, and an Attention U-Net (Att-Unet).
§.§ Training setup
We divided the provided labeled volumes into three non-overlapping subsets: a validation set built from the slices of one volume and used for early stopping, a test set constructed from another volume to evaluate the training progress, and a training set comprising the remaining volumes. For instance, a trial might involve volume 5 as the test volume, volume 4 as the validation set, and volumes 1-3 as the training set.
§.§.§ Data augmentation
We augmented the training dataset by perturbing the input training set on-the-fly with random translations, rotations, and mirroring on the x- and y-axes. We examined two approaches: applying the different types of transforms simultaneously, or one type at a time.
We also explored the effects of a training schedule wherein training volumes that have been reoriented entirely are only presented at a later phase.
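A minimal sketch of the on-the-fly augmentation is given below; the transform ranges and interpolation orders are illustrative assumptions rather than the exact settings used in the experiments.
import numpy as np
from scipy.ndimage import rotate, shift
def augment(image, mask, rng, exclusive=True):
    # image, mask: 2-D arrays of identical shape
    ops = ['translate', 'rotate', 'mirror']
    chosen = [rng.choice(ops)] if exclusive else ops
    if 'translate' in chosen:
        t = rng.uniform(-10, 10, size=2)
        image, mask = shift(image, t, order=1), shift(mask, t, order=0)
    if 'rotate' in chosen:
        a = rng.uniform(-15, 15)
        image = rotate(image, a, reshape=False, order=1)
        mask = rotate(mask, a, reshape=False, order=0)
    if 'mirror' in chosen:
        ax = int(rng.integers(0, 2))  # mirror on the x- or y-axis
        image, mask = np.flip(image, axis=ax), np.flip(mask, axis=ax)
    return image, mask
rng = np.random.default_rng(0)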
§.§.§ Optimization
We employed the Adaptive Moment Estimator (ADAM) optimization algorithm with default parameters and explored values of 0.0001, 0.008, 0.001, and 0.1 as the initial learning rate in the context of three learning rate scheduling schemes, namely, cyclical decay, cosine decay, and polynomial decay <cit.>.
Training was permitted to run for 300 epochs or terminated early when there was no reduction in the metric computed on the validation set. We empirically explored the use of Dice and Jaccard as the validation metric but found that training diverged when these two metrics were used, most likely due to the scarcity of mesh vertices in relation to the enclosing volume. We thus elected to use the same loss function (but computed on the validation set) for determining the termination criterion. Example progress plots are shown in the Appendix.
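A minimal sketch of this optimization set-up is shown below; the decay_steps value and the early-stopping patience are assumptions made for illustration.
import tensorflow as tf
initial_lr, decay_steps = 1e-3, 10000
schedule = tf.keras.optimizers.schedules.CosineDecay(initial_lr, decay_steps)
# alternatively: tf.keras.optimizers.schedules.PolynomialDecay(initial_lr, decay_steps, end_learning_rate=1e-5)
optimizer = tf.keras.optimizers.Adam(learning_rate=schedule)
early_stop = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=20, restore_best_weights=True)
# model.compile(optimizer=optimizer, loss='binary_crossentropy')
# model.fit(..., epochs=300, callbacks=[early_stop])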
§.§ Ablation studies conducted
We conducted ablation trials to answer design questions surrounding the following components and present comparisons in the Results section and the Appendix.
* Image resolution (IR): how might the image resolution used to train the models affect accuracy? (According to a 2019 review <cit.>, ultrasound scanners typically acquire data that will be reconstructed to matrix size of 512 × 512 with 256 intensity levels. Hence, we explored the standard resolution of 256 × 256 and the non-standard resolution of 384 × 384, which correspond to down-sampling factors of three and two, respectively);
* Activation function (AC): prior works advocate <cit.> the use of the sigmoid, hard sigmoid, and hyperbolic tangent functions based on datasets involving magnetic resonance and computed tomography data; do these results generalize to ultrasonic data?
* Loss function (LS): previous studies <cit.> have observed an interplay between the loss function and the activations for image classification; we hypothesize that their results may not generalize to ultrasound point extraction and hence explore Dice, binary cross entropy (BCE) and binary focal cross entropy (BFCE) loss terms;
* Selection scheme (SE): should all ultrasound sections be presented to the model during training (SE=1) or only sections containing the reference mesh (SE=2) be presented to ease training?
* Random transform (RT): how do the type(s) of random transformations affect the training progress? e.g. should all random transforms be allowed or only one type at a time?
* Mesh encoding (EN): how should the mesh vertices be encoded in the image space? Would attenuating boundaries of the surface mesh help training when this encoding strategy is used in conjunction with alternative loss functions such as pixel-wise mean squared difference or absolute difference?
* Number of filters (NF): would reducing the number of filters from the default size of 16 to 8 impact performance severely?
§.§ Evaluation metric and model selection
Following the Challenge's evaluation protocol, we employ the Chamfer distance (CD) measure, which is defined as:
CD(𝒮, 𝒯) = 1/|𝒮| ∑_x ∈ 𝒮 min_y ∈ 𝒯 ‖ x - y ‖^2_2 + 1/|𝒯| ∑_y ∈ 𝒯 min_x ∈ 𝒮 ‖ x - y ‖^2_2
where 𝒮 and 𝒯 denote source and target point clouds, respectively.
To enable computation without needing a graphics card, we approximated the distance using v=10,000 points randomly drawn from each point cloud as we find the measured Chamfer distance to be relatively stable for this choice of v.
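A minimal sketch of this sub-sampled approximation, using a k-d tree for the nearest-neighbour searches; the random-sampling details are assumptions.
[language=python]
import numpy as np
from scipy.spatial import cKDTree

def chamfer_distance(source, target, v=10_000, seed=0):
    # source, target: (N, 3) and (M, 3) arrays of point coordinates
    rng = np.random.default_rng(seed)
    s = source[rng.choice(len(source), size=min(v, len(source)), replace=False)]
    t = target[rng.choice(len(target), size=min(v, len(target)), replace=False)]
    d_st, _ = cKDTree(t).query(s)   # distance from each source point to its nearest target point
    d_ts, _ = cKDTree(s).query(t)   # and vice versa
    return float(np.mean(d_st ** 2) + np.mean(d_ts ** 2))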
§.§ Mesh surface extraction
We employ the Python packages Veko and Pyvista to extract isosurfaces and to visualize the extracted isosurfaces for volume rendering of the mesh surfaces, respectively (code listing <ref>). Figures 1-5 illustrate the reference meshes rendered for volumes 1-5.
§.§ Implementation and deployment details
As mentioned earlier, open-source code for the U-Net variants <cit.> was adapted so that different activation functions could be tested.
All experiments were conducted in a virtual environment with Python 3.10, Tensorflow 2.12 and Torch 1.13.0. Graphical processing unit (GPU) cards explored include NVIDIA Tesla V100, Tesla T4, and P1000-SXM2 (CUDA Version 12.0).
To maximize reproducibility <cit.>, source scripts will be updated on the repository with human- (and machine-) readable instructions
for emulating the computing environment needed to run these scripts in the form of a docker container at <https://github.com/lisatwyw/smrvis/>.
§ RESULTS
When each ultrasound volume of 1281 slices was down-sampled to an axial resolution of 256 × 256, inference took less than one minute on a single V100 GPU and 431.2 ± 0.9 seconds (about 7 minutes) on a single-core CPU.
Figures <ref>-<ref> each illustrate the point cloud extracted from a training volume by the proposed framework alongside the point cloud provided in the reference mesh file.
Figure <ref> visualizes results in two dimensions, while Figure <ref> presents the isosurface computed from a randomly selected test volume.
We next present quantitative results in Tables <ref>-<ref>, which generally report the Chamfer distances between the reconstructed and reference point clouds.
More specifically, Table <ref> presents trials that failed to converge, while Tables <ref>-<ref> present trials that yielded satisfactory Chamfer distances (below 95.0). In generating these tables, samples from select volumes were used as the training and validation sets while a left-out volume was used as the test (unseen) sample, as described in Section <ref>; these are marked in the tables as `(val)' and `(tst)' to denote the validation and test set, respectively.
Based on the results from the trials summarized by these tables, volumes 4 and 5 had the lowest and highest Chamfer distance, respectively. This may be explained by Figure <ref>, which shows that volume 5 captured a pipe with an object inside, rendering an obstacle to achieving low Chamfer distances between the extracted and reference point clouds.
Results from Table <ref> suggest that the choice of encoding scheme for the reference mesh labels did not impact performance in significant ways. Empirical results (Appendix) suggest that taking the average of contiguous slices may be superior to taking the solution of the most confident slice.
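A sketch of the slice-averaging strategy, assuming the per-slice predictions are stacked into a probability volume; the window size and the thresholding step are illustrative.
[language=python]
import numpy as np
from scipy.ndimage import uniform_filter1d

def aggregate_slices(pred_volume, window=3):
    # pred_volume: (n_slices, H, W) probabilities predicted slice-by-slice by the 2D model;
    # average each slice with its neighbours instead of keeping only the most confident slice
    return uniform_filter1d(pred_volume.astype(np.float32), size=window, axis=0)

# point cloud: indices of voxels whose averaged probability exceeds a threshold
# pts = np.argwhere(aggregate_slices(pred_volume) >= 0.25)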
Table <ref> lists the hyper-parameters of the configurations tested for the U-Net variants and example statistics on training convergence.
More results are presented in the Appendix; in short, we did not find a significant difference between the following:
* 384 × 384 (IR=2) vs 256 × 256 (IR=3);
* Sigmoid, hard sigmoid, and hyperbolic tangent activations (these lead to slightly different effective sizes <cit.>, which impacts the choice of threshold values);
* Inclusion of the Dice score, and BFCE vs. BCE (neither improved accuracy);
* Use of 8 filters vs. 16 filters (comparable results).
Conversely, the following affected the success of training:
* Chance of training success was increased when the encoded reference meshes were dilated;
* Chance of training success increased when the second scheme was adopted for sample selection (omitting training samples that did not contain the mesh and were thus less relevant);
* Simultaneous application of different random transformations may render training more difficult; we found a progressive scheme of introducing multiple transformations only in a later phase of training to be an effective solution.
In summary, the models did not appear to have over-fitted to the training set as including the hardest sample (volume 5) did not lead to reduction in error. There appears to be no significant difference due to the choice of model,
image resolution, and random transformation schemes (more results presented in the Appendix). We observed that training diverged when reoriented volumes were presented too early during model training.
Chamfer distances per volume (columns correspond to volumes 1-5):

Mean 0.23: 77.4 73.8 71.8 65.9 93.6
Max 0.26: 77.4 74.0 71.2 66.0 94.8
Mean 0.23: 78.2 73.7 72.2 66.0 93.3
Max 0.26: 78.0 73.9 71.8 66.1 93.5

R2Unet 0.14: 82.0 74.8 73.0 65.5 92.1
R2Unet 0.21: 81.8 71.4 72.7 65.4 92.2
R2Unet 0.25: 82.1 71.2 72.1 65.6 92.2
R2Unet 0.35: NIL 71.7 71.3 65.3 91.1
Attention Unet 0.19: 84.0 75.1 75.9 68.4 94.0
Attention Unet 0.28: 82.1 73.4 72.5 66.3 93.2
Attention Unet 0.37: 83.2 73.7 70.5 66.1 93.2
§ DISCUSSIONS
While the 2022 benchmark study <cit.> hinted at a trade-off between memory requirements, training time, and accuracy among the U-Net variants, our study did not observe an obvious difference between the U-Net variants in terms of accuracy or training time, potentially due to the small number of labeled volumes.
Nevertheless, the use of eight filters in U-Net++ did not increase error, thereby suggesting room for optimizing the hyper-parameters to best balance model width and depth without compromising accuracy.
Recall that the two key factors that render CNNs computationally expensive include the number of model parameters and the number of multiply-add (MAD) operations required by each. Depth-separable and group convolutions are two strategies to balance between the number of model parameters and the number of MAD operations needed. To optimize both factors, Liu et al. proposed ConvNeXt that employs convolutional blocks composed of depth-separable convolution followed by a network component called inverted bottleneck and point-wise convolutions. Following this line of design thinking, Heinrich and Hagenah <cit.> very recently (2023) propose two orthogonal strategies specifically to lower the computational requirements of the self-configured, supervised learning framework known as the “no new U-Net“ (nnU-Net) <cit.>. Firstly, partial convolution uses “T-shaped” spatial convolution previously proposed by Chen et al. <cit.> to perform spatial convolution only on select channels and subsequently applies point-wise operators on the remaining channels within the inverted bottleneck component. Secondly, re-parameterization allows one to reduce the size of the model after it has been trained. This was made possible by placing batch normalization between the first and second convolution blocks and dropping the use of non-linear activation functions <cit.>. According to the presented experimental results <cit.>, these two strategies led to reduction of model sizes by a factor of 3 to 4 and shorter inference time of about twice as fast when compared to the original version of nnU-Net.
The objectives of the present study did not involve placing restrictions on memory consumption, computational demands, and inference times. Future researchers may further tackle these constraints by exploring and evaluating these latest state-of-the-art extensions of U-Net <cit.>.
Due to time constraints, a major limitation of the present framework is its failure to leverage the 85 unlabeled ultrasound volumes provided by the workshop challenge. Future work will quantify the potential advantages of including pseudo labels into our framework and/or the use of other reinforcement learning techniques. Other strategies that generate point clouds directly from data <cit.> could also be examined.
§ CONCLUSION
In this brief note, we explored the feasibility of surface mesh reconstruction via point cloud estimation cast as an image-to-mask generation problem. This initial framework opens the door to the possibility of leveraging (pretrained) deep and wide networks published in the wild. The source code developed during the course of this experimental prototyping period will be posted at <https://github.com/lisatwyw/smrvis>. We hope the research communities will find this quick prototype, consisting of a few Python scripts, approachable.
Acknowledgements
We sincerely thank DarkVision Technologies Inc. for provision of the ultrasound dataset and hosting this exciting challenge. The author also expresses deep gratitude to Tong Tsui Shan and Kim Chuen Tang for their support.
pointcloud Rao J, Wang J, Kollmannsberger S, Shi J, Fu H, Rank E. Point cloud-based elastic reverse time migration for ultrasonic imaging of components with vertical surfaces. Mechanical Systems and Signal Processing. 2022 Jan 15;163:108144.
sensors Verykokou S, Ioannidis C. An Overview on Image-Based and Scanner-Based 3D Modeling Technologies. Sensors. 2023 Jan;23(2):596.
winner23 Eisenmann M, Reinke A, Weru V, Tizabi MD, Isensee F, Adler TJ, Ali S, Andrearczyk V, Aubreville M, Baid U, Bakas S. Why is the winner the best?. InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition 2023 (pp. 19955-19966).
ndtdata Virkkunen I, Koskinen T, Jessen-Juhler O, Rinta-Aho J. Augmented ultrasonic data for machine learning. Journal of Nondestructive Evaluation. 2021 Mar;40:1-1.
viz3d Osipov, L.V., Kulberg, N.S., Leonov, D.V. et al. 3D Ultrasound: Visualization of Volumetric Data. Biomed Eng 54, 149–154 (2020). https://doi.org/10.1007/s10527-020-09993-3
challenge DarkVision Technologies Inc. Nov 2022. The Ultrasound Dataset Challenge. Retrieved Mar 2023 from https://www.cvpr2023-dl-ultrasound.com/.
smrfi Tang L, Hamarneh G. SMRFI: Shape matching via registration of vector-valued feature images. In 2008 IEEE Conference on Computer Vision and Pattern Recognition 2008 Jun 23 (pp. 1-8). IEEE.
bce Nieradzik L, Scheuermann G, Saur D, Gillmann C. Effect of the output activation function on the probabilities and errors in medical image segmentation. arXiv preprint arXiv:2109.00903. 2021 Sep 2.
activations23 Dubey SR, Singh SK, Chaudhuri BB. Activation functions in deep learning: A comprehensive survey and benchmark. Neurocomputing. 2022 Jul 3.
benchmark22 Gut D, Tabor Z, Szymkowski M, Rozynek M, Kucybała I, Wojciechowski W. Benchmarking of deep architectures for segmentation of medical images. IEEE Transactions on Medical Imaging. 2022 Jun 6;41(11):3231-41.
Kugelman Kugelman J, Allman J, Read SA, Vincent SJ, Tong J, Kalloniatis M, Chen FK, Collins MJ, Alonso-Caneiro D. A comparison of deep learning U-Net architectures for posterior OCT retinal layer segmentation. Scientific Reports. 2022 Sep 1;12(1):14888.
tang2020 Lisa YW Tang, Harvey O Coxson, Stephen Lam, Jonathon Leipsic, Roger C Tam, and Don D Sin, “Towards large-scale case-finding: training and validation of residual networks for detection of chronic obstructive pulmonary disease using low-dose CT,” The Lancet Digital Health, vol. 2, no. 5, pp. e259–e267, 2020.
retinal Galdran A, Anjos A, Dolz J, Chakor H, Lombaert H, Ayed IB. The little w-Net that could: state-of-the-art retinal vessel segmentation with minimalistic models. arXiv preprint arXiv:2009.01907. 2020 Sep 3.
poly Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L Yuille. Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. IEEE transactions on pattern analysis and machine intelligence, 40(4):834–848, 2017
chinese Jiang H, Yang G, Huang K, Zhang R. W-Net: one-shot arbitrary-style Chinese character generation with deep neural networks. InNeural Information Processing: 25th International Conference, ICONIP 2018, Siem Reap, Cambodia, December 13–16, 2018, Proceedings, Part V 25 2018 (pp. 483-493). Springer International Publishing.
r2unet Alom MZ, Hasan M, Yakopcic C, Taha TM, Asari VK. Recurrent residual convolutional neural network based on u-Net (r2u-Net) for medical image segmentation. arXiv preprint arXiv:1802.06955. 2018 Feb 20.
ResNet He, Kaiming, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. "Deep residual learning for image recognition." In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770-778. 2016.
wnet Sonal Gore, Ashwin Mohan, Prajakta Bhosale, Prajakta Joshi, Ashley George. Brain Tumor Segmentation Using Deep Neural Networks. 2021 Retrieved May 2023 https://github.com/Brain-Tumor-Segmentation.
docker Nüst D, Sochat V, Marwick B, Eglen SJ, Head T, Hirst T, Evans BD. Ten simple rules for writing Dockerfiles for reproducible data science. PLoS computational biology. 2020 Nov 10;16(11):e1008316.
SqueezeNet Iandola, Forrest N., Song Han, Matthew W. Moskewicz, Khalid Ashraf, William J. Dally, and Kurt Keutzer. "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and < 0.5 MB model size." arXiv preprint arXiv:1602.07360 (2016).
loss Ma J, Chen J, Ng M, Huang R, Li Y, Li C, Yang X, Martel AL. Loss odyssey in medical image segmentation. Medical Image Analysis. 2021 Jul 1;71:102035.
nnunet Isensee, F., Jaeger, P. F., Kohl, S. A., Petersen, J., Maier-Hein, K. H. (2021). nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation. Nature methods, 18(2), 203-211.
heinrich23 Heinrich MP. Make nnUNets Small Again. InMedical Imaging with Deep Learning, short paper track 2023 Apr 28.
chen23 Jierun Chen, Shiu-hong Kao, Hao He, Weipeng Zhuo, Song Wen, Chul-Ho Lee, and S-H Gary Chan. Run, don’t walk: Chasing higher flops for faster neural networks. arXiv preprint arXiv:2303.03667, 2023
liu22 Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, and Saining Xie. A convnet for the 2020s. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11976–11986, 2022
nnunet21 Isensee F, Jäger PF, Full PM, Vollmuth P, Maier-Hein KH. nnU-Net for brain tumor segmentation. InBrainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries: 6th International Workshop, BrainLes 2020, Held in Conjunction with MICCAI 2020, Lima, Peru, October 4, 2020, Revised Selected Papers, Part II 6 2021 (pp. 118-132). Springer International Publishing.
pointe Nichol A, Jun H, Dhariwal P, Mishkin P, Chen M. Point-E: A System for Generating 3D Point Clouds from Complex Prompts. arXiv preprint arXiv:2212.08751. 2022 Dec 16.
§ MORE EXAMPLE VISUALIZATIONS
§ META FILE
[language=python,label=lst:mhd,basicstyle=]
ObjectType = Image
NDims = 3
BinaryData = True
BinaryDataByteOrderMSB = False
CompressedData = False
TransformMatrix = 1.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0 1.0
Offset = 0.0 0.0 0.0
CenterOfRotation = 0.0 0.0 0.0
AnatomicalOrientation = RAI
ElementSpacing = 0.49479 0.49479 0.3125
DimSize = 768 768 1280
ElementType = MET_USHORT
ElementDataFile = scan_001.raw
When the ultrasound data is read using a meta header file as demonstrated in Code listing <ref>, users may need to update the values of the Anatomical Orientation tag.
§ TRAINING PROGRESS
Figure <ref> and <ref> show examples of training progress drawn from two randomly selected trials that involved a R2Unet and SE-Unet model.
Revealing the Illusion of Joint Multimodal Understanding in VideoQA Models

Ishaan Singh Rawal
Shantanu Jaiswal
Basura Fernando
Cheston Tan

July 31, 2023
====================================================================
While VideoQA Transformer models demonstrate competitive performance on standard benchmarks, the reasons behind their success remain unclear. Do these models jointly capture and leverage the rich multimodal structures and dynamics from video and text? Or are they merely exploiting shortcuts to achieve high scores? We analyze this with QUAG (QUadrant AveraGe), a lightweight and non-parametric probe that systematically ablates the model's coupled multimodal understanding during inference. Surprisingly, QUAG reveals that the models manage to maintain high performance even when injected with multimodal sub-optimality. Additionally, even after replacing self-attention in multimodal fusion blocks with “QUAG-attention”, a simplistic and less-expressive variant of self-attention, the models maintain high performance. This means that current VideoQA benchmarks and their metrics do not penalize shortcuts that discount joint multimodal understanding. Motivated by this, we propose the CLAVI (Counterfactual in LAnguage and VIdeo) benchmark, a diagnostic dataset for benchmarking coupled multimodal understanding in VideoQA through counterfactuals. CLAVI consists of temporal questions and videos that are augmented to curate balanced counterfactuals in language and video domains. Hence, it incentivizes, and identifies the reliability of learnt multimodal representations.
We evaluate CLAVI and find that models achieve high performance on multimodal shortcut instances, but have very poor performance on the counterfactuals. Hence, we position CLAVI as a litmus test to identify, diagnose and improve the sub-optimality of learnt multimodal VideoQA representations which the current benchmarks are unable to assess.
§ INTRODUCTION
Multimodal learning with videos and text is a challenging task. While both the modalities are sequential in nature, they possess unique underlying structures. Videos exhibit spatio-temporal dynamics in the pixel space, whereas language representation is composed of the syntax and semantics of word sequences. Hence, tasks like Video Question Answering (VideoQA) <cit.> present a considerable challenge as they necessitate the model to acquire accurate representations of both modalities, and establish meaningful connections between them. Transformers have demonstrated exceptional performance on VideoQA benchmarks <cit.>. Since they lack the intrinsic inductive biases for these representations, they must learn them from the data <cit.>. However, does the good performance of Transformers on current VideoQA benchmarks necessarily mean that they learn to faithfully represent and leverage the modalities? Or do the current benchmarks and metrics fail to robustly evaluate the models for their multimodal understanding?
This is a valid concern because deep learning models can learn shortcuts to achieve high performance scores without faithfully learning from the modalities. For example, seemingly spatio-temporal tasks, like some action classification problems, are shown to be solved without focusing much on temporal representations <cit.>. Similarly, for VideoQA, the questions that seemingly require the model to jointly leverage the multimodal representations can be answered using shortcuts (see Figure <ref>). This raises the question of whether the models are actually learning to jointly leverage and understand the modalities, or whether the performance on the current benchmarks is an illusion of joint multimodal learning.
First, we propose QUadrant AveraGe (QUAG), a lightweight and non-parametric probe to systematically gauge the reliance of a model's performance on joint multimodal representations. We posit that joint multimodal understanding is enabled in the fusion layers by progressively attending to the informative tokens within and between the modalities. QUAG impairs components of modality fusion by systematic block-averaging of attention weights. We apply QUAG on multiple dataset-model combinations, and consistently find that the models manage to achieve high performance on the benchmarks without relying on specific multimodal interactions.
This is concerning because high performance on established benchmarks should be ideally indicative of coupled multimodal understanding. We validate the sub-optimality in multimodal representations by replacing self-attention in pretrained models with simple and less-expressive QUAG-attention. QUAG-attention augmented models manage to maintain the high performance on standard benchmarks. However, this raises a follow-up question – how then can we benchmark coupled multimodal understanding in the models?
Thus, we create Counterfactual in LAnguage and VIdeo (CLAVI), a diagnostic benchmark to robustly assess joint multimodal understanding in VideoQA models. Temporal understanding ideally requires coupled multimodal understanding. However, the standard benchmarks do not contain or assess performance on counterfactual instances. CLAVI contains balanced temporal counterfactuals in both question and video domains to accurately test if the models can jointly understand temporal cues in the question (temporal prepositions and adverbs) and the video (order of frames) domains. We develop consistent-accuracy metrics to precisely assess the contributions of shortcuts to circumvent joint multimodal understanding. We find that finetuned models have high accuracy on shortcut instances in CLAVI, but have poor performance on the counterfactual instances that require coupled multimodal understanding. Hence, the performance of a model on CLAVI is indicative of joint multimodal understanding, which is overlooked by the existing benchmarks.
In summary, our contributions are (i) we develop QUAG, a systematic method to identify sub-optimalities in joint multimodal representations, (ii) using QUAG and QUAG-attention, we demonstrate that high performance on established VideoQA benchmarks is not representative of faithful coupled multimodal understanding, and (iii) we develop CLAVI, a new diagnostic benchmark that contains balanced temporal counterfactuals in videos and questions to confidently disambiguate the contributions of shortcuts in joint multimodal learning to benchmark the models.
§ RELATED WORK
Dataset Biases: Works in NLP <cit.>, vision <cit.> and vision-language <cit.> demonstrate that models can achieve high performance without even understanding the sequence of the embeddings. This is partly because the current benchmarks have unintended biases that could potentially be exploited by models to learn shortcuts; hence accuracy is not always a faithful metric <cit.>.
For VideoQA, the MovieQA <cit.> and TVQA <cit.> datasets are biased towards plot understanding or actor dialogue comprehension <cit.>. Biases are not always immediately apparent; for example, Social-IQ <cit.> contains sentiment-biased annotations <cit.>. Moreover, statistical regularities like answer length, answer frequency <cit.> and co-occurrence <cit.> introduce spurious features. Overall, these biases allow the models to learn shortcuts <cit.> that circumvent multimodal reasoning <cit.>. While synthetic VideoQA benchmarks such as VQuAD <cit.>, CLEVRER <cit.>, and MarioQA <cit.> have been carefully curated to mitigate many biases, they are unable to capture the intricate dynamics of the real world.
We curate CLAVI by systematically augmenting real-world videos to faithfully represent the complexity of the physical world while controlling the biases to confidently evaluate multimodal temporal understanding.
Shortcut Learning: Tangential to the bias amelioration methods <cit.>, <cit.> and <cit.> achieve state-of-the-art performance with simple models by leveraging VideoQA dataset shortcuts in the model. ATP <cit.> demonstrates single frame bias by re-training the models with an informative frame-selection module to achieve competitive performance. Perceptual Score <cit.> quantifies modality bias in terms of relative performance drop under modality-permutation operation.
QUAG combines these ideas to evaluate the dependence of models on shortcuts for circumventing multimodal understanding in terms of performance drop under multimodal representation collapse.
Unlike other works, it assists in identifying sub-optimal learnt representations in a combined model-dataset approach.
Leveraging Counterfactuals: We share our motivation for developing CLAVI with VQA-CP <cit.>: that iid train-test splits in the presence of strong priors leads to learning via shortcuts. However, rather than reducing the bias by mining new complementary image instances, CLAVI weakens prior of multimodal understanding in the first place with synthesized balanced video-question temporal hard-negatives.
Concurrent to our work, <cit.> and <cit.> have employed hard-negatives for improving verb-understanding in VideoQA models.
<cit.> use both – a real-world dataset created by stitching two unrelated videos and a synthetic dataset – for post-pretraining to improve the temporal understanding of video-language models. However, unlike CLAVI, which uses a synthesized negative video instance from the same video, a stitched video dataset cannot be a robust diagnostic benchmark for temporal understanding because the incoherent contexts can be exploited as a static-bias shortcut <cit.>.
§ DO VIDEOQA MODELS LEARN TO JOINTLY LEVERAGE THE MODALITIES?
We posit that coupled multimodal understanding is enabled in the fusion layers by progressively attending to the informative tokens within and between the modalities. Hence, we propose QUAG to systematically ablate the effects of multimodal attention.
It impairs the joint multimodal representations in the pre-trained model by systematically block-averaging the attention weights so that they attend to all tokens uniformly. Based on the targeted modality interactions, we define special cases of QUAG, collectively called short-circuit operations, and analyze the performance drop.
§.§ Video question answering setup
The task is to predict the correct answer given a video-question tuple, (𝒱, 𝒯). A VideoQA model consists of a vision encoder F_𝒱: 𝒱→ℝ^L_𝒱× D, text encoder F_𝒯: 𝒯→ℝ^L_𝒯× D, and a multimodal fusion module M: (F_𝒱(𝒱), F_𝒯(𝒯))→ℝ^L × D, where L_𝒱 and L_𝒯 are the sequence lengths of video and text respectively and D is the dimensionality of the fusion model.
Consider M a composition of n attention-based multimodal fusion blocks, M = M_n ∘ M_n-1∘⋯ M_1. Each fusion block consists of attention, normalization, and token-mixing modules. For our analysis, we consider M to be composed of self-attention transformer blocks. That is, query, key, and value are the transformations of the same input sequence. Hence, X_𝒱𝒯 = [F_𝒱(𝒱) ‖ F_𝒯(𝒯)] ∈ℝ^(L_𝒱 + L_𝒯) × D is the input for M, where ‖ is concatenation operator. Since QUAG operates at inference time, we assume the VideoQA model to be finetuned and frozen.
§.§ QUAG: Ablation of modality interactions
Let X_i-1 denote the input of the fusion block M_i and let (Q_i, K_i, V_i) be its query, key, and value transformations and X_0 = X_𝒱𝒯. Then, the token-mixing operation is given by T_i = A_i V_i, where A_i = softmax(Q_i K_i^⊤) is the attention matrix (we omit the scaling factor √(d) for readability).
Let Q_1u, K_1u, and V_1u denote the query, key, and value projections of modality u for the first fusion block, M_1. We can then write A_1 and T_1 in terms of their partition blocks, referred to as quadrants henceforth, as:
A_1 = softmax( [ Q_1𝒱 K_1𝒱^⊤   Q_1𝒱 K_1𝒯^⊤ ;   Q_1𝒯 K_1𝒱^⊤   Q_1𝒯 K_1𝒯^⊤ ] )

and

T_1 = [ A^1_𝒱𝒱   A^1_𝒱𝒯 ;   A^1_𝒯𝒱   A^1_𝒯𝒯 ] [ V_1𝒱 ;   V_1𝒯 ]
where A^1_u_1u_2 represents the quadrant of A_1 corresponding to Q_1u_1 K_1u_2^⊤, and the semicolon separates the rows of a block matrix. Note that we skip the layer normalization layers in the discussion for simplicity. Hence, we can simplify and write T_1 as:
T_1 = [ A^1_𝒱𝒱 V_1𝒱 + A^1_𝒱𝒯 V_1𝒯 ;   A^1_𝒯𝒱 V_1𝒱 + A^1_𝒯𝒯 V_1𝒯 ]
We follow the same partition quadrants, as defined for A_1 in M_1, for A_j in the downstream fusion layer M_j and denote the quadrants as A^j_u_1u_2.
Next, we define the row-wise average-and-replace operator ℛ, which operates on a quadrant of a matrix and replaces the values in the quadrant with the mean value of the respective partitioned row. Note that the values in the other quadrants are unaffected. Given a matrix Z of size p × q, let W be the quadrant of Z with indices (p_1^W ⋯ p_2^W) × (q_1^W ⋯ q_2^W). We use [ ]_ij to index the element in row i and column j. Then,
[ℛ(Z,W)]_ij = ( ∑_k=q^W_1^q^W_2 [Z]_ik ) / ( q^W_2 - q^W_1 + 1 )   if i ∈ [p_1^W, p_2^W] and j ∈ [q_1^W, q_2^W]
[ℛ(Z,W)]_ij = [Z]_ij   otherwise
We can now formally define the QUAG operator, ϕ, as:
ϕ(A_i, V_i, [s_1, s_2, ⋯, s_n]) = (ℛ_s_1∘ℛ_s_2⋯∘ℛ_s_n (A_i)) V_i
where s_i ∈{𝒯𝒯, 𝒯𝒱, 𝒱𝒯, 𝒱𝒱}, ℛ_s_i(Z) is short-hand for ℛ(Z,s_i), A_i and V_i are the attention and value matrices of M_i respectively. In implementation, we re-adjust the quadrant boundaries to ignore the padded elements. We provide the pseudo-code and a toy example in the supplementary material. Since we will be applying the QUAG operator on all the layers of M, for brevity, we denote Φ(M,S) = ∀_1 ≤ i ≤ n ϕ(A_i, V_i, S). Note that ϕ, and hence, Φ is independent of the order of elements in S.
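The core operation can be expressed compactly in numpy; the sketch below mirrors the pseudo-code in the appendix and ignores padding for readability.
[language=python]
import numpy as np

def rowwise_average_replace(A, rows, cols):
    # replace every entry of the quadrant A[rows, cols] with the mean of its partitioned row
    A = A.copy()
    block = A[rows[0]:rows[1], cols[0]:cols[1]]
    A[rows[0]:rows[1], cols[0]:cols[1]] = block.mean(axis=1, keepdims=True)
    return A

# example: short-circuit the VV and TT quadrants of an (L_V + L_T) x (L_V + L_T) attention matrix
L_V, L_T = 4, 3
A = np.random.rand(L_V + L_T, L_V + L_T)
A = rowwise_average_replace(A, (0, L_V), (0, L_V))                   # VV quadrant
A = rowwise_average_replace(A, (L_V, L_V + L_T), (L_V, L_V + L_T))   # TT quadrant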
§.§ Short-circuit operations
As QUAG is a generic method of probing multimodal fusion, we consider some special cases based on the value of S below. We collectively call these short-circuiting operations:
1) S=[𝒱𝒱, 𝒯𝒯]: ϕ(A_1, V_1, [𝒱𝒱, 𝒯𝒯]) is equivalent to scaling the average values of V_1𝒱 and V_1𝒯 in the upper and lower blocks of T_1 respectively (as evident from Eqn. <ref>). Hence, in the upper block, video queries faithfully attend over text keys but uniformly over video keys. Likewise, text queries attend faithfully over video keys but uniformly over text keys in the lower block. We call such a fusion block unimodal average conformable.
Having understood the trivial case, we prove by induction that Φ(M,[𝒱𝒱, 𝒯𝒯]) leads to unimodal average conformability of all the component fusion blocks in M. Consider a block M_j ∈ M such that j>1.
We want to show that unimodal average conformability of first {M_0, M_1, ⋯, M_j-1} blocks using ∀_1 ≤ i ≤ j-1 ϕ(A_i, V_i, [𝒱𝒱, 𝒯𝒯]) implies ϕ(A_j, V_j, [𝒱𝒱, 𝒯𝒯]) will make M_j unimodal average conformable.
The input of M_j can be decomposed into non-linear and linear (from the residual connection that skips the feed-forward layer of M_j-1) projections of T_j-1 + M_j-2∘ M_j-3⋯∘ M_1 (X_𝒱𝒯) + X_𝒱𝒯. Hence, when {M_0, M_1, ⋯, M_j-1} are unimodal average conformable, X_𝒱𝒯 is the only non-conformable component. And we have shown in the trivial case that ϕ(A_1, V_1, [𝒱𝒱, 𝒯𝒯]) makes M_1 conformable, hence M_j is also unimodal average conformable under ϕ.
Ultimately, Φ(M,[𝒱𝒱, 𝒯𝒯]) bypasses the effect of video-video attention and text-text attention. We prove that unimodal token-mixing is reduced to scaling the average of the modalities. We term this as unimodal short-circuiting. It ablates unimodal representations to analyze their dependence on the performance of the models.
Since the following cases can be proved similarly using induction, we skip the proofs for conciseness.
2) S = [𝒱𝒯, 𝒯𝒱]: Parallel to unimodal short-circuiting, ϕ(A_1, V_1, [𝒱𝒯, 𝒯𝒱]) is equivalent to scaling the average values of V_1𝒯 and V_1𝒱 in the upper and lower blocks of T_1 respectively. Video and text queries faithfully attend to video and text keys respectively, while crossmodal attention between video and text is reduced to uniform attention. We term this effect crossmodal short-circuiting. It is complementary to unimodal short-circuiting and assesses the importance of inter-modality token-mixing. It probes whether the model actually learns by fusing the information between the two modalities or whether it is largely driven by unimodal biases within the modalities.
3) S = [𝒱𝒱, 𝒯𝒱]: This is equivalent to removing the effect of individual video keys, resulting in averaging the components of the video modality in the upper and lower blocks of all T_i. We call this video short-circuiting.
Similarly, S = [𝒯𝒯, 𝒱𝒯] leads to text short-circuiting.
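For reference, the four short-circuiting operations correspond to the following quadrant sets S, written here in the notation of the 'quads' argument of the pseudo-code in the appendix.
[language=python]
SHORT_CIRCUITS = {
    'unimodal':   ['VV', 'TT'],   # ablate within-modality token mixing
    'crossmodal': ['VT', 'TV'],   # ablate between-modality token mixing
    'video':      ['VV', 'TV'],   # ablate the contribution of individual video keys
    'text':       ['TT', 'VT'],   # ablate the contribution of individual text keys
}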
§.§ QUAG-attention
Along with an assessment of multimodal understanding, QUAG enables a detailed analysis of token mixing for identifying the sub-optimality of learnt representations.
Hence, we use QUAG as an inspiration to propose QUAG-attention, a variant of self-attention that calculates similarities on already short-circuited sequences.
Let us consider the case such that the performance of M under video short-circuiting operation is comparable to its performance without any perturbation. If the input of M is X_0 = [F_𝒱(𝒱) ‖ F_𝒯(𝒯)], then during token-mixing we effectively average and scale the components in the upper-partition ([1, ⋯, L_𝒱] × D) of the value matrix in all the fusion blocks. This can be efficiently approximated by replacing the entire upper block with a single row-wise average token using ℛ before projecting to key and value domains. Note that the query remains unchanged.
Similar to QUAG, we perform no fine-tuning and only modify the calculation of self-attention.
We can generalize it to present new variants of self-attention: collectively known as QUAG-attention.
QUAG-attention operates by consistently averaging the corresponding modality blocks within the input of each fusion block. The averaging process occurs prior to the transformation of the input into keys and values.
Depending on the sub-optimalities in representation, QUAG-attention can be applied to only text, video or both the modalities. It reduces the number of keys and values tokens from (L_𝒱 + L_𝒯) to either (L_𝒯+1) (text-average), (L_𝒱 + 1) (video-average) or 2 (text-video-average).
The number of tokens in the video and text modalities are generally different. However, due to block averaging, QUAG-attention reduces the effective number of tokens of the averaged modality in the key and value domains to one. The token-length mismatch would interfere with the softmax operation in attention.
Hence, we scale the components of the dot-product similarity scores of the averaged keys by the logarithm of the number of constituting tokens (that is, the original number of tokens in the block). This is similar to the proportional attention used by <cit.> for token merging.
§.§ Experimental setting
Models and Datasets: We evaluate QUAG and QUAG-attention on the JustAsk <cit.> and FrozenBiLM <cit.> models. We evaluate them on the following datasets: (i) ActivityNet-QA <cit.>: contains 58K open-ended questions on 5.8K sampled videos from ActivityNet; (ii) MSRVTT-QA
<cit.>: contains 244K open-ended questions on 10K MSRVTT videos (iii) NeXT-QA <cit.>: contains 47K 5-way multiple choice questions with one-correct answer from 5.4K videos. We also report results on the ATP-Hard subset of NeXT-QA <cit.> that contains a higher concentration of temporally challenging data requiring multi-frame understanding.
Implementation Details: All our experiments were performed on 4 NVIDIA A5000 GPUs. We use the official open-source code of the models on GitHub and modify only the self-attention modules. We use the official evaluation code and checkpoints. For NeXT-QA, we use the official dataset and fine-tune the models with the default parameters. More details in the supplementary material.
§.§ Analysis
The results are shown in Table <ref>. For comparison to the unperturbed model, we specify the baseline, language-only (performance without any video input) and video-only (performance without any text input) accuracies. Evidently, high performance in the language-only setting, relative to the baseline, is in most cases indicative of a unimodal bias towards language.
The performance of FrozenBiLM on ActivityNet-QA and MSRVTT-QA drops by over 10% (43.6% to 32.3%; 46.6% to 32.8%) with crossmodal short-circuiting, and by 40% with both unimodal (43.6% to 2.4%; 46.6% to 1.0%) and text short-circuiting (43.6% to 1.4%; 46.6% to 1.0%). Furthermore, the drop is less than 1% under video short-circuiting (43.6% to 43.1%; 46.6% to 45.7%). This means that the model is leveraging unimodal interactions within the text and cross-modality interactions between video (query) and text (key). However, for NeXT-QA and ATP-Hard, since the performance does not drop under crossmodal short-circuiting, the model is not leveraging any crossmodal interactions. The performance drops to chance level, that is 20%, only under text and unimodal short-circuiting operations and not video short-circuiting, which is indicative of a strong unimodal bias towards the text modality. Similarly, for JustAsk, the performance does not drop by more than 1% for any of the datasets under any short-circuiting operation. This shows that JustAsk achieves competitive performance on the benchmarks without even leveraging the rich representations within and between the modalities.
We use the results from QUAG to apply QUAG-attention on FrozenBiLM and JustAsk, which reduces the number of multiplication operations by 13.6% and 68.0% respectively, for a less than 1% drop in performance consistently across all the datasets. However, this raises serious concerns, because models can learn to hack their way around the accuracy metrics by leveraging shortcuts. The supposedly multimodal datasets contain biases, and the evaluation metrics do not penalize shortcut learning, providing false confidence about the abilities of the model. This raises the follow-up question: “How can we confidently benchmark multimodal understanding in VideoQA models?”
§ CLAVI
We propose CLAVI as a diagnostic dataset with balanced counterfactuals in time for benchmarking coupled multimodal understanding in VideoQA. CLAVI consists of 6,018 videos and 114,342 questions (72,770 train and 41,572 test). The simple yes-no questions probe the absolute temporal location of a single action (beginning/end) or the occurrence sequence for a pair of non-overlapping actions (before/after). Using yes-no questions with balanced negative instances allows us to have questions that are unambiguous, and answers that are mutually exclusive and equally informative, so that they cannot be eliminated by prior biased knowledge. To create temporal negatives in the question domain, we replace before with after and beginning with end, and vice versa. Further, we create temporal negatives in the video domain by swapping only the action segments in the video. We exhaustively consider all the compositions of temporal negatives in the video and question domains to create balanced negative instances for a systematic assessment of temporal understanding in videos.
§.§ Dataset Creation
We curate CLAVI by leveraging Charades-STA [https://prior.allenai.org/projects/data/charades/license.txt] <cit.>, which contains 9,848 videos of humans performing actions based on short scripts written by composing a predefined vocabulary that describes multiple daily actions. The videos are annotated with the start and end times of each action. The action category, the start, and the end of each action segment are referred to as the action tuple.
Each video may contain more than two action tuples.
We select pairs of action tuples based on the uniqueness of the action category and complete exclusivity (that is no overlap between the occurrence of the actions).
In a given selected pair of action tuples, the two actions along with the inter-action region constitute the video segment.
We ensure that the two action categories in the pair are distinct.
Additionally, to address temporal boundary ambiguities in the annotations, we filter out segments where either of the selected action classes occurs in close proximity to the segment boundaries.
We also extend the boundaries of the two actions in the pair. We define two boundary extensions – out-extension and in-extension. The out-extension encompasses regions that are not a part of the selected segment but extend outwards in both directions into the original video. Similarly, in-extension extends inwards into the inter-action segment. To avoid temporal position bias <cit.>, the lengths of the extension boundaries are selected randomly. However, since inter-action separation can affect their recognition <cit.>, we constraint the inter-action separation in the original and the corresponding negative video to be the same. That is, the sum of out-extension boundaries is always equal to the sum of in-extension boundaries.
We trim each boundary-extended contiguous segment from the original video to curate a positive video instance. To create the counterfactual video, we swap the boundary-extended action regions as shown in Figure <ref>. Note that the region between the boundary-extended actions is unaffected. Swapping operation preserves the actions but only alters their chronology, and can be applied independently to question negatives (unlike manipulations like video reversal <cit.>). This independence provides fine-grained control to create a balanced benchmark for comprehensive analysis.
We create three types of questions using pre-defined templates and action-class annotations:
1) Existence (E) type: The E-type questions for both action classes follow the template "Was someone <A>?", where <A> is one of the two action classes in the video. We use it as a positive control to verify whether the model is able to correctly recognize the action classes. We use the exact same question for the negative video instance as well, totalling 4 control (question, video, answer) instances for a Charades-extracted video segment.
2) Beginning/End (BE) type: BE-type questions ask about the absolute location of the action in the video. The question is of the form "Was the person <A> at the {beginning/end}?", where <A> is one of the two action classes in the video, and we select one of beginning and end. Hence, for a given video and its negative, we have, in total, 8 instances of BE (question, video, answer) tuples combined. Note that the answer for a given BE question is complemented in the negative video.
3) Before/After (BA) type: BA-type comprises questions on the relative order of occurrence of the actions. The question is of the form "Did <A1> happen {after/before} <A2>?", where <A1> and <A2> are the selected action classes. We consider all permutations of the action classes. Hence, we have a total of 8 instances of BA-type (question, video, answer) tuples per extracted video. Similar to the BE type, the answer is complemented in the negative video.
Further, we add negative controls for E and BA type questions. A negative control action is an action that does not occur in the video. Since we want to probe only for temporal understanding, we keep the negative control action-class easy to detect by randomly selecting an action-class that does not contain any of the objects or actions in the original video.
Hence, answering the negative control does not require understanding temporal cues in language and video and can be answered by object elimination. It serves the dual purpose of sanity check of learning and a baseline for learning by temporal shortcuts. The answer of negative control questions is always false. This adds two E type and sixteen BA type negative control questions for the video and its negative combined. Hence, including the negative control questions, each video in CLAVI is associated with 19 questions: 2 E, 4 BE, 4 BA, 1 E negative control and 8 BA negative controls. The ratio of "yes":"no" answers is 6:13.
We want to evaluate the sensitivity of the model to the temporal cues in language and video independently. Hence, we define consistent accuracies. If the model predicts the answers for a given question correctly for both – the video and its counterfactual, it is called video-consistent. Similarly, for a given video, if the model predicts the answers to the question and its counterfactual question correctly, it is called text-consistent. The proportion of video and question consistent predictions are reported as video-consistent accuracy (CAcc_𝒱) and text-consistent accuracy (CAcc_𝒯) respectively.
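A sketch of the consistent-accuracy computation; the pairing of an instance with its counterfactual via pair_ids is an assumed bookkeeping convention, not part of the dataset format.
[language=python]
import numpy as np

def consistent_accuracy(preds, labels, pair_ids):
    # preds, labels: per-instance predictions and ground-truth answers
    # pair_ids: identifier shared by an instance and its counterfactual
    # (video pairs give CAcc_V, question pairs give CAcc_T)
    correct = np.asarray(preds) == np.asarray(labels)
    pair_ids = np.asarray(pair_ids)
    consistent = [correct[pair_ids == pid].all() for pid in np.unique(pair_ids)]
    return float(np.mean(consistent))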
§.§ Experiment
We fine-tune and evaluate 4 recent models: JustAsk <cit.>, FrozenBiLM <cit.>, Singularity-Temporal <cit.> and All-In-One+ <cit.> on CLAVI using the official fine-tuning instructions. We follow the same experimental settings as discussed in Section <ref>. To account for class imbalance in the answers, we use balanced accuracy for validation and testing.
We summarize the results in Table <ref>. All the models achieve greater than 70% on the balanced accuracy metric. However, the consistent accuracies for both videos and text are lower than the balanced accuracy.
We analyze the consistent accuracies on the counterfactual and control subsets. Text- and video-consistent accuracies are greater than 80% on the control subsets. This is because, unlike the counterfactual subset, performance on the control subset does not require a coupled understanding of time in both the video and text domains. That is, irrespective of the context of the negative-control action in the question and the location of the object in the frame sequence, the models can learn to answer it correctly by relying on object detection. However, for achieving high consistent accuracies on the counterfactual subset, the model needs to jointly understand the order of the events and the temporal words in the question along with the order of the events in the video. We get significantly lower consistent accuracies (less than 15%) on the counterfactual subset, except for FrozenBiLM, which means that the other models are unable to learn and leverage joint multimodal representations in CLAVI. How can we be sure that FrozenBiLM performs well because it learns faithful multimodal representations and not some spurious shortcuts? We find that the video-average variant of QUAG-attention on FrozenBiLM causes CAcc_𝒯-counter and CAcc_𝒱-counter to drop to 28% and 3.55% respectively, while CAcc_𝒯-control and CAcc_𝒱-control remain close to their original near-perfect values.
Since other models have a very low performance on CLAVI but are able to achieve high performance on standard benchmarks, it makes the reliability of existing VideoQA datasets questionable.
§ LIMITATIONS AND FUTURE WORK
Our dataset is intentionally simple, so as to focus the benchmark only on simple temporal sequence understanding, which preempts spatio-temporal referential understanding. We plan to include more complex temporal organizations of action classes like containment and partial-overlap that are defined using prepositions like during and while in future work. As the current state-of-the-art models catch-up to our benchmark, our future plan is to curate a more complex dataset with more natural questions that include temporal referring expressions with similar balanced doubly-negative strategy. Potential Negative Societal Impact: We do not analyze the Charades videos, and hence neither the Charades-derived CLAVI, for systemic biases against race, gender and age which might introduce unfair biases in the model.
§ CONCLUSION
We introduced QUAG for conducting a systematic analysis of learnt multimodal representations. It provides interesting insights into how the models are able to infer the answers from the videos and questions. The fine-grained analysis of the fusion of the modalities through QUAG helps to identify the sub-optimality in leveraging the multimodal representations of text and videos jointly. Further, we introduce a new diagnostic benchmark, CLAVI, that penalizes the lack of joint multimodal understanding which is overlooked by the existing datasets. Our methods of probing the multimodal interactions and diagnosing through counterfactuals are generic and can be extended to other multimodal tasks to get valuable insights.
We are positive that CLAVI and QUAG can be employed to systematically evaluate, diagnose and ultimately improve, not just the performance but the representations learnt by the VideoQA models.
§ APPENDIX
§ QUAG
§.§ Toy Example
Consider the toy example in Fig <ref>. The left-most matrix is the input matrix. As per the definition of ϕ, we can write, ϕ(Z, [𝒯𝒯, 𝒱𝒱]) = ℛ_𝒯𝒯∘ℛ_𝒱𝒱(Z). We demonstrate the successive application of ℛ operator in the example. Note that the padding is ignored; this is equivalent to applying ℛ to the padding-free sub-partition of the quadrant. Also, as illustrated in the example, since the quadrants cannot overlap, the sequence of application of ℛ does not matter.
§.§ Code
Below is the implementation of QUAG as an augmentation of the existing self-attention function. We use the row-wise average-and-replace operation in each if-clause, while ignoring the padding, to ablate the effect of the corresponding quadrant.
[language=Python]
def self_attention(inputs, mask, dim_model, l_v, l_t, quads):
# Inputs:
# inputs: Tensor of shape (batch_size, sequence_length, dim_model)
# mask: Tensor of shape (batch_size, sequence_length)
# dim_model: Dimension of the model (e.g., 512)
# l_v: int maximum length of video tokens
# l_t: int maximum length of question tokens
# quads: list containing elements from 'VV', 'VT', 'TV', 'TT'
query = linear_transform_query(inputs)
key = linear_transform_key(inputs)
value = linear_transform_value(inputs)
attention_scores = compute_attention_scores(query, key, mask)
apply_quag(attention_scores, mask, l_v, l_t, quads)
attended_output = apply_attention_scores(attention_scores, value)
return attended_output
def compute_attention_scores(query, key, mask):
scaled_dot_product = dot_product(query, key) / sqrt(dim_model)
attention_scores = softmax(scaled_dot_product + (1 - mask) * -1e9)
return attention_scores
def apply_quag(attention_scores, mask, l_v, l_t, quads):
if 'VV' is in quads:
replace_with_rowwise_average(attention_scores[:, :l_v, :l_v], mask[:, :l_v, :l_v])
if 'VT' is in quads:
replace_with_rowwise_average(attention_scores[:, :l_v, -l_t:], mask[:, :l_v, -l_t:])
if 'TV' is in quads:
replace_with_rowwise_average(attention_scores[:, -l_t:, :l_v], mask[:, -l_t:, :l_v])
if 'TT' is in quads:
replace_with_rowwise_average(attention_scores[:, -l_t:, -l_t:], mask[:, -l_t:, -l_t:])
def replace_with_rowwise_average(scores, mask):
rowwise_sum = sum(scores, axis=-1)
rowwise_mean = rowwise_sum / sum(mask, axis=-2)
expanded_rowwise_mean = expand_dims(rowwise_mean, axis=-1)
replace_elements(scores, expanded_rowwise_mean)
def apply_attention_scores(attention_scores, value):
attended_output = dot_product(attention_scores, value)
return attended_output
Next, we provide the code for QUAG-attention. QUAG-attention modifies the existing self-attention block in the fusion module by replacing the block with the block average. We also demonstrate normalizing the softmax function so that each single averaged sequence is representative of its constituent sequences.
[language=Python]
def quag_attention(inputs, mask, dim_model, l_v, l_t, type):
# Inputs:
# inputs: Tensor of shape (batch_size, sequence_length, dim_model)
# mask: Tensor of shape (batch_size, sequence_length)
# dim_model: Dimension of the model (e.g., 512)
# l_v: int maximum length of video tokens
# l_t: int maximum length of question tokens
# type: one of 'text', 'video', 'text-video'
query = linear_transform_query(inputs)
avg_input = compute_avg_input(inputs, l_v, l_t, type)
key = linear_transform_key(avg_input)
value = linear_transform_value(avg_input)
mask = apply_mask(mask, l_v, l_t, type)
scaled_dot_product = compute_scaled_dot_product(query, key, dim_model, mask)
attention_scores = softmax(scaled_dot_product)
attended_output = apply_attention_scores(attention_scores, value)
return attended_output
def compute_avg_input(inputs, l_v, l_t, type):
if type == "video":
avg_upper_block = sum(inputs[:, :l_v, :], axis=-2)
avg_upper_block = expand_dims(avg_upper_block, axis=1)
avg_input = concatenate((avg_upper_block, inputs[:, -l_t:, :]), axis=1)  # averaged video block followed by the unchanged text tokens
elif type == "text":
avg_lower_block = sum(inputs[:, -l_t:, :], axis=-2)  # aggregate over the text block (last l_t tokens)
avg_lower_block = expand_dims(avg_lower_block, axis=1)
avg_input = concatenate((inputs[:, :l_v, :], avg_lower_block), axis=1)
elif type == "text-video":
avg_upper_block = sum(inputs[:, :l_v, :], axis=-2)
avg_upper_block = expand_dims(avg_upper_block, axis=1)
avg_lower_block = sum(inputs[:, -l_t:, :], axis=-2)  # aggregate over the text block (last l_t tokens)
avg_lower_block = expand_dims(avg_lower_block, axis=1)
avg_input = concatenate((avg_upper_block, avg_lower_block), axis=1)
return avg_input
def apply_mask(mask, l_v, l_t, type):
mask = expand_dims(mask, axis=-1)
mask = tile(mask, [1, 1, sequence_length])
if "video" in type:
video_length = sum(mask[:, :l_v, 0], axis=1)
video_length = expand_dims(video_length, axis=-1)
scaled_dot_product[:, :, 0] = scaled_dot_product[:, :, 0] * log(video_length)
upper_mask = ones(mask.shape[0], mask.shape[1], 1)
mask = concatenate((upper_mask, mask[:, :, l_v:]), axis=-1)
if "text" in type:
text_length = sum(mask[:, -l_t:, 0], axis=1)
text_length = expand_dims(text_length, axis=-1)
scaled_dot_product[:, :, -1] = scaled_dot_product[:, :, -1] * log(text_length)
lower_mask = ones(mask.shape[0], mask.shape[1], 1)
mask = concatenate((mask[:, :, :-l_t], lower_mask), axis=-1)
return mask
def compute_scaled_dot_product(query, key, dim_model, mask):
scaled_dot_product = dot_product(query, key) / sqrt(dim_model)
return scaled_dot_product
def apply_attention_scores(attention_scores, value):
attended_output = dot_product(attention_scores, value)
return attended_output
§.§ Experiment Details
As mentioned in the main manuscript, we use the official checkpoints and code of JustAsk https://github.com/antoyang/just-ask[website] and FrozenBiLM https://github.com/antoyang/FrozenBiLM[website]. For all the experiments with JustAsk, we use the checkpoint of the model pretrained on HowToVQA69M and WebVidVQA3M. For FrozenBiLM, we use the WebVid10M-pretrained checkpoint for all our experiments. Since QUAG operates at inference time, we do not need to perform any training. Since the model owners do not report results on NeXT-QA, we fine-tune the models with the official recipe to achieve performance similar to that independently reported by others <cit.>. While FrozenBiLM can also take subtitles as input, for a fair comparison, we do not pass subtitles in any of the experiments. We provide the hardware details in the main manuscript.
§ CLAVI
§.§ Comprehensive List of Questions
We provide a comprehensive list of the questions for the example presented in Fig 2 of the main paper.
We define the actions as:
A: turning on a light
B: holding some clothes
C: washing a mirror,
where action A occurs before action B in the original video and action C does not occur anywhere in the original video.
Enlisted below are the questions and their negatives (Q and Q', respectively) for the video (V), that is, the video in which event A occurs before event B. Note that the color of the panel is representative of the answer to the question (red: "yes", green: "no").
E-Type:
Q': Was someone turning on a light?
Q': Was someone holding some clothes?
E-Type (negative control):
Q': Was someone washing a mirror?
BE-Type
Q': Was the person turning on a light at the beginning?
Q': Was the person turning on a light at the end?
Q': Was the person holding some clothes at the end?
Q': Was the person holding some clothes at the beginning?
BA-Type
Q': Did turning on a light happen before holding some clothes?
Q': Did turning on a light happen after holding some clothes?
Q': Did holding some clothes happen after turning on a light?
Q': Did holding some clothes happen before turning on a light?
BA-Type (negative-control)
Q': Did washing a mirror happen before turning on a light?
Q': Did washing a mirror happen after turning on a light?
Q': Did turning on a light happen before washing a mirror?
Q': Did turning on a light happen after washing a mirror?
Q': Did washing a mirror happen before holding some clothes?
Q': Did washing a mirror happen after holding some clothes?
Q': Did holding some clothes happen before washing a mirror?
Q': Did holding some clothes happen after washing a mirror?
Enlisted below are the questions and their negatives (Q and Q', respectively) for the negative video instance (V'), that is, the video in which event B occurs before event A.
E-Type:
Q': Was someone turning on a light?
Q': Was someone holding some clothes?
E-Type (negative control):
Q': Was someone washing a mirror?
BE-Type
Q': Was the person turning on a light at the beginning?
Q': Was the person turning on a light at the end?
Q': Was the person holding some clothes at the end?
Q': Was the person holding some clothes at the beginning?
BA-Type
Q': Did turning on a light happen before holding some clothes?
Q': Did turning on a light happen after holding some clothes?
Q': Did holding some clothes happen after turning on a light?
Q': Did holding some clothes happen before turning on a light?
BA-Type (negative-control)
Q': Did washing a mirror happen before turning on a light?
Q': Did washing a mirror happen after turning on a light?
Q': Did turning on a light happen before washing a mirror?
Q': Did turning on a light happen after washing a mirror?
Q': Did washing a mirror happen before holding some clothes?
Q': Did washing a mirror happen after holding some clothes?
Q': Did holding some clothes happen before washing a mirror?
Q': Did holding some clothes happen after washing a mirror?
§.§ Dataset Metrics
The duration of an individual action in CLAVI lies in the range [4.0 sec, 36.0 sec]; the average action length is 7.7 ± 3.42 sec. The average video length is 19.95 ± 7.34 sec and the range is [8.67 sec, 65.73 sec]. We plot the distribution of the action and video durations in Fig. <ref>.
CLAVI consists of 141 unique action classes. Each action class is composed of a noun (object) and a verb. There are 37 unique noun classes and 28 unique verb classes. We show the frequency distributions of the action, verb and noun classes in Fig. <ref>.
§.§ Experiment Details
As mentioned in the main manuscript, we use the official checkpoints, finetuning code and hyper-parameters of JustAsk (https://github.com/antoyang/just-ask), FrozenBiLM (https://github.com/antoyang/FrozenBiLM), Singularity-Temporal (https://github.com/jayleicn/singularity), and All-in-one+ (https://github.com/showlab/all-in-one). For JustAsk, we use the checkpoint of the model pretrained on HowToVQA69M and WebVidVQA3M. For FrozenBiLM, we use the WebVid10M-pretrained checkpoint. All-in-one+ is pretrained on eight datasets comprising both images and videos (videos: WebVid, YT-Temporal-180M, HowTo100M; images: CC3M, CC12M, COCO, Visual Genome, SBU Captions). Singularity-Temporal is pretrained on a 17.28M-sample subset of images and videos (images: COCO, Visual Genome, SBU Captions, CC3M, CC12M; videos: WebVid).
|
http://arxiv.org/abs/2306.01471v1
|
20230602115221
|
Guiding Text-to-Text Privatization by Syntax
|
[
"Stefan Arnold",
"Dilara Yesilbas",
"Sven Weinzierl"
] |
cs.CL
|
[
"cs.CL",
"cs.CR",
"cs.LG"
] |
Metric Differential Privacy is a generalization of differential privacy tailored to address the unique challenges of text-to-text privatization. By adding noise to the representation of words in the geometric space of embeddings, words are replaced with words located in the proximity of the noisy representation. Since embeddings are trained based on word co-occurrences, this mechanism ensures that substitutions stem from a common semantic context. Without considering the grammatical category of words, however, this mechanism cannot guarantee that substitutions play similar syntactic roles. We analyze the capability of text-to-text privatization to preserve the grammatical category of words after substitution and find that surrogate texts consist almost exclusively of nouns. Since the mechanism cannot produce surrogate texts that conform to the structure of the sensitive texts, we complement our analysis by transforming the privatization step into a candidate selection problem in which substitutions are directed to words with matching grammatical properties. We demonstrate a substantial improvement in the performance of downstream tasks by up to 4.66% while retaining comparable privacy guarantees.
§ INTRODUCTION
From compliance with stringent data protection regulations to building trust, privacy emerged as a formidable challenge to applications that build on user-generated data, and consensus exists regarding the need to safeguard user privacy.
In the context of text analysis, privacy is typically protected by sanitizing personally identifiable information from the text via ad-hoc filtering or anonymization. The literature is replete with naïve approaches that either redact words from the text or insert distractive words into the text. Using generalization and suppression on quasi-identifiers, an intuitive way of expressing privacy is presented by k-anonymity <cit.> and its notable adaptations for text data <cit.>.
However, these approaches are fundamentally flawed. Incapable of anticipating an adversary's side knowledge, most anonymization schemes are vulnerable to re-identification and thus provably non-private. As text conveys seemingly innocuous information, researchers demonstrated that this information can be leveraged to identify authorship <cit.> or disclose identifiable information <cit.>. <cit.>, for instance, recovered verbatim text from the training corpus using black-box querying to a language model.
Building upon noise calibration, Differential Privacy (DP) <cit.> has attracted considerable attention for its robust notion of privacy. For text analysis, DP is applied to the vector-valued representation of text data <cit.>.
We focus on Metric Differential Privacy <cit.>, in which data is processed independently, similar to the setting of randomized response <cit.>. To avoid the curse of dimensionality of randomized response, noise is scaled by a general distance metric. For text-to-text privatization, <cit.> adopted a distance metric so that words that are close (i.e. more similar) to a word are assigned with a higher substitution probability than those that are more distant (i.e. less similar). This requires that the text is mapped onto a continuous embedding space <cit.>. Proceeding from the embedding, each word in the text is privatized by a three-step protocol: (1) retrieving the vector representation of the word, (2) perturbing the vector representation of the word with noise sampled from a multivariate distribution, and (3) projecting the noisy representation of the word back to the discrete vocabulary space. As the noisy representations are unlikely to exactly represent words in the embedding space, a nearest neighbor approximation is returned.
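To make this three-step protocol concrete, the sketch below shows one common way to instantiate it: noise with density proportional to exp(-ε‖z‖) is obtained by sampling a uniformly random direction and a Gamma-distributed magnitude, and the noisy vector is projected back via a nearest-neighbor search. This is our own illustration rather than the implementation of <cit.>; the function names, the aligned words list and embeddings array, and the sampling routine are assumptions.

import numpy as np

def sample_metric_dp_noise(dim, epsilon, rng):
    # direction drawn uniformly from the unit sphere, magnitude from a Gamma
    # distribution; the resulting density is proportional to exp(-epsilon * ||z||)
    direction = rng.normal(size=dim)
    direction /= np.linalg.norm(direction)
    magnitude = rng.gamma(shape=dim, scale=1.0 / epsilon)
    return magnitude * direction

def privatize_word(word, words, embeddings, epsilon, rng):
    # words: list of vocabulary entries; embeddings: array of shape (len(words), dim)
    idx = words.index(word)
    vec = embeddings[idx]                                             # (1) retrieve
    noisy = vec + sample_metric_dp_noise(vec.shape[0], epsilon, rng)  # (2) perturb
    nearest = int(np.argmin(np.linalg.norm(embeddings - noisy, axis=1)))  # (3) project back
    return words[nearest]

# usage (toy): privatize_word("good", words, embeddings, epsilon=10.0, rng=np.random.default_rng(0))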
Since text-to-text privatization operates directly on embeddings and words in the embedding space are mapped based on co-occurrences, words tend to be substituted by words that stem from a common semantic context. However, there is no guarantee that words are substituted by words that serve similar roles within the grammatical structure of a text. Motivated by the example of sentiment analysis, in which sentiment is typically expressed by adjectives and forms of adjectives <cit.>, we hypothesize that substitutions strictly based on co-occurrences may degrade downstream performance. This hypothesis is in line with linguists finding repeated evidence for the relevance of grammatical properties for language understanding <cit.>.
We summarize our contributions as follows:
∙ We investigate text-to-text privatization via metric differential privacy in terms of its capability to preserve the grammatical properties of words after substitution. We find that privatization produces texts that consist to a large extent of incoherent nouns.
∙ We incorporate grammatical categories into the privatization step in the form of a constraint to the candidate selection. We demonstrate that broadening the candidate pool to k>1 (instead of k=1) and selecting a substitution with matching grammatical properties amplifies the performance in downstream tasks while maintaining an equivalent level of privacy.
§ PRELIMINARIES
§.§ Differential Privacy
Differential Privacy (DP) <cit.> emerged as a robust notion for privacy applied in privacy-preserving data mining and machine learning. Due to its composability and robustness to post-processing regardless of an adversary’s side knowledge, it formalizes privacy without the critical pitfalls of previous anonymization schemes. To ensure a consistent understanding of the algorithmic foundation of differential privacy, we present a brief taxonomy and a formal definition of the variants used for text analysis.
Formally, a randomized mechanism ℳ: 𝒟→ℛ with domain 𝒟 and range ℛ satisfies ε-indistinguishability if, for any two adjacent inputs d,d^'∈𝒟 and for any subset of outputs S ⊆ℛ, it holds that:
ℙ[ℳ(d) ∈ S]/ℙ[ℳ(d') ∈ S]≤ e^ε.
At a high level, a randomized mechanism is differentially-private if the output distributions from two adjacent datasets are (near) indistinguishable, where any two datasets are considered adjacent that differ in at most one record. An adversary seeing the output can therefore not discriminate if a particular observation was used. This notion of indistinguishability is controlled by the parameter ε acting as a privacy budget. It defines the strength of the privacy guarantee (with ε→ 0 representing strict privacy and ε→∞ representing the lack of privacy). To enhance the accounting of the privacy budget, several relaxations exist <cit.>.
Depending on the setting, DP can be categorized into global DP <cit.> and local DP <cit.>.
Global DP addresses the setting in which privacy is defined with respect to aggregate statistics. It assumes a trusted curator who can collect and access raw user data. The randomized mechanism is applied to the collected dataset to produce differentially private output for downstream use. With noise drawn from a predetermined distribution, the design of the randomized mechanism builds upon an additive noise mechanism. Commonly used distributions for adding noise include Laplace and Gaussian distribution <cit.>. The noise is further calibrated according to the function’s sensitivity and the privacy budget. This technique is useful for controlling the disclosure of private information of records processed with real-valued and vector-valued functions.
Local DP addresses the setting in which privacy is defined with respect to individual records. In contrast to global DP, local DP does not rely on a trusted curator. Instead of a trusted curator that applies the randomized mechanism, the randomized mechanism is applied to all records independently to provide plausible deniability <cit.>. The randomized mechanism to achieve local DP is typically Randomized Response (RR) <cit.>, which protects private information by answering a plausible response to the sensitive query.
Since we aim for text-to-text privatization, formulating DP in the local setting through RR appears to be a natural solution. However, the strong privacy guarantees constituted by RR impose requirements that render it impractical for text. That is, RR requires that a sentence s must have a non-negligible probability of being transformed into any other sentence s^', regardless of how unrelated s and s^' are. This indistinguishability constraint makes it virtually impossible to enforce that the semantics of a sentence s are approximately captured by a privatized sentence s^'. Since the number of possible sentences grows exponentially with the length |s|, the probability of producing a sentence semantically related to s becomes vanishingly small under RR <cit.>.
§.§ Metric Differential Privacy
Metric Differential Privacy <cit.> is a generalization of differential privacy that originated in the context of location-based privacy, where locations close to a user are assigned with a high probability, while distant locations are given negligible probability. By using word embeddings as a corollary to geo-location coordinates, metric differential privacy was adopted from location analysis to textual analysis by <cit.>.
We follow the formulation of <cit.> for metric differential privacy in the context of textual analysis. Equipped with a discrete vocabulary set 𝒲, an embedding function ϕ : 𝒲→ℝ^n, where ℝ^n represents a high-dimensional embedding space, and a distance function d: ℝ^n×ℝ^n→ [0,∞) satisfying the axioms of a metric (i.e., identity of indiscernibles, symmetry, and triangle inequality), metric differential privacy is defined in terms of the distinguishability level between pairs of words. A randomized mechanism ℳ:𝒲→𝒲 satisfies metric differential privacy with respect to the distance metric d(·) if for any w,w^',ŵ∈𝒲 the output distributions of ℳ(w) and ℳ(w^') are bounded by Equation <ref> for any privacy budget ε > 0:
ℙ [ℳ(w) = ŵ]/ℙ [ℳ(w^') = ŵ]≤ e^ε d{ϕ(w),ϕ(w^')}.
This probabilistic guarantee ensures that the log-likelihood ratio of observing any word ŵ given two words w and w’ is bounded by ε d{ϕ(w),ϕ(w’)} and provides plausible deniability <cit.> with respect to all w ∈𝒲. We refer to <cit.> for a complete proof of privacy. For ℳ to provide plausible deniability, additive noise is in practice sampled from a multivariate distribution such as the multivariate Laplace distribution <cit.> or truncated Gumbel distribution <cit.>.
We recall that differential privacy requires adjacent datasets that differ in at most one record. Since the distance d(·) captures the notion of closeness between datasets, metric differential privacy instantiates differential privacy when the Hamming distance is used, i.e., if ∀ w,w^': d{ϕ(w),ϕ(w^')} = 1. Depending on the distance function d(·), metric differential privacy is therefore generally less restrictive than differential privacy. Intuitively, words that are distant in metric space are easier to distinguish than words that are in close proximity. Scaling the indistinguishability by a distance d(·) avoids the curse of dimensionality that arises from a large vocabulary 𝒲 and allows the mechanism ℳ to produce similar substitutions ŵ for similar w and w^'. However, this scaling complicates the interpretation of the privacy budget ε, as it changes depending on the metric employed.
§.§ Related Work
Grounded in metric differential privacy, text-to-text privatization implies that the indistinguishability of substitutions of any two words in the vocabulary is scaled by their distance.
<cit.> achieve this indistinguishability by generating a bag-of-words representation and applying the Earth Mover’s distance to obtain privatized bags.
In contrast to a bag-of-words representation, <cit.> formalized text-to-text privatization to operate on continuous word embeddings. Word embeddings capture the level of semantic similarity between words and have been popularized by efficient embedding mechanisms <cit.>. This mechanism was termed .
The issue with this mechanism is that the magnitude of the noise is proportional to the dimensionality of the vector representation. This translates into adding the same amount of noise to any word in the embedding space, regardless of whether this word is located in a dense or sparse region. For words in densely populated areas, adding noise that is large in magnitude renders it difficult for the mechanism to select reasonable substitutions, as nearby relevant words cannot be distinguished from other nearby but irrelevant words. For words in sparsely populated areas, adding noise of small magnitude renders the mechanism susceptible to reconstruction, as the word closest to a noisy representation is likely to be the original word.
To tackle some of the severe shortcomings of this mechanism, a variety of distance metrics have been employed to scale the indistinguishability, including the Hamming distance <cit.>, Manhattan distance <cit.>, Euclidean distance <cit.>, Mahalanobis distance <cit.> and Hyperbolic distance <cit.>.
While related extensions have focused almost exclusively on geometric properties to enhance text-to-text privatization, we focus on linguistic properties. We extend the mechanism by a candidate selection that directs substitutions based on matching grammatical properties and demonstrate that multivariate perturbations supported by grammatical properties substantially improve the utility of the surrogate texts in downstream tasks.
§ METHODOLOGY
Since text-to-text privatization operates directly on geometric space of embeddings, it is necessary to understand the structure of the embedding space. To get an understanding of the embedding space, we selected a subset of 1,000 most frequent words from the 100-dimensional embedding and manifolded them onto a two-dimensional representation. Enriched by grammatical properties derived from the universal part-of-speech tagset <cit.>, we chart a t-distributed stochastic neighbor embedding <cit.> in Figure <ref>.
We note that we derived each word's grammatical category without context, which may explain the general tendency towards nouns (presumably misclassified verbs). Regardless of potentially misclassified grammatical categories, we can draw the following conclusions: while nouns, verbs, and adjectives are distributed throughout the embedding space, we find distinct subspaces for numerals and punctuation. This is because word embeddings are trained towards an objective that ensures that words occurring in a common context have similar embeddings, disregarding their syntactic roles within the structure of a text. Considering that text-to-text privatization typically selects the nearest approximate neighbor after the randomized mechanism is queried as substitution, we expect this mechanism to fall short in producing syntactically coherent texts.
We adopt the multivariate Laplace mechanisms of <cit.>. Aimed at preserving the grammatical category of a word after its substitution, we incorporate a constraint into the candidate selection that directs the randomized mechanism towards words with a matching grammatical category. This constraint is incorporated as follows: we create a dictionary that serves as a lookup table for the grammatical category of each word in the vocabulary and generalize the randomized mechanism to return a flexible k ≫ 1 (instead of k=1) approximate nearest neighbors. If available, a word is replaced by the nearest word (measured from the noisy representation) that matches its grammatical category. Otherwise, the protocol reduces to the canonical mechanism. The computational overhead of the candidate selection is O(log k).
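The following sketch illustrates the constrained candidate selection (our own illustration, not the authors' code; it reuses sample_metric_dp_noise from the earlier sketch, and the part-of-speech dictionary pos_of as well as the exhaustive nearest-neighbor search are assumptions):

import numpy as np

def privatize_word_pos(word, words, embeddings, pos_of, epsilon, k, rng):
    # pos_of: dict mapping every vocabulary word to its grammatical category (POS tag)
    idx = words.index(word)
    noisy = embeddings[idx] + sample_metric_dp_noise(embeddings.shape[1], epsilon, rng)
    # k (approximate) nearest neighbors of the noisy representation, closest first
    order = np.argsort(np.linalg.norm(embeddings - noisy, axis=1))[:k]
    candidates = [words[int(i)] for i in order]
    # replace by the nearest candidate with a matching grammatical category, if any
    for candidate in candidates:
        if pos_of[candidate] == pos_of[word]:
            return candidate
    # otherwise fall back to the unconstrained protocol (nearest neighbor, i.e., k = 1)
    return candidates[0]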
This modification introduces the size of the candidate pool k as an additional hyperparameter. Intuitively, k should be chosen based on the geometric properties of the embedding, i.e., k should be large enough to contain at least one other word with a matching grammatical category.
We investigate our modification in terms of its capability to preserve grammatical properties and its implications. For reasons of reproducibility, we base all experiments on the 100-dimensional embedding.
To keep the computational effort feasible, we formed a vocabulary that consists of 24,525 words reflecting a natural distribution of grammatical categories: 26 pronouns, 5,000 nouns, 5,000 verbs, 5,000 adjectives, 4,341 adverbs, 92 adpositions, 5,000 numerals, 6 conjunctions, 2 particles, 39 determiner, and 19 punctuations.
Once we determined our sub-vocabulary, we calculated the necessary size of the candidate pool k. We counted the number of steps required from each word in our subset until a neighbor with a matching category was found. Averaging this count revealed that each word is linked to another word with a matching category within a neighborhood of 20. We thus parameterized the candidate pool to a fixed k=20 across all experiments.
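This neighborhood statistic can be estimated with a short routine such as the sketch below (illustrative only; it assumes the aligned words, embeddings, and pos_of structures from the previous sketches and uses an exhaustive O(|𝒲|^2) search):

import numpy as np

def average_rank_of_matching_pos(words, embeddings, pos_of):
    # for every word, walk through its neighbors by increasing distance and record
    # the rank of the first neighbor that shares the word's grammatical category
    ranks = []
    for i, word in enumerate(words):
        order = np.argsort(np.linalg.norm(embeddings - embeddings[i], axis=1))
        for rank, j in enumerate(order[1:], start=1):   # skip the word itself
            if pos_of[words[int(j)]] == pos_of[word]:
                ranks.append(rank)
                break
    return float(np.mean(ranks))   # about 20 for the vocabulary described above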
§ EXPERIMENTS
We conduct a series of experiments at a strategically chosen set of privacy budgets ε = {5,10,25} to demonstrate the relevance of directing substitution to words that share similar syntactic roles rather than restricting substitution only to words that appear in a similar semantic context.
These privacy budgets represent three privacy regimes: ε=5 for high privacy, ε=10 for moderate privacy, and ε=25 for low privacy.
§.§ Linguistic Analysis
We intend to assess the effectiveness of our constraint to the candidate selection in retaining grammatical properties of words after substitution. We query each word contained in the vocabulary 100 times and record the grammatical category for its surrogate word in the form of a frequency count.
Given a moderate privacy budget of ε = 10, Figure <ref> visualizes the calculated frequency counts similar to a confusion matrix. The diagonal represents the preservation capability of grammatical categories, i.e., universal part-of-speech tags. A comparison across ε∈{5,10,25} is deferred to Figure <ref> in the Appendix <ref>.
We start with the examination of the baseline mechanism in Figure <ref>. Consistent with the independent and concurrent results of <cit.>, our results indicate that the privatization mechanism is likely to cause grammatical errors. <cit.> estimate that the grammatical category changes in 7.8% of substitutions, whereas we calculate about 45.1% for an identical privacy budget. This difference arises from the fact that <cit.> only consider the four most frequent categories of nouns, verbs, adjectives, and adverbs, while we consider eleven categories according to the universal part-of-speech tagset. In addition to the number of grammatical categories, we record the fluctuations between categories, while <cit.> only measure whether a category was changed. Owing to the tracking of the fluctuations, we find a disparate impact on the preservation of the grammatical categories. We find that the preservation of the grammatical categories of words declines with growing guarantees for privacy, until the text after privatization consists almost entirely of nouns.
We compare these results to our constrained mechanism in Figure <ref>. With the introduction of a constrained candidate pool of size k=20, we observe an increased likelihood that surrogate texts retain the grammatical structure of the original texts. This can be seen by the dominance of the vertical line in Figure <ref> compared to initial signs of a diagonal line in Figure <ref>. Compared to the baseline value 45.1%, the preservation capability bounds at 81.4%.
We illustrate the alignment of grammatical properties between words from a sensitive text and their surrogate words with an example sentence in Figure <ref>. We note that our syntactic guidance prevents words from being misleadingly replaced by numbers (and vice versa), as in the case of "before" being replaced by "1979".
§.§ Geometric Analysis
Intuitive properties for analyzing a mechanism operating on embeddings include magnitude, direction, and orthogonality. Since embeddings capture word co-occurrences, we expect most substitutions to be located in the same region of an embedding space and in the same direction from the embedding origin.
We aim to measure the differences in the Euclidean distances of words to their corresponding substitutes generated by the baseline ℳ(w) and our constrained ℳ^'(w). The results capture ‖ϕ(w) - ϕ(ŵ)‖ and ‖ϕ(w) - ϕ(ŵ^')‖, respectively. Since the distances are zero when w = ŵ or identical when ŵ = ŵ^', we are only interested in the distances when a substitution has occurred and the mechanisms decided on distinct candidates for their substitution, i.e., ℳ(w) ≠ℳ^'(w) ≠ w.
Figure <ref> depicts the calculated distances for querying words from our subset 100 times. The distance approximation was carried out at a strategically chosen discrete set of values of ε = {5,10,25}. Since the distance is calculated as the difference between words and their substitutes, lower values indicate better substitutions. The distances depend on the amount of noise injected into the randomized mechanisms. The more noise, the larger the distances. Apparent across all privacy budgets, the distances between words and their substitutions are slightly shifted towards smaller distances. Since the distributions of distances are almost identical, we can take a principled guess that substitution in both mechanisms generally occurs within a similar region of the embedding space.
§.§ Privacy Analysis
Confronted with a non-zero probability that the candidate pool contains the sensitive word and no other word exists in the candidate pool with matching grammatical properties, it could be argued that the privacy guarantees suffer from the increased risk of self-substitution. By calculating the plausible deniability <cit.>, we evaluate the risk of self-substitution arising from our grammatically constrained candidate selection.
In line with previous studies on text-to-text privatization <cit.>, we record the following statistics as proxies for plausible deniability.
∙ N_w = ℙ{ℳ(w) = w } measures the probability that a word is not substituted by the mechanism. This is approximated by counting the number of times a word w is substituted by the same word after running the mechanism 100 times.
∙ S_w = |{w^' : ℙ{ℳ(w) = w^'} > 0}| measures the effective support in terms of the number of distinct substitutions produced for a word by the mechanism. This is approximated by the cardinality of the set of words w^' observed after running the mechanism 100 times.
Since the noise is scaled by 1/ε, we can make a connection between the proxy statistics and the privacy budget ε. A smaller ε corresponds to a more stringent privacy guarantee. Adding more noise to the vector representation of a word results in fewer self-substituted words (lower N_w) and a more diverse set of distinct substitutions (higher S_w). A higher ε corresponds to a less stringent privacy guarantee. This translates into fewer substitutions (higher N_w) and a narrower set of distinct substitutions (lower S_w). From a distributional perspective, it follows that N_w (S_w) should be positively (negatively) skewed to provide reasonable privacy guarantees.
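Both proxies can be estimated by repeated querying, as in the following sketch (our own illustration; it assumes a callable mechanism(word) implementing either variant and, as above, 100 queries per word):

from collections import Counter

def plausible_deniability_proxies(word, mechanism, n_queries=100):
    # N_w: number of self-substitutions out of n_queries
    # S_w: number of distinct surrogate words observed (effective support)
    outputs = [mechanism(word) for _ in range(n_queries)]
    counts = Counter(outputs)
    return counts[word], len(counts)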
For privacy budgets of ε = {5,10,25}, we present the distributions of N_w and S_w over 100 independent queries in Figure <ref>. While lower values of ε are desirable from a privacy perspective, it is widely known that text-to-text privatization requires slightly larger privacy budgets to provide reasonable utility in practice. Values of ε up to 20 and 30 have been reported for related mechanisms <cit.>. The histograms serve as visual guidance for comparing (and selecting) the required privacy budget ε. As both mechanisms build upon the Euclidean distance as a metric, their privacy guarantees should match when using the same privacy budget ε. Directing the substitution to words with a matching grammatical category results in marginal changes to the plausible deniability. This is visually recognizable by the distribution shift. The grammatical constraint risks slightly more self-substitutions and a reduced effective support. This is because words are substituted (almost) only by words from the same grammatical category, reducing the pool of unique words that are appropriate for substitution and thus reducing the effective support of the multivariate mechanism. Out of 100 queries per word given a fixed privacy budget of ε=10, self-substitution increases on average from about 29 to 32, while the effective support decreases on average from about 66 to 61. The fact that N_w does not exceed 50 and S_w does not fall below 50 indicates that plausible deniability is assured for the average-case scenario. We conclude that the grammatically constrained candidate selection does not come at the expense of privacy and can therefore be incorporated into the privatization step without the need to recalibrate the proxies for plausible deniability.
Rather than compromising privacy, our constrained candidate selection can be alternatively viewed as a barrier against reconstruction attacks. Recall that the nearest neighbor search is generalized from k=1 to k≫1. This generalization may impede naïve inversion attacks such as the one proposed in <cit.>, in which an adversary attempts to recover a word by finding the nearest neighbor to the substitute word. Although this inversion attack is not comprehensive, it can be used as a reference point for investigations regarding the robustness of privacy attacks. We include the setup and the results of a membership inference attack in the Appendix <ref>.
§.§ Utility Analysis
To evaluate whether the preservation of syntactic roles translates to better utility in downstream tasks, we conduct experiments with the model of <cit.> on a subset of the benchmark of <cit.>.
Once for each mechanism under comparison, we privatize the training corpus of each dataset. Since the privacy guarantees do not exactly match, we calculate the available privacy budget for each mechanism such that the .90 quantile of words is plausibly deniable. This resembles a practical scenario in which we allow a negligible subset of words without provable privacy guarantees.
We report the performance scores in Table <ref>. A baseline trained on unprotected data is listed as an upper bound on the performance. All trials mimic the training of the baseline. To privatize the texts in the datasets, we use our modification with a varying candidate pool of size k ∈{1,20}. Recall that k=1 reduces our modification to the multivariate mechanisms of <cit.>. Although we focus our analysis on a worst-case scenario in which the .90-quantile of words is plausibly deniable, we include test results for an average-case scenario in which only the .50-quantile of words enjoys plausible deniability.
On average, the downstream model reaches 81.46% when trained on the sensitive texts. Compared to this baseline, the model trained on surrogate texts attains 55.45% when the candidate pool is k=1. By broadening the candidate pool to k=20 and directing the substitution to words with matching grammatical categories, the model trained on surrogate texts ranks at 60.11%. This corresponds to narrowing the performance loss by 4.66%.
Contrary to our initial assumption that preserving the syntactic role of words is particularly relevant to sentiment analysis, we find evidence that accounting for syntactic information during privatization benefits a variety of downstream tasks. We conclude that linguistic guidance is a legitimate alternative perspective to previous extensions that focus on the geometric position of words in the embedding.
§ CONCLUSION
Privatizing written text is typically achieved through text-to-text privatization over the embedding space. Since text-to-text privatization scales the notion of indistinguishably of differential privacy by a distance in the geometric space of embeddings, prior studies focused on geometric properties <cit.>.
Unlike prior studies on amplifying text-to-text privatization by accounting for the geometric position of words within the embedding space, we initialized a set of strategies for amplification from the perspective of grammatical properties, such as category, number, or tense.
By incorporating grammatical properties in the form of part-of-speech tags into text-to-text privatization, we direct the privatization step towards preserving the syntactic role of a word in a text. We experimentally demonstrated that surrogate texts that conform to the structure of the sensitive text outperform surrogate texts that strictly rely on co-occurrences of words in the embedding space.
Limitations. We note that directing the substitution to candidates with matching grammatical categories incurs additional information leakage that is not accounted for by our modification. To remedy the unaccounted information leakage, one could recast the candidate selection through the exponential mechanism <cit.>.
§ ACKNOWLEDGMENT
We gratefully acknowledge that this research was supported in part by the German Federal Ministry of Education and Research through the Software Campus (ref. 01IS17045).
§ APPENDICES
§.§ Linguistic Evaluation
Covering three levels of privacy budgets ε, we include the detailed linguistic analysis of the multivariate substitutions obtained from <cit.> in Figure <ref>.
Without a constraint on syntactic roles, we cannot expect the privatization step to yield surrogate texts that conform to the structure of the sensitive texts. From the diagonal, it can be clearly seen that our grammatical constraint retains most grammatical categories across all privacy budgets and all types of categories. At a low privacy budget of ε=5, the preservation capability of grammatical categories is 0.4163. At a moderate privacy budget of ε = 10, the preservation capability bounds at 0.8145. At a high privacy budget of ε = 25, the advantage in the preservation capability diminishes as the perturbation probability in general decreases.
§.§ Setup and Results from Membership Inference Attack
To eliminate the possibility that the performance gain is caused by mismatching privacy guarantees, we perform a Membership Inference Attack (MIA) as introduced by <cit.>. Given black-box access to a model, an adversary attempts to infer the presence of records from an inaccessible training corpus. We follow the experimental setup of <cit.> for our membership inference attack. To maximize the attack uncertainty, we divide the dataset into four disjoint partitions with an equal number of members and non-members, respectively. The target model is trained on the first partition after privatization by each mechanism, whereas the shadow model is trained on the non-privatized second partition. The shadow model architecturally mimics the target model. We then build an attack model composed of a two-layer multi-layer perceptron with a hidden size of 64 and non-linear activations. To train the attack model, we feed it the logits obtained from the second and third partitions given by the shadow model, where logits from the second partition are labeled as members and logits from the third partition are labeled as non-members. Once the attack model is trained, we feed it the logits of the first and the fourth partition obtained by the target model, where logits from the first partition are labeled as members and logits from the fourth partition are labeled as non-members.
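A compact sketch of the attack model on top of precomputed logits is shown below (our own simplified illustration; the scikit-learn classifier, the array names, and the choice of two 64-unit hidden layers are assumptions consistent with the description above):

import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import average_precision_score

def run_membership_attack(shadow_member_logits, shadow_nonmember_logits,
                          target_member_logits, target_nonmember_logits):
    # train the attack model on logits produced by the shadow model
    x_train = np.vstack([shadow_member_logits, shadow_nonmember_logits])
    y_train = np.concatenate([np.ones(len(shadow_member_logits)),
                              np.zeros(len(shadow_nonmember_logits))])
    attack = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
    attack.fit(x_train, y_train)
    # evaluate on logits produced by the target model
    x_test = np.vstack([target_member_logits, target_nonmember_logits])
    y_test = np.concatenate([np.ones(len(target_member_logits)),
                             np.zeros(len(target_nonmember_logits))])
    scores = attack.predict_proba(x_test)[:, 1]
    # area under the precision-recall curve of the attack
    return average_precision_score(y_test, scores)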
We measure the success rate of our membership attack using macro-averaged metrics for precision and recall. Precision captures the fraction of records for which the membership was correctly inferred. Recall captures the coverage of the membership attack. Since the baseline accuracy of the membership attack is 0.5, we consider a randomized mechanism to be provably private if and only if it holds the attack accuracy close to that of random guessing. We report the attack accuracy as the area under the precision-recall curve. We report a non-private membership accuracy of 0.53. Given a practical privacy budget, both mechanisms fluctuate around the 0.5 mark averaged across three independent trials. With no clear hint, we thus conclude that the performance gain induced by a grammatical constraint cannot be attributed to a latent privacy loss.
|
http://arxiv.org/abs/2306.03004v1
|
20230605161432
|
Stratospheric dayside-to-nightside circulation drives the 3-D ozone distribution on synchronously rotating rocky exoplanets
|
[
"Marrick Braam",
"Paul I. Palmer",
"Leen Decin",
"Maureen Cohen",
"Nathan J. Mayne"
] |
astro-ph.EP
|
[
"astro-ph.EP"
] |
Determining the habitability and interpreting future atmospheric observations of exoplanets requires understanding the atmospheric dynamics and chemistry from a 3-D perspective. Previous studies have shown significant spatial variability in the ozone layer of synchronously rotating M-dwarf planets, assuming an Earth-like initial atmospheric composition. We use a 3-D Coupled Climate-Chemistry model to understand this distribution of ozone and identify the mechanism responsible for it. We document a previously unreported connection between the ozone production regions on the photochemically active dayside hemisphere and the nightside devoid of stellar radiation and thus photochemistry. We find that stratospheric dayside-to-nightside overturning circulation can advect ozone-rich air to the nightside. On the nightside, ozone-rich air subsides at the locations of two quasi-stationary Rossby gyres, resulting in an exchange between the stratosphere and troposphere and the accumulation of ozone at the gyre locations. We identify the hemispheric contrast in radiative heating and cooling as the main driver of this ozone circulation. Dynamically-driven chemistry also impacts other tracer species in the atmosphere (gaseous and non-gaseous phase) as long as chemical lifetimes exceed dynamical lifetimes. These findings illustrate the 3-D nature of planetary atmospheres, predicting spatial and temporal variability that will impact spectroscopic observations of exoplanet atmospheres.
Planets and satellites: terrestrial planets – Planets and satellites: atmospheres – Planets and satellites: composition
§ INTRODUCTION
The past two decades have seen the discovery of numerous Earth-size exoplanets, with a substantial fraction of them orbiting in the circumstellar Habitable Zone <cit.>. Earth-size planets are preferentially discovered around M-dwarf stars <cit.>, because they are the most abundant stellar type, have relatively small radii, and are relatively cool, allowing for exoplanets in short-period orbits. The habitability of such exoplanets has been debated in light of the stellar and planetary environments <cit.>. Comprehensive numerical simulations that describe the physical and chemical properties of a planetary atmosphere in such environments are essential to understanding habitability and interpreting spectroscopic observations.
Since M stars are cooler and smaller than other stellar types, a planet in the Habitable Zone orbits at a small orbital distance and feels a strong gravitational pull from the host star. This can lead to spin-orbit resonances for the planet, so-called tidal locking, of which the most extreme case is the 1:1 resonant orbit or synchronous rotation <cit.>. Simulations with General Circulation Models (GCMs) help us understand how synchronous rotation affects the planetary atmosphere and surface habitability. First, synchronous rotation creates distinct hemispheric environments and a large temperature difference between the dayside and nightside <cit.>. Second, synchronous rotation leads to distinct photochemical environments, with strong photochemical production and destruction on the dayside and an absence of photochemistry on the nightside <cit.>. Depending on the rotation period, synchronous rotation can also lead to atmospheric circulation that is characterised by thermally direct circulation for slowly rotating planets <cit.>. The existence of this large-scale circulation requires the Rossby deformation radius to exceed the planetary radius <cit.>, which is the case for planets like Proxima Centauri b, Trappist-1 e to h, LHS-1140 b and GJ 667 C c, assuming an Earth-like atmosphere. The dayside-nightside contrast leads to an overturning circulation, with upwelling on the dayside and downwelling on the nightside <cit.>. This vertical motion results in a superposition of planetary-scale Rossby and Kelvin waves, which drives eddy momentum equatorward <cit.>. A typical part of this wave structure is a pair of quasi-stationary cyclonic gyres on the nightside <cit.>. The equatorward momentum feeds the superrotating jet <cit.>. The overturning circulation is a dominant component of the dayside-to-nightside heat transport <cit.>.
Atmospheric circulation impacts the spatial and temporal distribution of chemical species and other tracers such as clouds <cit.> and photochemical hazes <cit.>. On Earth, the Brewer-Dobson circulation controls the large-scale distribution of chemical tracers such as ozone (O_3) and water vapour in the atmosphere <cit.>. Ozone formation is initiated by photochemistry through the Chapman mechanism <cit.>, which is strongest at tropical latitudes. The Brewer-Dobson circulation describes the ascent of ozone-rich air in the tropics, followed by equator-to-pole transport and descending air motions at high latitudes, leading to meridional variations with a relatively enhanced ozone layer at high latitudes.
<cit.> simulated a tidally-locked Earth using a 3-D climate-chemistry model (CCM), which consists of a GCM coupled to a photochemical network to study the relation between (photo)chemistry, atmospheric dynamics and the thermal structure of the atmosphere. They find a breakdown of the Brewer-Dobson circulation, and instead predict that ozone accumulates on the nightside, where it has a long lifetime <cit.>. <cit.> investigated stratospheric circulation on tidally-locked exoplanets and the potential impact on the distribution of chemical species. For planets with short orbital periods (<25 days), tropical Rossby waves can induce strong equatorial jets in the stratosphere with pole-to-equator transport of airmasses <cit.>. <cit.> showed the meridional distribution of ozone from CCM simulations, confirming that this pole-to-equator circulation essentially confines photochemical species such as ozone to the equatorial regions. The existence of extratropical Rossby waves or damping of tropical Rossby waves prevents this equatorial confinement. Instead, a thermally-driven overturning circulation can drive equator-to-pole transport of photochemical species <cit.>, leading to meridional structure with enhanced ozone at high latitudes. For planets like Proxima Centauri b, <cit.> find a relatively weak tropical Rossby wave, with a thermally-driven equator-to-pole circulation existing in the stratosphere (see their Figure 12). For such planets, the enhanced ozone abundances at high latitudes were later also simulated by <cit.>.
The distribution of radiatively active species such as ozone impacts habitability <cit.>, and will determine what spectroscopic observations of the planetary atmosphere will look like <cit.>. Despite reporting a non-detection for the atmosphere, the observation of TRAPPIST-1 b illustrates the capability of JWST to characterise Earth-size exoplanets <cit.>. For the exoplanets that have an atmosphere we need to understand their 3-D nature, including circulation, clouds, and atmospheric chemistry, which motivates the application of 3-D CCMs to exoplanetary environments. Such simulations of synchronously rotating exoplanets predict a significant zonal structure in the ozone layer for planets around M-dwarfs like Proxima Centauri b <cit.> and haze distribution for hot Jupiters <cit.>. <cit.> found that ozone has a much longer chemical lifetime on the nightside as compared to the dayside of M-dwarf exoplanets. These long nightside lifetimes lead to accumulation of ozone in the nightside gyres, despite the absence of stellar radiation needed to initiate the relevant photochemistry. This spatially variable ozone layer indicates a connection between the photochemically active dayside regions and nightside gyres, which is currently not understood.
In this paper, we aim to understand the dayside-nightside connection and identify the physical and chemical mechanism that drives the spatially variable ozone layer on synchronously rotating exoplanets around M-dwarf stars. We use a 3-D CCM to investigate the spatial and temporal structure of atmospheric ozone, using a configuration for Proxima Centauri b. In Section <ref>, we briefly describe the CCM and introduce metrics used to diagnose atmospheric circulation. This will be followed by a description of the ozone distribution and its relation to atmospheric circulation in Section <ref>. In Section <ref>, we identify a possible driver of the circulation, investigate variability in our simulations and investigate potential observability. Finally, we present the conclusions of our study in Section <ref>.
§ METHODS
This section starts with a description of the 3-D coupled climate-chemistry model. This is followed by the introduction of useful metrics to diagnose the atmospheric circulation and its impact on chemistry in Section <ref>. Finally, we summarize the experimental setup in Section <ref>.
§.§ Coupled Climate-Chemistry Model
The 3-D CCM consists of the Met Office Unified Model (UM) as the GCM coupled with the UK Chemistry and Aerosol framework (UKCA), in the configuration described by <cit.>. UM-UKCA is used to simulate the atmospheric dynamics and chemistry for Proxima Centauri b, but the results apply to other planets in similar orbits around M-dwarf stars. We simulate an aquaplanet with 1 bar or 1000 hPa surface pressure <cit.> and use a horizontal resolution of 2^∘ by 2.5^∘ in latitude and longitude, respectively. The atmosphere extends up to 85 km in 60 vertical levels. We assume that Proxima Centauri b is in a 1:1 resonant orbit around its M-dwarf host star and use the orbital parameters as shown in Table <ref>. The substellar point is located at 0^∘ latitude (ϕ) and 0^∘ longitude (λ).
The UM is used in the Global Atmosphere 7.0 configuration <cit.>, including the ENDGame dynamical core to solve the non-hydrostatic fully compressible deep-atmosphere equations of motion <cit.>. Parametrized sub-grid processes include convection (mass-flux approach, based on ), water cloud physics <cit.>, turbulent mixing <cit.> and the generation of lightning <cit.>. The incoming stellar radiation for 0.5 nm to 5.5 μm is described by the v2.2 composite spectrum for Proxima Centauri from the MUSCLES spectral survey <cit.> and extended to 10 μm using the spectrum from <cit.>. Radiative transfer through the atmosphere is treated by the Suite of Community Radiative Transfer codes based on Edwards and Slingo (SOCRATES) scheme <cit.>. The UM is one of the leading models in predicting the Earth's weather and climate and has been adapted for the study of several types of exoplanets, including terrestrial planets <cit.> but also Mini-Neptunes <cit.> and hot Jupiters <cit.>. Furthermore, the UM was part of the TRAPPIST-1e Habitable Atmosphere Intercomparison (THAI) project <cit.>.
We use UKCA to simulate the 3-D atmospheric chemical composition, by including its description of gas-phase chemistry. UKCA is fully coupled to the UM for large-scale advection, convective transport and boundary layer mixing of the chemical tracers <cit.>. The Fast-JX photolysis scheme is implemented within UKCA, to calculate photolysis rates of chemical species in the atmosphere <cit.>. By taking into account the varying optical depths of Rayleigh scattering, absorbing gases, and clouds from the UM, Fast-JX provides an interactive treatment of photolysis in calculating the 3-D distribution of chemical species in the atmosphere. We distribute the stellar flux from Proxima Centauri over the 18 wavelength bins of Fast-JX, as shown in <cit.> and their Figure 1. These fluxes are synchronised to the orbital distance of Proxima Centauri b which provides an interactive calculation of photolysis rates over the planetary orbit. The chemistry included is a reduced version of UKCA's Stratospheric-Tropospheric scheme <cit.>, including the Chapman mechanism of ozone formation, and the hydrogen oxide (HO_x=H+OH+HO_2) and nitrogen oxide (NO_ x=NO+NO_2) catalytic cycles. This results in 21 chemical species that are connected by 71 reactions. A full list of species and reactions can be found in the appendix of <cit.>.
§.§ Metrics
The meridional circulation is diagnosed using the mean meridional mass streamfunction (in kg s^-1), which calculates the northward mass flux above pressure P:
Ψ_m = (2π R_p cosϕ/g) ∫_0^P v dP,
with R_p as the planetary radius, g as the gravitational acceleration and v as the zonal and temporal mean of the northward velocity component at latitude ϕ. Earlier studies using this metric for synchronously rotating exoplanets <cit.> showed 1) the existence of tropospheric Hadley and Ferrel cells transporting heat and mass from the equatorial to polar regions and 2) the impact of orbital configuration on the Brewer-Dobson circulation in the stratosphere <cit.>.
However, with the fixed substellar point of synchronously rotating planets, the mean meridional circulation varies depending on the position relative to the substellar point: for example, the hemispheric mean meridional circulation can vary significantly between the dayside and nightside. The zonal circulation is analogous to the Walker circulation cells on Earth, with rising motion at the location of the heat source, followed by eastward and westward flow aloft and, after descending on the nightside, a return flow along the surface back to the heat source <cit.>. The mean zonal mass streamfunction can be used to calculate the eastward mass flux above pressure P:
Ψ_z = (2π R_p/g) ∫_0^P u dP,
where u is the meridional mean of the zonal velocity component. For slow rotators, the mean zonal circulation connects the substellar and antistellar points <cit.>. The substellar-antistellar circulation also consists of a cross-polar flow <cit.>.
As elaborated in Section <ref>, the total wind flow on synchronously rotating exoplanets consists of several components. We perform a Helmholtz decomposition of the total wind flow, following <cit.>. This decomposes the total wind flow into its rotational, eddy rotational, and divergent components. The divergent wind mainly drives the substellar-antistellar overturning circulation <cit.>. Since the divergent component is roughly isotropic around the substellar point, we can move from the usual latitude-longitude or geographic coordinate system to a tidally-locked coordinate system <cit.>. The transformation between geographic coordinates and tidally-locked coordinates is illustrated in Figure <ref>. The tidally-locked latitude ϕ' is measured as the angle from the terminator and the tidally-locked longitude λ' is the angle about the substellar point, with the geographic North Pole located at (ϕ',λ')=(0,0) in tidally-locked coordinates. The substellar point and antistellar point correspond to ϕ'=90^∘ and -90^∘, respectively. It was shown by <cit.> that integrating the continuity equation in tidally-locked coordinates over λ' leads to the tidally-locked mean meridional mass streamfunction:
Ψ'_m = (2π R_p cosϕ'/g) ∫_0^P v' dP,
where v' is the zonal mean of the meridional velocity component at tidally-locked latitude ϕ'. In this system, the meridional mass streamfunction calculates the mass flux toward the antistellar point (along lines of constant λ'), connecting the substellar and antistellar points and also taking cross-polar flow into account.
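For reference, the coordinate transformation itself can be written compactly; the sketch below is our own illustration in one consistent convention (substellar point at 0^∘ latitude and 0^∘ longitude, ϕ' measured from the terminator, λ' measured about the substellar point with the geographic North Pole at λ'=0), not the exact routine used for the simulations.

import numpy as np

def to_tidally_locked(lat_deg, lon_deg):
    # geographic (lat, lon) in degrees -> tidally-locked (lat', lon') in degrees;
    # lat' = +90 at the substellar point, -90 at the antistellar point, 0 at the terminator
    lat = np.radians(lat_deg)
    lon = np.radians(lon_deg)
    lat_tl = np.arcsin(np.cos(lat) * np.cos(lon))                 # sin(lat') = cos(lat) cos(lon)
    lon_tl = np.arctan2(np.cos(lat) * np.sin(lon), np.sin(lat))   # azimuth about the substellar point
    return np.degrees(lat_tl), np.degrees(lon_tl)

# e.g. the geographic North Pole maps to (lat', lon') = (0, 0),
# and the substellar point (0, 0) maps to lat' = 90.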
Since we are particularly interested in the transport of ozone around the planet, we weight the stream functions using the ozone mass mixing ratio (χ_O3), which is measured as the mass of ozone per unit mass of air in a parcel. This gives us the ozone mass streamfunction:
Ψ'_O_3 = Ψ'×χ_O_3,
which can be applied generally using any of the streamfunctions in Equations <ref>, <ref> or <ref> to give the ozone-weighted meridional, zonal, or the tidally-locked meridional mass streamfunction.
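As an illustration of how these diagnostics can be evaluated from model output on pressure levels, the sketch below computes Ψ_m with a simple cumulative integral and applies the ozone weighting; it is our own illustration, and the array names, level ordering, and the crude rectangle-rule integration are assumptions.

import numpy as np

def meridional_mass_streamfunction(v_mean, pressure_pa, lat_deg, r_planet, g):
    # v_mean: zonally and temporally averaged northward wind, shape (n_lev, n_lat),
    #         ordered from the top of the atmosphere (lowest pressure) downwards
    # pressure_pa: pressure levels in Pa, shape (n_lev,)
    # r_planet, g: planetary radius [m] and gravitational acceleration [m s^-2] (Table <ref>)
    coslat = np.cos(np.radians(lat_deg))[None, :]
    dp = np.diff(pressure_pa, prepend=0.0)[:, None]
    cumulative = np.cumsum(v_mean * dp, axis=0)               # integral of v dP from P=0 down to P
    return 2.0 * np.pi * r_planet * coslat / g * cumulative   # kg s^-1

def ozone_weighted_streamfunction(psi, chi_o3):
    # weight any of the streamfunctions by the ozone mass mixing ratio
    return psi * chi_o3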
§.§ Experimental Setup
We use the final state of the `Chapman+HO_x+NO_x' simulation from <cit.> for the analysis. The atmosphere was initialized at an Earth-like atmospheric composition, using preindustrial values of N_2, O_2 and CO_2 <cit.>. Water vapour profiles are interactively determined by evaporation from the slab ocean. The HO_x and NO_x species are initialized at mass mixing ratios of 10^-9 and 10^-15, respectively. We report results from our simulation as 600-day mean of the CCM output (equal to ∼50 orbits of Proxima Centauri b) after spinning up for 20 Earth years, to ensure the simulation has reached a dynamical and chemical steady state. The dynamical steady state was determined by the stabilisation of the surface temperature and radiative balance at the top of the atmosphere. The chemical steady state was determined by the stabilisation of ozone as a long-lived species, through the total column and volume mixing ratios. In diagnosing the impact of dynamical processes on the ozone distribution, parts of the spin-up period have also been used to plot the evolution of chemically inert tracers (see Figure <ref> below). The analysis of temporal variability in Section <ref> is based on a 6-day output over 900 days of simulation after reaching a steady state, to ensure we include potential variability at longer timescales.
§ RESULTS
In this section, we start with a brief description of the planetary climate and ozone layer. After that, we discuss the atmospheric circulation followed by its impact on the distribution of ozone around the planet, elaborating on the stratospheric overturning circulation. Lastly, we perform a comparison of relevant lifetimes in the atmosphere.
§.§ Planetary climate and atmospheric ozone
The simulated climate of Proxima Centauri b is broadly similar to that described by <cit.>. Furthermore, the formation of an ozone layer under quiescent stellar radiation is explained in detail by <cit.> and <cit.>. Here, we give a brief description of the details essential for this study. The simulated surface temperature of Proxima Centauri b is shown in Figure <ref>, using a geographic coordinate system in panel (a) and tidally-locked coordinate system in panel (b). Both panels show the dayside-to-nightside contrast characteristic of synchronous rotation, with dayside maxima in surface temperature of up to 289 K and minima of 157 K over the nightside Rossby gyres. Figure <ref>b demonstrates the usefulness of the tidally-locked coordinate system in identifying the dayside-to-nightside contrasts, with the terminator located at ϕ'=0^∘. The horizontal wind vectors are shown at P≈400 hPa, illustrating the tropospheric jet as well as the Rossby gyres on the nightside. The dayside-to-nightside circulation is part of an overturning circulation across multiple pressure levels that will be described in more detail in Section <ref>. At the locations of the nightside Rossby gyres <cit.>, we see the coldest areas on the planetary surface with air that is trapped and subject to radiative cooling. The atmospheric pressure in the gyres is relatively low, like the eye of tropical cyclones <cit.>. The gyres are relatively isolated from the rest of the hemisphere and their edges act as mixing barriers <cit.>. The gyres are a general feature of slowly rotating exoplanets in a synchronous orbit that have a single equatorial jet in the troposphere <cit.>.
We find a spatially variable distribution of ozone in Figure <ref>a, with a relatively thin dayside ozone layer and an accumulation of ozone on the nightside. Typical values for the vertically-integrated ozone column on Earth are 200–400 Dobson Units (DU: 1 DU=2.687×10^20 molecules m^-2), with lower values over the equatorial regions and the ozone hole and higher values over high-latitude regions <cit.>. For synchronously rotating planets, most of the dayside ozone column falls within this range. The locations of the nightside Rossby gyres correspond to the maxima in the thickness of the ozone column, reaching up to 1401 DU. The gyres are not fully symmetric, evident from their slightly different shapes and average ozone columns: the area-weighted mean column of the low-λ' gyre (for λ'≤70^∘ and λ'>320^∘) is equal to 626 DU and that of the mid-λ' gyre (110^∘<λ'≤220^∘) to 601 DU, both confined between tidally-locked latitudes -60^∘<ϕ'<0^∘. Figure <ref>b shows that the accumulation of ozone at the gyre locations mostly occurs in the lower atmosphere, at pressure levels corresponding to the troposphere (>100 hPa).
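For reference, the vertical integration behind such column values can be sketched as follows (our own illustration; it assumes hydrostatic balance and a profile of the ozone mass mixing ratio on pressure levels, and the constants are standard values):

import numpy as np

AVOGADRO = 6.022e23      # molecules per mole
M_O3 = 48.0e-3           # kg per mole of ozone
DOBSON_UNIT = 2.687e20   # molecules per m^2 per DU

def ozone_column_du(chi_o3, pressure_pa, g):
    # chi_o3: ozone mass mixing ratio per level (kg O3 per kg air), ordered top-down
    # pressure_pa: pressure at the same levels [Pa]; g: gravitational acceleration [m s^-2]
    # under hydrostatic balance the O3 number column is (1 / (m_molecule * g)) * integral of chi dP
    m_molecule = M_O3 / AVOGADRO
    dp = np.diff(pressure_pa, prepend=0.0)
    column_molecules = np.sum(chi_o3 * dp) / (m_molecule * g)
    return column_molecules / DOBSON_UNIT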
The existence of such a spatially variable ozone layer depends on a complex interplay between photochemistry and atmospheric dynamics and changes as a function of incoming stellar radiation and planetary rotation state <cit.>. The production mechanisms for atmospheric ozone are relatively well-understood and due to photochemistry: in the presence of stellar radiation molecular oxygen will dissociate and form ozone through the Chapman mechanism <cit.>. The 3-D impact of M-dwarf radiation on the Chapman mechanism has been explored by previous studies, both in quiescent <cit.> and flaring conditions <cit.>. In all cases, an ozone layer develops around the planet. As such exoplanets are likely to rotate synchronously around their host star <cit.>, stellar radiation and the photochemical production of ozone are limited to the planetary dayside. This is illustrated in Figure <ref>, showing the time-averaged chemical tendency of ozone. The tendency denotes the balance between the production and loss of ozone due to chemical processes. We find that ozone production mainly occurs at high ϕ'>40^∘ (i.e., close to the substellar point), whereas ozone production is practically absent at the locations of the nightside gyres (-60<ϕ'<0^∘). Hence, another mechanism must be driving the relatively enhanced ozone abundances at the locations of the nightside Rossby gyres.
§.§ Overturning circulations
The relationship between the ozone distribution in Figure <ref> and the global atmospheric circulation becomes clear through the mass streamfunctions, as defined in Section <ref>. From left to right, Figure <ref> shows the mean meridional mass streamfunctions Ψ_m, Ψ'_m and Ψ'_m,O_3 that have been calculated from the divergent wind component. A positive streamfunction (red contours) indicates clockwise circulation, and a negative streamfunction (blue) indicates anticlockwise circulation.
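The precise streamfunction definitions follow the equations given in the methods section (not repeated here). As a minimal sketch under standard assumptions, the meridional mass streamfunction can be computed from the zonal-mean divergent meridional wind as Ψ(φ,p) = (2πR_p cosφ/g)∫_0^p v̄ dp'; the gravity and planetary radius below are assumed illustrative values, not necessarily those used in the simulations.

```python
import numpy as np

G = 9.81        # m s^-2, Earth-like surface gravity assumed for illustration
R_P = 7.16e6    # m, assumed planetary radius

def meridional_mass_streamfunction(v_div, lat_deg, p_pa):
    """Standard meridional mass streamfunction (kg s^-1) from the zonal-mean
    divergent meridional wind v_div, shape (nlev, nlat), with the pressure
    axis p_pa (Pa) ordered from the model top downwards."""
    coslat = np.cos(np.deg2rad(lat_deg))                 # (nlat,)
    dp = np.diff(p_pa)[:, None]                          # (nlev-1, 1)
    layer_means = 0.5 * (v_div[1:] + v_div[:-1])         # trapezoidal layer means
    integral = np.vstack([np.zeros((1, v_div.shape[1])),
                          np.cumsum(layer_means * dp, axis=0)])
    return 2.0 * np.pi * R_P * coslat[None, :] / G * integral
```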
From Figure <ref>a, we find strong poleward transport of air at tropospheric pressures (>100 hPa) in a single thermally driven circulation cell <cit.>. Moving up into the stratosphere, we find stacked layers of clockwise and anticlockwise circulation. The existence of poleward transport between ∼50 and ∼1.5 hPa indicates additional thermally-driven circulation cells. These cells transport aerosols and chemical tracers such as ozone from the equator to the poles through the stratosphere <cit.>. This equator-to-pole transport leads to an enhanced high latitude ozone layer on the dayside in geographic coordinates, with mean ozone columns of ∼490 DU above 80^∘ North and South as compared to a mean of ∼290 DU between 10^∘ North and 10^∘ South <cit.>. Since the stellar radiation at the poles is too weak to initiate the photochemistry responsible for ozone production, this polar enhancement has to be due to the poleward transport of ozone produced in the equatorial regions.
Moving to tidally-locked coordinates using Ψ'_m in Figure <ref>b, we find a single overturning circulation cell that dominates the troposphere and transports air and heat from the dayside towards the nightside. A weaker anticlockwise circulating cell is present between the antistellar point and ϕ'≈-30^∘, induced by the temperature gradient between those two points. The absence of anticlockwise motion when moving to lower pressure levels in Figure <ref>b indicates that a connection between the tropospheric cell and the stratospheric circulation exists. An overturning circulation covers essentially all of the stratosphere, connecting the dayside and nightside. Air ascends in the ozone production regions (between 0.2 and 100 hPa, see Figure <ref>) and moves through the stratosphere towards the nightside, where it subsides at the locations of the nightside gyres and thus the locations of ozone accumulation as shown in Figure <ref>.
To quantify the impact of this mass transport on the distribution of ozone, we calculate the tidally-locked ozone-weighted mass streamfunction Ψ'_m,O_3 (Equation <ref>) as shown in Figure <ref>c. From the ozone mass streamfunction we infer that the circulation of ozone through the stratosphere provides a significant contribution to the dayside-to-nightside transport. The downward ozone transport at the ϕ' of the Rossby gyres (-60<ϕ'<0^∘) indicates that this stratospheric dayside-to-nightside circulation drives ozone-rich air into the Rossby gyres and thus leads to ozone maxima on the nightside.
Figure <ref> again shows Ψ'_m,O_3, now separated into 4 ranges of λ'. Each of these λ' ranges corresponds to a distinct feature of the ozone distribution in Figure <ref>a. Figure <ref>a shows the λ'-range that contains the low-λ' gyre (λ'>320^∘ and λ'≤70^∘), and we can identify the dayside-to-nightside transport of ozone-rich air, followed by descending motion at ϕ' corresponding to the location of the Rossby gyres. The ozone is supplied from part of its production region (see Figure <ref>) between pressures of 0.3 hPa and 20 hPa. Figure <ref>b shows the low-λ'-range that does not contain the gyres and instead includes the nightside-to-dayside component of the equatorial jet. Ψ'_m,O_3 shows that there is a stratospheric clockwise circulation, but that this is separated from the lower parts of the atmosphere by an anticlockwise circulation at the ϕ' corresponding to the Rossby gyres and misses part of the ozone production regions between 10 and 100 hPa. Therefore, for 70<λ'≤110^∘, no ozone accumulation is found following the stratospheric overturning circulation. Figure <ref>c again indicates dayside-to-nightside transport of ozone-rich air, with ozone for the mid-λ' gyre (110<λ'≤220^∘) being supplied from the ozone production regions between pressures of 0.3 hPa and 15 hPa. Lastly, Figure <ref>d shows that in the final non-gyre range (220<λ'≤320^∘) there is a stratospheric overturning circulation transporting ozone-rich air, but this circulation misses part of the ozone production region between 0.3 and 10 hPa and is generally weaker than for the ranges containing the gyres. Furthermore, the air that descends below ∼10 hPa will meet the equatorial jet, leading to chemical destruction of ozone (due to HO_x-rich air from the dayside) or advection back to the dayside followed by photochemical destruction. Therefore, this λ'-range is not accumulating ozone in the lower part of the atmosphere.
Our interpretation of the atmospheric dynamics is supported by an age-of-air tracer experiment. In Figure <ref>, we show the zonally-averaged time evolution of the age-of-air-tracer during the model spin-up period. As a passive tracer, it is only affected by dynamical processes in the UM, including both advection and convection. The age-of-air tracer is initialised at 0 s and provides a measure of the amount of time that has passed since an air parcel was last found in the lowest layers of the atmosphere (below ∼2 km or 700 hPa). As such, the tracer measures the time it takes a parcel to rise from these lowest layers into the stratosphere. The tracer values are reset to 0 in the lowest layers at every model timestep. With the evolution of the age-of-air tracer over ϕ' in Figure <ref> we show that air rises over and around the substellar point, already providing much younger air to the dayside troposphere (<15 km) after 10 days of simulation. After 100 days, we find that most of the troposphere has been replenished with much younger air, except for the nightside gyres between -60^∘<ϕ'<0^∘. This picture persists after 500 days, showing that the age-of-air tracer in the nightside gyres is fed by older air from the stratosphere.
To further diagnose the nightside descent of ozone molecules indicated by the streamfunctions, we can define the vertical flux of ozone across pressure or altitude levels as:
F_O_3 = ∫^P_min_P_max (w·n_O_3) dP,
where w is the vertical wind velocity (m s^-1) and n_O_3 the ozone number density in molecules m^-3. Negative values correspond to downward transport and positive values to upward transport of ozone. The integration between pressure levels P_max and P_min is done to determine the total flux exchange between the stratosphere and troposphere. Using the streamfunctions in Figure <ref> and the ozone distribution in Figure <ref>b, we determine that downward transport between ∼200 and 8 hPa drives the ozone accumulation. Figure <ref> shows the vertical flux of ozone, integrated over pressures between 190 and 8 hPa. Generally, we find a relatively small but hemisphere-wide upward flux on the dayside. The nightside gyre locations stand out with a relatively strong downward flux. Hence, the ozone that was produced in the stratosphere will be transported downward into the troposphere at the gyre locations. Combining the streamfunctions, the tracer experiment and the vertical ozone flux, we find that the stratospheric overturning circulation provides a connection between the ozone production regions and the nightside gyres, leading to the accumulation of ozone in the latter. To the authors' knowledge, this is the first time this connection has been reported.
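As a minimal sketch (with assumed array names and shapes), the integral above can be evaluated per model column as follows, using the 190–8 hPa integration range quoted above.

```python
import numpy as np

def vertical_ozone_flux(w, n_o3, p_pa, p_bottom=190e2, p_top=8e2):
    """Integrate w * n_O3 over pressure levels between p_bottom and p_top (Pa)
    for every column; w and n_o3 have shape (nlev, nlat, nlon) and p_pa is the
    1-D pressure axis. Negative values indicate downward ozone transport."""
    sel = (p_pa <= p_bottom) & (p_pa >= p_top)
    return np.trapz(w[sel] * n_o3[sel], p_pa[sel], axis=0)
```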
§.§ Dynamical and chemical timescales
In assessing the impact of atmospheric dynamics on chemical abundances, it is important to make a comparison between the timescales of processes that can control the ozone abundance. The dynamical lifetimes include the zonal (τ_u), meridional (τ_v), and vertical components (τ_w), and are calculated following <cit.>:
τ_u = L/u = 2π R_p/u,
τ_v = L/v = π R_p/v,
τ_w = H/w,
with L the relevant horizontal scale in terms of the planetary radius R_p, and H the vertical scale height. The zonal (u), meridional (v), and vertical (w) wind components are all in m/s. For the chemical lifetimes we use:
τ_chem = n_O_3/R_x,
where n_O_3 denotes the ozone number density (molecules m^-3) and R_x the loss of ozone (in molecules m^-3 s^-1) due to reaction x. Specifically, we use the termination reaction of the Chapman mechanism <cit.>:
O_3 + O(^3P) -> O_2 + O_2, (R1)
and the rate-limiting step of the dominant HO_x catalytic cycle <cit.>:
HO_2 + O_3 -> OH + 2O_2. (R2)
A detailed overview of the chemical reactions can be found in <cit.>. We calculate the lifetimes for sets of gridpoints centred at four distinct locations in the ozone distribution (see Figure <ref>), and subsequently take the meridional and zonal mean. These locations cover the substellar point (10 latitudes × 8 longitudes = 80 grid points), the nightside jet (10×7=70 points), and the two nightside gyres with 5×7=35 points each.
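A short Python sketch of these lifetime calculations is given below; the planetary radius is an assumed value and the wind speeds are toy numbers for illustration only.

```python
import numpy as np

R_P = 7.16e6   # m, assumed planetary radius

def dynamical_lifetimes(u, v, w, scale_height):
    """Zonal, meridional and vertical transport timescales (s)."""
    tau_u = 2.0 * np.pi * R_P / np.abs(u)
    tau_v = np.pi * R_P / np.abs(v)
    tau_w = scale_height / np.abs(w)
    return tau_u, tau_v, tau_w

def chemical_lifetime(n_o3, loss_rate):
    """tau_chem = n_O3 / R_x, with n_O3 in molecules m^-3 and the loss rate
    of reaction R1 or R2 in molecules m^-3 s^-1."""
    return n_o3 / loss_rate

# toy numbers: u = 20 m/s, v = 5 m/s, w = 1 cm/s, H = 8 km
tau_u, tau_v, tau_w = dynamical_lifetimes(20.0, 5.0, 0.01, 8.0e3)
print(tau_u / 86400.0, tau_w / 86400.0)   # timescales in days
```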
Figure <ref> shows the different lifetimes at each of the four locations. From Figure <ref>a we conclude that the dynamical lifetimes are shorter than the chemical lifetimes at all four locations, indicating that dynamics can be an important driver of disequilibrium abundances in this pressure range. In Figure <ref>b we highlight the differences between τ_u and τ_w, for the troposphere (<100 hPa) and lower stratosphere (between 100 hPa and 10 hPa), by using the fraction τ_u/τ_w. Vertical transport is the dominant process for τ_u/τ_w>1 (right of the vertical line) and horizontal transport for τ_u/τ_w<1 (left of the vertical line). Around the substellar point (solid lines), we determine that vertical mixing dominates the troposphere (τ_u/τ_w>1) and that zonal mixing (τ_u) starts to take over at P>80 hPa. Above this pressure, chemical abundances at the substellar point can be spread out zonally towards the nightside, connecting with the ozone-producing region that is part of the overturning circulation from Section <ref>. At the nightside location of the jet, τ_u/τ_w<1, and the zonal wind is capable of homogenising any vertically-driven disequilibrium. The circumnavigating jet then leads to the relatively thin ozone column for 70^∘<λ'<110^∘ and 220^∘<λ'<320^∘ in Figure <ref> (across all ϕ'). At the locations of the nightside gyres, Figure <ref>b shows that τ_u and τ_w are intermittently the smallest, indicating that both vertical and zonal mixing can drive disequilibrium abundances. However, as mentioned in Section <ref>, the edges of the gyres act as mixing barriers. Hence, the zonal transport leads to homogenisation within the gyres. Vertical mixing that is part of the overturning dayside-to-nightside circulation is dominant between ∼200 and 50 hPa at the gyre locations. This vertical mixing drives the observed disequilibrium abundances of tropospheric ozone at the gyre locations, and thus the maximum ozone columns in Figure <ref>a.
§ DISCUSSION
In this section, we start by describing the driving mechanism for the overturning circulation. We then show its impact on other long-lived tracers and discuss relevant temporal variability in the atmospheres of synchronously rotating exoplanets. Lastly, we produce synthetic emission spectra to investigate the observational impact of circulation-driven ozone chemistry.
§.§ Driving mechanism of the overturning circulation
The tropospheric overturning circulation for moist, rocky exoplanets in a synchronous orbit is driven by the absorption of incoming stellar radiation and latent heat release on the dayside, and longwave radiative cooling on the nightside <cit.>. <cit.> study dry, rocky planets rotating synchronously around an M-dwarf star and find that the overturning circulation is indirectly driven by the stellar radiation, in the form of nightside cooling by CO_2. They find that an overturning circulation forms in a N_2-CO_2 atmosphere, but not in a pure N_2 atmosphere <cit.>. Prescribed CO_2 distributions from <cit.> show that shortwave (SW) absorption on the planetary dayside only has a limited impact on the overturning circulation. CO_2 can cool an atmosphere when it is found in layers exhibiting a temperature inversion <cit.>. Enhanced infrared emission from increasing CO_2 levels cools the Earth's stratosphere <cit.>. On synchronously rotating planets, this can induce a downward motion on the nightside that subsequently drives dayside-to-nightside overturning circulation.
Since we focus on the stratosphere, which is relatively dry even for a moist climate of a rocky exoplanet in a synchronous orbit, we can build upon these results in identifying the driving mechanism. The SW atmospheric heating rates in Figure <ref>a show that CO_2 (the green line) acts as an important SW absorber on the dayside. The main absorber in the troposphere is H_2O, whereas CO_2 starts to become dominant above ∼170 hPa. In line with <cit.>, we find that heating due to SW absorption by CO_2 plays a minor role in the troposphere. However, in the stratosphere CO_2 absorption can become important because peak emissions from M-dwarfs are emitted at near-infrared (NIR) wavelengths, relatively long as compared to other stars. CO_2 (and H_2O) have strong NIR absorption bands <cit.>, which explains why CO_2 is the dominant absorbing species above ∼170 hPa, in contrast to ozone in the Earth's stratosphere. As expected, the total dayside heating rates (solid black line) greatly exceed the nightside values (dashed line), forming a direct driver for the overturning circulation. Additionally, Figure <ref>b shows the longwave (LW) heating rates, with negative values indicating cooling of the atmosphere. The black lines show stronger LW cooling on the nightside as compared to the dayside. Again, CO_2 is mainly responsible for these cooling rates, due to its presence in temperature inversion layers at ∼100 and ∼1 hPa. This radiative cooling on the nightside drives a large-scale downwelling which, together with SW heating on the dayside, supports the stratospheric overturning circulation <cit.>, and can explain the ozone maxima at the locations of the nightside gyres. The atmospheric pressure within the gyre is relatively low, analogous to the eye of tropical cyclones <cit.>. Such a pressure gradient naturally induces downward transport at the gyre locations. An important follow-up to this study is to investigate the ozone distribution for a variety of rotation states <cit.> in light of the circulation-driven chemistry proposed here.
§.§ Long-lived atmospheric tracers
The impact of the overturning circulation goes beyond the spatial distribution of ozone, as is also evident from the distribution of the age-of-air tracer as shown in Figure <ref>. Any tracer, gaseous or non-gaseous phase, can continue to circulate as long as its chemical lifetime is much longer than the dynamical timescales. Hence, the overturning circulation is relevant for any so-called long-lived atmospheric tracer. To illustrate this, we performed similar analyses using the species-weighted streamfunction as defined in Section <ref> on the distributions of nitric acid (HNO_3) and dinitrogen pentoxide (N_2O_5). Both of these species are signatures of lightning-induced chemistry in our simulations <cit.>. They are non-radical species with relatively long chemical lifetimes, mainly in the form of photolysis and wet deposition (rainout). In the dayside troposphere, the lifetimes against wet deposition are ∼10^-2-10^2 yr, while higher up in the atmosphere the lifetimes against photolysis are ∼10-10^2 yr. On the nightside, these loss processes are absent and thus their chemical lifetimes approach infinity. We calculate Ψ'_HNO_3 and Ψ'_N_2O_5 similar to Equation <ref>, and calculate the mean of each of the species-weighted streamfunctions over the troposphere (>10^2 hPa) and mid-to-lower stratosphere (1<P<10^2 hPa). The results are shown in Table <ref>.
The circulation cells weighted by HNO_3 and N_2O_5 are strongest in the troposphere, at ∼0.95 and ∼0.04 kg s^-1, respectively, because of the strong overturning circulation here (see Figure <ref>b). The troposphere is also the region where lightning flashes are predicted to occur and thus where HNO_3, N_2O_5, and their precursors are produced <cit.>. The differences of factors 10^6 and 10^7 with respect to the ozone-weighted streamfunction in Table <ref> are a consequence of the much lower predicted abundances of HNO_3 and N_2O_5. Moving up to the stratosphere, we find that the ozone-weighted streamfunction is similar to the streamfunction in the troposphere, providing the connection to the nightside gyres. For HNO_3 and N_2O_5, the streamfunction is ∼30 and ∼150 times lower in the stratosphere, due to low levels of stratospheric HNO_3 and N_2O_5 given the absence of lightning-induced chemistry at those pressure levels. Because of the lack of stratospheric HNO_3 and N_2O_5, the overturning circulation will not be able to accumulate these species at the locations of the nightside gyres (as is evident in the spatial distribution in Figure 10 of <cit.>).
In the presence of stellar flares, <cit.> show that the gyres are depleted in ozone (see their Figure 12). This can also be explained by the stratospheric overturning circulation, since flare-induced chemistry will result in a large amount of nitric oxide (NO) and nitrogen dioxide (NO_2) (together known as the NO_x chemical family) at stratospheric levels <cit.>. This NO_x can follow the stratospheric overturning circulation from the dayside to the nightside. Once on the nightside, it can be transported downward at the location of the gyres and locally deplete the ozone through the NO_x catalytic cycle <cit.>, given that flares produce sufficient NO_x.
The impact of the overturning circulation on the distribution of ozone has analogies with studies that simulate tracers in the atmospheres of synchronously rotating hot Jupiters. <cit.> identified dynamical mixing in hot Jupiter atmospheres as a process leading to cold trapping of condensible species on the planetary nightside. Their experiments involve gravitational settling as a source of these condensed particles, which leads to a gradient of tracer abundance, with fewer particles as we move up through the atmosphere. Upward mixing induced by the large-scale dynamics balances the settling of these particles, preventing the complete depletion of particles and inducing a strong spatial variation in the tracer abundances. The extent of the mechanism depends on the strength of frictional drag <cit.>. The mechanism does not require convection but follows the large-scale atmospheric motions that are ultimately driven by the dayside-nightside heating contrast <cit.>, as is the case for the circulation-driven ozone distribution discussed here. Another example of a long-lived tracer is photochemical haze, which is also expected to form at stratospheric altitudes <cit.> and, for synchronously rotating exoplanets, only on the dayside of a planet <cit.>. <cit.> show that the 3-D distribution of small photochemical hazes (≤10 nm) in hot Jupiter atmospheres is also driven by dynamical mixing. The highest tracer abundances are found above the production peak, indicating upwelling on the dayside. Then a divergent flow leads to transport towards the poles and the nightside. On the nightside, the haze particles are then advected downward and get trapped in the mid-latitude gyres <cit.>. These dynamically-induced asymmetries can produce distinctions between a planet's terminator regions, as shown for hot Jupiters <cit.>. Following up on the results presented here, we will investigate the potential terminator variability of the circulation-driven ozone distribution and its observability.
§.§ Time variability
Besides spatial variability in tracer distributions, simulations of synchronously rotating exoplanets exhibit several modes of temporal variability. The formation of the Rossby gyres is due to the thermal forcing asymmetries <cit.>. <cit.> show that these gyres oscillate over longitude λ, with the extent depending on the planet's rotation period and thus dynamical state. Planets with a slower rotation rate have longer oscillation periods, resulting in a 157.5-day oscillation for Proxima Centauri b, which was determined from the temporal evolution of the cloud cover <cit.>.
Since the stellar spectra are constant in time and the planet rotates in a 1:1 resonant orbit without eccentricity and/or obliquity, such variability has to be produced by internal atmospheric variability. <cit.> show that feedback between cloud cover and the incoming stellar radiation can influence the dynamics and drive zonal movement by the gyres, leading to variations in humidity and cloud cover over time. The accumulation of ozone (Figure <ref>) depends on the gyres so we expect there also to be a corresponding variation in atmospheric ozone. To verify this, in Figure <ref> we track the temporal evolution of the tidally-locked coordinates corresponding to the maximum in the ozone layer and the minimum in the vertical flux of ozone (F_O_3, thus corresponding to the strongest downward flux). Figure <ref>a shows ϕ' and Figure <ref>b λ' corresponding to these extrema, and the approximate extents of the gyres are indicated in yellow. The locations of the maximum ozone column and minimum vertical flux are not perfectly aligned, because the maximum ozone column corresponds to a long-term mean location of the gyre and thus depends on vertical fluxes over an extended period of time. The minimum vertical flux represents a snapshot in time and is also impacted by the upward flux from the gyre (see the red regions in Figure <ref>). From Figure <ref>a, we determine that the maximum ozone column is generally found at ϕ' corresponding to the gyre locations, with a small meridional variation over time. The minimum F_O_3 shows more variability in tidally-locked latitude, but the strongest downward flux is generally also located at the gyre locations. In Figure <ref>b, we see the variations in the tidally-locked longitude λ' over time. The low-λ' gyre typically hosts the maximum ozone column, but there are periods when the mid-λ' gyre hosts the maximum in the ozone column. The variations in the minimum F_O_3 broadly align with the maximum in the ozone column, following the gyre position that has the maximum ozone column at that time. The location of minimum F_O_3 shows more variability due to its instantaneous nature.
We translate the temporal variability into simulated observables using the Planetary Spectrum Generator <cit.>. To simulate an emission spectrum that includes half the planetary dayside and half the nightside, we extract the atmospheric pressure and temperature and mixing ratios of relevant chemical species (N_2, O_2, CO_2, H_2O, O_3, N_2O, HNO_3 and N_2O_5) for these locations, take the zonal and meridional averages and compute radiative transfer with PSG. In Figure <ref> we show the resulting planet-to-star contrast for the JWST-MIRI wavelength range, along with a zoom-in that focuses on the 9.6 μm ozone feature. Using extrema in the gyre positions over time from Figure <ref>, we simulate the emission spectra of Proxima Centauri b for different 6-day intervals and indicate the maximum day in the legend of Figure <ref>. We find variations around the ozone features at 9.6 μm and between 14 and 16 μm that are due to absorption by CO_2, H_2O, and ozone. Hence, the region around 9.6 μm is the place to look for ozone variability. Focusing on the region around 9.6 μm shows that the maximum temporal variations are about 0.5 ppm. Spectroscopic characterisation of these absorption features to the level needed to identify these temporal variations is challenging, as detecting the features themselves would already require many days of co-added observations <cit.>. However, the recent photometric observations of the thermal emission from TRAPPIST-1 b with JWST indicate the telescope's capacity to observe favourable terrestrial exoplanets <cit.>. Mission concepts such as the Large Interferometer For Exoplanets <cit.> further utilise the mid-infrared in the characterisation of terrestrial exoplanets and will have to consider the impact of 3-D spatial and temporal variability in atmospheric dynamics and chemistry.
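PSG performs full radiative transfer, but the order of magnitude of the thermal planet-to-star contrast can be checked with a crude blackbody estimate, (B_λ(T_p)/B_λ(T_*))(R_p/R_*)^2. The stellar and planetary parameters in the sketch below are rough assumed values, not the inputs used for the PSG runs.

```python
import numpy as np

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23

def planck(wl, t):
    """Blackbody spectral radiance B_lambda (W m^-3 sr^-1) at wavelength wl (m)."""
    return (2.0 * H * C**2 / wl**5) / np.expm1(H * C / (wl * KB * t))

# rough assumed parameters for Proxima Centauri and Proxima Centauri b
R_STAR, T_STAR = 0.15 * 6.957e8, 3000.0       # m, K
R_PLANET, T_PLANET = 1.1 * 6.371e6, 230.0     # m, K

wl = np.linspace(5e-6, 20e-6, 500)            # 5-20 micron, MIRI-like range
contrast_ppm = planck(wl, T_PLANET) / planck(wl, T_STAR) * (R_PLANET / R_STAR)**2 * 1e6
print(round(contrast_ppm.max(), 1), "ppm near", round(wl[contrast_ppm.argmax()] * 1e6, 1), "micron")
```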
The hot Jupiter simulations of passive tracers by <cit.> also exhibit significant temporal variability. Oscillations in the equatorial jet and variations in the dayside-to-nightside flow produce large local variations, which could again impact the spectroscopic observations of the planets, both when conducting extended observations and when observing the same object at two different points in time.
Another mode of variability in the atmospheres of exoplanets in synchronous orbits around M-dwarfs is the Longitudinally Asymmetric Stratospheric wind Oscillation <cit.>. Since this entails a stratospheric turnover of wind directions, it could be relevant for stratospheric ozone. Analysing ozone mixing ratios over time, we find variations in the ozone mixing ratios above ∼30 km (or ∼3.5 hPa) as a consequence of the LASO. However, these variations occur higher up in the atmosphere than the overturning circulation that feeds the gyres and thus do not affect the gyre abundances significantly. The variations are interesting from an observational perspective, which we plan to explore as part of an in-depth investigation of the observability of the circulation-driven ozone distribution.
§ CONCLUSIONS
We use a 3-D CCM (UM-UKCA) to study the spatial structure of the ozone layer on an exoplanet rotating in a 1:1 spin-orbit resonance around an M-dwarf star, using the parameters corresponding to Proxima Centauri b. Our results are relevant for similar M-dwarf orbiting planets, specifically for slowly rotating planets with a strong overturning circulation and a single equatorial jet in the troposphere. We investigate the spatial variability in the ozone layer and specifically the accumulation in two nightside ozone maxima, in the form of maximum ozone columns at the locations of the permanent Rossby gyres. Our work builds upon previous studies that have shown that M-dwarf radiation supports the emergence of a global ozone layer.
We show that stratospheric dayside-to-nightside circulation and downward motion over low-pressure nightside gyres can explain the spatial variability in ozone. The photochemistry required to initiate the Chapman mechanism of ozone formation is limited to the dayside hemisphere, with an absence of ozone production on the nightside. We find a connection between the ozone production regions on the dayside and the nightside hemisphere, using the transformation to the tidally-locked coordinate system. Meridional streamfunctions that we calculate from the divergent wind component illustrate the existence of a stratospheric dayside-to-nightside overturning circulation. This circulation consists of a single circulation cell characterized by upwelling motion in the ozone production regions, followed by stratospheric dayside-to-nightside transport and downwelling motions at the locations of the nightside gyres. The downwelling motion produces a flux of ozone from the stratosphere into the troposphere, leading to well-defined maxima in the ozone distribution. The circulation-driven ozone chemistry impacts spectroscopic observations, although the impact of temporal variability is limited to sub-ppm levels in emission spectra.
By investigating the impact of the stratospheric overturning circulation on lightning-induced chemical species (also limited to dayside production, but solely in the troposphere), we can explain why these species do not show a similar accumulation in the nightside gyres. The stratospheric overturning circulation also affects other tracer species, including gaseous chemical tracers and particulate components of photochemical haze, with the only requirement that the dynamical lifetimes are sufficiently short compared to chemical timescales.
We identify hemispheric contrasts in atmospheric heating and cooling rates as the driver for the overturning circulation. Dayside heating can directly drive the overturning circulation, and nightside cooling provides an indirect component by inducing local downward motion. The relatively low atmospheric pressure over the nightside gyres further induces downward motion here. Since the stratosphere is relatively dry, CO_2 absorption is the main contributor to these heating and cooling rates. Ozone absorption also contributes to the rates, but its contribution is weaker than CO_2 since M-dwarf fluxes peak close to absorption bands of CO_2.
For the first time, we find a connection between the ozone-producing dayside of synchronously rotating planets and the simulated ozone maxima on the nightside, covering hemispheric scales and multiple vertical levels in the stratosphere and troposphere. The role of the stratospheric dayside-to-nightside circulation in driving the ozone distribution around the planet illustrates the necessity of 3-D models to capture atmospheric processes correctly. Any robust interpretation of spectroscopic observations will need to account for the spatial and temporal variability of chemical species due to such circulation-driven chemistry.
§ ACKNOWLEDGEMENTS
We are very grateful to Denis Sergeev for his contribution to the coordinate transformations and valuable feedback on the manuscript. MB kindly thanks Ludmila Carone for discussing circulation regimes on synchronously rotating exoplanets.
MB, PIP and LD are part of the CHAMELEON MC ITN EJD which received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement no. 860470. PIP acknowledges funding from the STFC consolidator grant #ST/V000594/1. LD acknowledges support from the KU Leuven IDN grant IDN/19/028 and from the FWO research grant G086217N. MC acknowledges the funding and support provided by the Edinburgh Earth, Ecology, and Environmental Doctoral Training Partnership and the Natural Environment Research Council [grant No. NE/S007407/1]. NM was supported by a UKRI Future Leaders Fellowship [grant number MR/T040866/1], a Science and Technology Facilities Council Consolidated Grant [ST/R000395/1] and the Leverhulme Trust through a research project grant [RPG-2020-82].
We gratefully acknowledge the use of the MONSooN2 system, a collaborative facility supplied under the Joint Weather and Climate Research Programme, a strategic partnership between the Met Office and the Natural Environment Research Council. Our research was performed as part of the project space ‘Using UKCA to investigate atmospheric composition on extra-solar planets (ExoChem)'. For the purpose of open access, the authors have applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising from this submission.
§ DATA AVAILABILITY
All the CCM data was generated using the Met Office Unified Model and UK Chemistry and Aerosol model (https://www.ukca.ac.uk/https://www.ukca.ac.uk/), which are available for use under licence; see http://www.metoffice.gov.uk/research/modelling-systems/unified-modelhttp://www.metoffice.gov.uk/research/modelling-systems/unified-model. The data underlying this article will be shared on reasonable request to the corresponding author, mainly motivated by the size of the data.
We used the iris <cit.> and aeolus <cit.> python packages for the post-processing of model output. Scripts to process and visualize the data are available on github: https://github.com/marrickb/o3circ_codehttps://github.com/marrickb/o3circ_code.
|
http://arxiv.org/abs/2306.06865v1
|
20230612044601
|
Deep denoising autoencoder-based non-invasive blood flow detection for arteriovenous fistula
|
[
"Li-Chin Chen",
"Yi-Heng Lin",
"Li-Ning Peng",
"Feng-Ming Wang",
"Yu-Hsin Chen",
"Po-Hsun Huang",
"Shang-Feng Yang",
"Yu Tsao"
] |
cs.LG
|
[
"cs.LG",
"cs.AI",
"eess.SP"
] |
Deep denoising autoencoder-based non-invasive blood flow detection for arteriovenous fistula
Li-Chin Chen, Yi-Heng Lin, Li-Ning Peng, Feng-Ming Wang, Yu-Hsin Chen, Po-Hsun Huang, Shang-Feng Yang, Yu Tsao, Senior Member, IEEE
This work was supported by the Cheng Hsin General Hospital under Grant CY11018. Shang-Feng Yang and Po-Hsun Huang contributed equally.
Li-Chin Chen is with the Research Center for Information Technology Innovation, Academia Sinica, Taipei, Taiwan ([email protected]).
Yi-Heng Lin is with the Department of Electrical Engineering, National Taiwan University, Taipei, Taiwan, and the Research Center for Information Technology Innovation, Academia Sinica, Taipei, Taiwan ([email protected]).
Li-Ning Peng is with the Center for Geriatrics and Gerontology, Taipei Veterans General Hospital, Taipei, Taiwan, and the Center for Healthy Longevity and Aging Sciences, National Yang Ming Chiao Tung University, Taipei, Taiwan ([email protected]).
Feng-Ming Wang is with the Dean of Kai-Yan Clinic, Taipei, Taiwan ([email protected]).
Yu-Hsin Chen is with the School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan ([email protected]).
Po-Hsun Huang is with the Institute of Clinical Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan, the Division of Cardiology, Department of Medicine, Taipei Veterans
General Hospital, Taipei, Taiwan, and the Cardiovascular Research Center, Taipei Veterans General Hospital, Taipei, Taiwan ([email protected]).
Shang-Feng Yang is with the Division of Nephrology, Department of Medicine, Cheng Hsin General Hospital, Taipei, Taiwan, the Departmemt of Clinical Pathology, Cheng Hsin General Hospital, Taipei, Taiwan, and the Institute of Clinical Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan ([email protected]).
Yu Tsao is with the Research Center for Information Technology Innovation, Academia Sinica, Taipei, Taiwan ([email protected]).
July 31, 2023
===============
Clinical guidelines underscore the importance of regularly monitoring and surveilling arteriovenous fistula (AVF) access in hemodialysis patients to promptly detect any dysfunction. Although phono-angiography/sound analysis overcomes the limitations of standardized AVF stenosis diagnosis tools, prior studies have depended on conventional feature extraction methods, which are susceptible to non-stationarity, incapable of capturing individual patient characteristics, and unable to account for variations based on the severity and positioning of stenosis, thereby restricting their applicability in diverse contexts. In contrast, representation learning captures fundamental underlying factors that can be readily transferred across different contexts. We propose an approach based on deep denoising autoencoders (DAEs) that perform dimensionality reduction and reconstruction tasks using the waveform obtained through a one-level discrete wavelet transform, utilizing representation learning. Our results demonstrate that the latent representation generated by the DAE performs strongly, reaching an accuracy of 0.93. The incorporation of noise-mixing and the utilization of a noise-to-clean scheme effectively enhance the discriminative capabilities of the latent representation. Moreover, when employed to identify patient-specific characteristics, the latent representation exhibited strong performance, surpassing an accuracy of 0.92. Appropriate lightweight methods can restore the detection performance of heavily dimensionality-reduced representations and enable operation on devices with limited computational resources.
Our findings suggest that representation learning is a more feasible approach for extracting auscultation features in AVF, leading to improved generalization and applicability across multiple tasks. The manipulation of latent representations holds immense potential for future advancements. Further investigations in this area are promising and warrant continued exploration.
Arteriovenous fistula, deep denoising autoencoder, latent representation, pretrain model, representation learning, vascular access surveillance.
§ INTRODUCTION
To ensure optimal dialysis treatment, individuals undergoing hemodialysis (HD) require adequate vascular access that remains stable over time. The arteriovenous fistula (AVF), an anastomosis expertly crafted between arteries and veins, emerges as the preferred choice due to its diminished morbidity rate and prolonged patency <cit.>. Nevertheless, the occurrence of AVF stenosis resulting from neointimal hyperplasia and the subsequent reduction in blood flow can culminate in vascular thrombosis and AVF failure <cit.>. Numerous investigations have reported AVF patency rates ranging between 50% and 80% at the conclusion of the first year following creation, with figures declining to between 20% and 60% at the conclusion of the second year <cit.>. Consequently, the preservation of a functional AVF remains a challenge for HD patients.
While angiography stands as the definitive method for diagnosing AVF stenosis, it is burdened by invasiveness, high costs, protracted procedures, and associated side effects <cit.>. On the other hand, alternative non-invasive approaches, such as color-duplex ultrasound and physical examination (PE), present themselves as viable options. However, color-duplex ultrasound necessitates the availability of appropriate equipment and proficient personnel, while PE mandates the expertise of skilled operators who employ visual inspection, palpation, and auscultation <cit.>. It is worth noting that PE can be susceptible to operator-dependent variations, leading to mixed outcomes in terms of accuracy in detecting and localizing AVF stenosis <cit.>.
The guidelines established by the Kidney Disease Outcomes Quality Initiative (KDOQI) emphasize the importance of regular monitoring and surveillance of vascular access to enable the timely detection of dysfunction <cit.>. Consequently, there is a need for a straightforward, cost-effective approach that minimizes reliance on specific devices and personnel. This would facilitate the seamless implementation of routine auscultation for AVFs.
Phono-angiography/sound analysis, being a non-invasive method, requires portable and cost-effective equipment, which has garnered considerable attention in the development of diagnostic tools for stenosis and thrombosis surveillance. The audible sound generated by turbulent blood flow and vessel vibrations can be analyzed to indicate the state of the fistula <cit.>. Numerous studies have explored diagnostic tools for stenosis surveillance based on distinctive acoustic characteristics <cit.>. However, the extraction and transformation of feature-specific information may suffer from limited generalizability, limiting the applicability of the results in different contexts.
In contrast, representation learning offers a technique wherein a latent, low-dimensional code embedding is learned, capturing the posterior distribution of the underlying factors that explain the observed input. This code can be easily transferred to construct a classifier for other tasks <cit.>. The fundamental idea of this study is to develop an end-to-end, non-invasive technique for detecting AVF blood flow, utilizing representation learning. Such an assistive tool simplifies and standardizes auscultation, rendering it feasible for nephrologists, nurses, and even patients themselves. Additionally, it enables continuous monitoring of the progression of arteriovenous vessels in a non-invasive manner.
§.§ Blood flow of arteriovenous fistula
In a mature AVF, the blood flow typically ranges from 600 to 1200 ml/min <cit.>. Both low and high volumes can lead to undesirable outcomes. Studies have proposed active surveillance and preemptive repair of subclinical stenosis when the blood flow falls below 750 ml/min, aiming to reduce thrombosis rates, costs, and prolong the functional lifespan of AVFs <cit.>. Conversely, blood flow exceeding 1500 ml/min has been associated with an increased risk of distal ischemia, known as steal syndrome <cit.>. This phenomenon affects approximately 1-20% of HD patients with upper-arm AVFs and is characterized by digital coolness, pallor, mild paresthesia, and, in severe cases, tissue necrosis <cit.>. Regular monitoring of blood flow enables the early detection of AVF stenosis, which plays a crucial role in salvaging access function <cit.>.
From an acoustical standpoint, a mature AVF exhibits a low-pitched continuous bruit that can be perceived throughout both systole and diastole, with heightened intensity near the arterial anastomosis. These bruits are the audible sounds originating from the fistula, which can be discerned through a stethoscope <cit.>. Conversely, in the presence of a stenosis, a high-pitched systolic bruit manifests distal to the stenosis, followed by a normal bruit proximally <cit.>.
§.§ Signal characters and feature extractions for AVF
Previous research has revealed significant variations in the acoustical characteristics of AVFs. Among the most frequently discussed acoustical features of AVFs is the pitch resulting from the bruit within the vessel. Some studies <cit.> suggest that a higher degree of stenosis is indicated by a high-pitched bruit. Additionally, certain research indicates that a higher velocity of blood flow corresponds to a higher frequency <cit.>. Other studies identify specific frequencies for stenosis detection, such as frequencies above 200-300 Hz <cit.> or around 700-800 Hz <cit.>. There are arguments proposing the combination of amplitude and frequency information for more comprehensive analysis <cit.>, as well as the simultaneous consideration of time and frequency domain information <cit.>. Conversely, some researchers argue that frequency analysis should differentiate between the systolic and diastolic phases <cit.>.
These varying findings align with the fact that AVF auscultations are subjective, dependent on staff expertise, subject to non-stationarity, specific to individual patient characteristics, and differ based on the severity levels and positioning of stenosis <cit.>.
§.§.§ Various feature extraction transformations
In the evaluation of AVFs, several feature extraction transformations have been employed. These include the fast Fourier transform (FFT) <cit.>, short-time Fourier transform (STFT) <cit.>, wavelet transform (WT) <cit.>, Mel spectrograms <cit.>, and intrinsic mode functions (IMF) <cit.>. Some studies propose combining multiple coefficients, such as incorporating the ratio of frequency power, Mel-frequency cepstral coefficients (MFCC), and normalized cross-correlation coefficient <cit.>, or combining power spectral density (PSD) and wavelet decomposition <cit.>, or utilizing the mean and variation of the center of frequency and energy ratio within a defined frequency band <cit.>. Wang et al. <cit.> further introduced the S-transform, which preserves information from blood flow sounds in both the time and frequency domains simultaneously.
In line with the differences between the systolic and diastolic phases, heartbeat peaks and periods have also been detected <cit.>, proving to be distinguishable when multiple frequency filtering techniques are applied <cit.>. The variations observed among current studies highlight the absence of a consensus regarding an optimal feature extraction transformation.
§.§.§ Diverse classification labels
Another factor contributing to the difficulty in comparing related works is the variability in the AVF vascular access indicators targeted by each study. Some studies classified the fistula into six staff-defined conditions, ranging from the best to the worst condition <cit.>, while others categorized the sounds into five types, including normal, hard, high, intermittent, and whistling <cit.>. Alternatively, some studies employed a binary classification to denote stenosis above or below 50% <cit.>. Other classifications were based on indicators such as a resistance index (RI) above or below 60%, which indicates the difficulty of blood flow to the distal end <cit.>, or a luminal diameter above or below 50% in the AVF vessel, also referred to as the size or width of the vessel <cit.>.
Furthermore, while guidelines recommend weekly surveillance of the vascular access for early dysfunction detection, most studies focused on stenosis based on significant contrast conditions, such as before and after percutaneous transluminal angioplasty (PTA) <cit.>, or stenosis versus non-stenosis <cit.>. These approaches do not align with the objective of early surveillance.
§.§.§ Varied puncture site measurements
The puncture location of the AVF can be categorized into different sites, such as the site of arteriovenous anastomosis (site 1), arterial puncture site (site 2), venous puncture site (site 3), and so on (as illustrated in Fig. <ref>). The anastomotic site is located near the wrist, distal to the heart, while the proximal end of the AVF is situated proximal to the heart <cit.>. Each site exhibits distinct signal characteristics. Some studies focused on analyzing a single site <cit.>, while others measured multiple sites <cit.>; however, the discussions predominantly revolved around the characteristics of each site individually. The combination or utilization of information from different sites has not been thoroughly explored.
§.§.§ Limited sample size
The evaluation of AVF was typically conducted by healthcare professionals, and often only the assessment results were recorded. The use of stethoscope recordings was not a regular practice. Furthermore, the collection of stethoscope recordings required a quiet room to minimize background noise. The quality of the stethoscope and the recording equipment could also impact data collection and study quality. As a result, the collection of stethoscope recordings during AVF auscultation was typically done in a trial-based scenario, which added to the practical burden and resulted in a limited amount of sampled data. The number of recruited patients in these studies ranged from 5 to 74 <cit.>, which is considered a small sample size for machine learning applications.
To address the aforementioned constraints in current research, this study employs deep neural networks to overcome the challenges with the following design:
* Initially, a model is pretrained for dimensionality reduction and reconstruction. The latent representation learned from this model serves as an efficient set of acoustic features, accommodating the non-stationary and patient-specific characteristics of auscultation recordings. The generalizability of this representation is also assessed.
* Blood flow measurement, a widely recognized indicator for vascular access, exhibits strong predictive power in early detection of dysfunction and AVF complications <cit.>. Hence, it is adopted as the prediction label in this study.
* Information from different puncture sites is analyzed, and the combination of latent information is explored.
* To mitigate the limitation of a small dataset and enhance robust representation learning, a noise-mixing approach is employed to augment the dataset.
* Considering the applicability of the proposed method to lightweight devices with limited computational capabilities, such as stethoscopes or wearable devices, a further dimensionality reduction technique is demonstrated that approximately preserves predictive ability.
§ METHODS
In this study, we propose a representation learning approach based on the architecture of a deep denoising autoencoder (DAE), as depicted in Fig.<ref>. The DAE is trained to perform dimensionality reduction and reconstruction tasks using the waveform of one-level coefficients obtained after applying discrete wavelet transform (DWT). The latent representation generated by the DAE is utilized for phono-angiography analysis in the downstream task.
§.§ Deep autoencoder and deep denoising autoencoder
Deep autoencoder (AE) neural networks <cit.> are feed-forward multi-layer neural networks that aim to reconstruct the input data itself. The AE consists of an encoder and a decoder. The encoder, denoted as f_θ, transforms the input x into a hidden representation y through a deterministic mapping:
f_θ(x) = s(Wx+b),
where s(·) is a non-linear activation function, and θ = {W,b} represents the weights and biases of the encoder. The decoder, denoted as g_θ^', maps the hidden representation y back to a reconstruction z:
z = g_θ^'(y) = s(W^'y+b^'),
where θ^' = {W^',b^'} represents the weights and biases of the decoder. The goal of the autoencoder is to minimize the discrepancy between the original input x and its reconstructed output z <cit.>.
The constraint of reducing dimensionality at the bottleneck layer separates useful information from noise and less informative details. This dimensionality reduction helps remove irrelevant variations and focuses on the essential aspects of the data, enhancing classification performance. By training an AE on a reconstruction task, the bottleneck layer becomes specialized in encoding discriminative features, which can be highly informative for other classification tasks.
The DAE takes the reconstruction task a step further by considering noisy inputs <cit.>. A noise-adding step corrupts the initial input x into a noisy version x̃. The encoder then maps x̃ to a hidden representation:
y = f_θ(x̃) = s(Wx̃+b).
The decoder reconstructs the clean input x from the hidden representation y:
z = g_θ^'(y).
The parameters θ and θ^' are trained to minimize the average reconstruction error over a training set, aiming to make the reconstructed output z as close as possible to the clean input x.
The use of a denoising autoencoder allows the model to learn robust representations that are less sensitive to noise and variations in the input data. It can help to capture essential features while discarding irrelevant details and noise, leading to improved classification performance.
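As a minimal PyTorch sketch of this noisy-to-clean scheme (layer sizes here are illustrative; the architectures actually used are specified in the model design subsection below), the DAE and its training step could look as follows.

```python
import torch
import torch.nn as nn

class DenoisingAE(nn.Module):
    """Minimal fully-connected denoising autoencoder: encode a (noisy) signal
    to a low-dimensional latent code and reconstruct the clean signal."""
    def __init__(self, in_dim=5000, latent_dim=100):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 1000), nn.LeakyReLU(),
                                     nn.Linear(1000, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 1000), nn.LeakyReLU(),
                                     nn.Linear(1000, in_dim))

    def forward(self, x_noisy):
        z = self.encoder(x_noisy)          # latent representation
        return self.decoder(z), z          # reconstruction and latent code

model = DenoisingAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(x_noisy, x_clean):
    """One optimization step: noisy input, clean target (noise-to-clean)."""
    opt.zero_grad()
    x_hat, _ = model(x_noisy)
    loss = loss_fn(x_hat, x_clean)
    loss.backward()
    opt.step()
    return loss.item()
```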
§.§ Experimental design
§.§.§ Patient recruitment
Patients were recruited from the HD unit of Chung-Hsin General Hospital. The inclusion criteria for the study were patients with functioning AVFs who were undergoing regular HD treatment. Exclusion criteria included age younger than 20 years, unwillingness or inability to undergo scheduled exams or follow-up regularly, and inability to provide written informed consent. After enrollment, electronic stethoscopes were used to collect auscultation signals at three sites of a mature AVF: arteriovenous anastomosis (site 1), arterial puncture site (site 2), and venous puncture site (site 3). The relative positions of these sites are shown in Fig. <ref>. AVF blood flows were measured using a Transonic® Flow-QC® Hemodialysis Monitor, which introduced saline into the venous line with the dialysis lines in a normal position. Recirculation was calculated based on the change in blood concentration between the venous sensor and arterial sensor <cit.>. The study was approved by the institutional review board of Chung-Hsin General Hospital (817) 109A-56, and all procedures were conducted in accordance with the principles outlined in the Declaration of Helsinki. Informed written consent was obtained from all participants prior to enrollment.
Since the measured blood flow reflects the vascular access between sites 2 and 3, this research focused on the auscultation recordings from sites 2 and 3. A total of 199 patients were initially recruited, but patients who lacked labels or had incomplete recordings (e.g., lack of recordings from site 2 or 3) were excluded from the analysis. Ultimately, 171 patients were included in the study. The blood flow detection task was designed as a three-class classification, categorizing blood flow into <750 ml/min, 750-1500 ml/min, and >1500 ml/min, representing different clinical requirements for HD patients.
§.§.§ Data preprocessing
The audio files were saved in the waveform audio file format (.wav) and digitized using a 16-bit analog-to-digital converter with a sample rate of 8 kHz. Preprocessing steps were applied to the audio recordings as follows: (1) Blank gaps before and after the auscultation sounds were removed. (2) The amplitude of the recordings was normalized. (3) The middle part of the recordings was segmented to avoid artifacts caused by placing and removing the stethoscope.
To extract pitch-specific acoustic features, the DWT was applied using a biorthogonal wavelet. Three levels of coefficients were obtained after the low-pass filter (LPF) and were denoted as w_L1, w_L2, and w_L3. The waveform, FFT, and STFT were computed for the original signal and the three level coefficients, as shown in Fig. <ref>. The FFT and STFT representations were then normalized using the absolute values and the natural logarithm of one plus the input (log1p) transformation. The log1p transformation ensures that the values are above zero and normalizes potential errors that could be introduced by the distribution between positive and negative values <cit.>.
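A sketch of this preprocessing step with PyWavelets is shown below; the specific biorthogonal wavelet order ('bior3.5') is an assumption, as the exact wavelet is not specified above.

```python
import numpy as np
import pywt

def dwt_approximations(signal, wavelet="bior3.5", levels=3):
    """Return the approximation (low-pass) coefficients w_L1..w_L3 obtained by
    repeatedly applying a one-level DWT to the previous approximation."""
    approximations = []
    current = np.asarray(signal, dtype=float)
    for _ in range(levels):
        current, _detail = pywt.dwt(current, wavelet)   # keep low-pass branch
        approximations.append(current)
    return approximations

def log1p_normalize(spectrum):
    """Normalization used for the FFT/STFT features: magnitude followed by log1p."""
    return np.log1p(np.abs(spectrum))
```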
To overcome the limitation of a small number of samples, the auscultation audio recordings were mixed with seven different types of noise, including five colored noises (white, blue, violet, brown, and pink noises) and two types of background noise with people talking. The noise was added at two different volumes, resulting in a total of 2,394 mixed noisy auscultation recordings for sites 2 and 3, respectively.
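The noise augmentation can be sketched as follows; the spectral-shaping method for generating colored noise and the mixing at a fixed signal-to-noise ratio are assumptions, since only the noise types and the use of two volumes are stated above.

```python
import numpy as np

def colored_noise(n_samples, exponent, rng=None):
    """Generate noise with power spectral density ~ 1/f**exponent
    (0 = white, 1 = pink, 2 = brown; negative exponents give blue/violet)."""
    rng = np.random.default_rng(rng)
    freqs = np.fft.rfftfreq(n_samples, d=1.0)
    spectrum = rng.standard_normal(freqs.size) + 1j * rng.standard_normal(freqs.size)
    scale = np.ones_like(freqs)
    scale[1:] = freqs[1:] ** (-exponent / 2.0)        # shape the amplitude spectrum
    noise = np.fft.irfft(spectrum * scale, n=n_samples)
    return noise / np.max(np.abs(noise))

def mix_at_snr(clean, noise, snr_db):
    """Scale the noise so that the mixture has the requested signal-to-noise ratio."""
    gain = np.sqrt(np.mean(clean ** 2) / (np.mean(noise ** 2) * 10.0 ** (snr_db / 10.0)))
    return clean + gain * noise
```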
§.§.§ Model design
The architecture of the encoder and decoder for one-dimensional signals, i.e., the waveform and FFT, consisted of three fully-connected layers with Leaky Rectified Linear Unit (LeakyReLU) activation functions between them. The output sizes of the encoder layers were set to 5000, 1000, and 100, while the decoder layers had output sizes of 100, 1000, and 5000. For two-dimensional signals such as the STFT, the architecture involved three one-dimensional convolutional (Conv1D) layers and max-pooling layers with Rectified Linear Unit (ReLU) activation functions between them. The encoder layers had filter sizes of 64, 32, and 16, and the decoder layers had filter sizes of 16, 32, and 64. The kernel sizes of the max-pooling layers were set to 2. The model was trained using the Adam optimizer and the mean squared error (MSE) loss function, minimizing the discrepancy between the input and the output. Because the downstream task involved only a small number of samples, the downstream classifier was a Radial Basis Function (RBF)-kernel Support Vector Machine (SVM) <cit.>. The cost parameter (C) was set to 10 for the SVM.
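The following PyTorch sketch illustrates the fully-connected DAE and the downstream SVM. The input segment length and the mirrored decoder layout are assumptions; the layer widths, LeakyReLU activations, Adam/MSE training, and the RBF-SVM with C=10 follow the description above.

# Sketch of the fully-connected DAE for one-dimensional inputs and the SVM
# downstream classifier. IN_DIM is an assumed waveform-segment length.
import torch
import torch.nn as nn
from sklearn.svm import SVC

IN_DIM = 8000  # assumed length of a waveform segment

class DenoisingAE(nn.Module):
    def __init__(self, in_dim=IN_DIM):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 5000), nn.LeakyReLU(),
            nn.Linear(5000, 1000), nn.LeakyReLU(),
            nn.Linear(1000, 100),
        )
        self.decoder = nn.Sequential(
            nn.Linear(100, 1000), nn.LeakyReLU(),
            nn.Linear(1000, 5000), nn.LeakyReLU(),
            nn.Linear(5000, in_dim),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

model = DenoisingAE()
optim = torch.optim.Adam(model.parameters())
loss_fn = nn.MSELoss()

def train_step(noisy, clean):
    """Noisy-to-clean reconstruction (the DAE setting)."""
    optim.zero_grad()
    recon, _ = model(noisy)
    loss = loss_fn(recon, clean)
    loss.backward()
    optim.step()
    return loss.item()

def fit_downstream(latent_train, y_train):
    """Three-class blood-flow classification on the latent codes."""
    clf = SVC(kernel="rbf", C=10)
    return clf.fit(latent_train, y_train)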
§.§.§ Training strategies and classification metrics
The data were initially split into training and testing datasets for the pretraining of the encoder and decoder. The latent representation generated during the testing phase served as the dataset for training the downstream task. This dataset was then further divided into training and testing sets after balanced sampling. All downstream tasks were performed based on balanced sampling. The training process is illustrated in Fig. <ref>.
The quality of the learned latent representation was assessed using discrimination analysis. The autoencoder (AE) was set as the baseline and trained to reconstruct both the original clean signals (clean-to-clean) and the noise-mixed signals (noisy-to-noisy). The comparison between the two reconstructions indicated the effectiveness of enlarging the dataset. Additionally, the DAE was compared as it reconstructed clean audio from noisy audio, demonstrating the effectiveness of the asymmetric input and output approach.
To conduct a comprehensive analysis, various feature extraction methods proposed previously were tested for blood flow detection, including S-transform <cit.>, IMF <cit.>, and Mel spectrogram <cit.>. The latent representation from different combinations of sites 2 and 3 were examined, as well as their individual representations. Furthermore, the generalization ability of the latent representation was assessed by predicting patient-specific information such as gender, hypertension (HTN) diagnosis, and diabetes mellitus (DM) diagnosis.
To achieve further dimensionality reduction on portable devices, the latent representation was condensed using principal component analysis (PCA) to lower dimensionalities. It was then concatenated with demographic information, including gender, age, HTN, and DM, as shown in Fig. <ref>. Categorical variables were encoded using one-hot encoding, and numeric variables were normalized using the log1p function. Three algorithms based on different mechanisms were examined: SVM, k-nearest neighbors (KNN) <cit.>, and Light Gradient Boosting Machine (LightGBM) <cit.>. The number of neighbors in KNN was set to 3; the number of boosted trees in LightGBM was set to 100, with a learning rate of 0.05.
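A sketch of this condensation and fusion step is shown below. The toy inputs and variable names are placeholders, while the hyperparameters (PCA dimensionality, SVM with C=10, KNN with 3 neighbors, LightGBM with 100 trees and learning rate 0.05) follow the text.

# Sketch of condensing the latent representation with PCA and concatenating
# demographic variables before classification. Data here are random
# placeholders for illustration only.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from lightgbm import LGBMClassifier

def build_features(latent, gender, htn, dm, age, n_components=2):
    condensed = PCA(n_components=n_components).fit_transform(latent)
    # One-hot encode the binary categorical variables, log1p-normalise age.
    categorical = np.column_stack([
        gender, 1 - gender, htn, 1 - htn, dm, 1 - dm,
    ])
    return np.column_stack([condensed, categorical, np.log1p(age)])

classifiers = {
    "svm": SVC(kernel="rbf", C=10),
    "knn": KNeighborsClassifier(n_neighbors=3),
    "lgbm": LGBMClassifier(n_estimators=100, learning_rate=0.05),
}

rng = np.random.default_rng(0)
X = build_features(
    latent=rng.normal(size=(171, 100)),
    gender=rng.integers(0, 2, 171), htn=rng.integers(0, 2, 171),
    dm=rng.integers(0, 2, 171), age=rng.integers(40, 90, 171),
)
y = rng.integers(0, 3, 171)  # three blood-flow classes
for name, clf in classifiers.items():
    clf.fit(X, y)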
The reported classification metrics included the area under the receiver operating characteristic (AUROC) curve, accuracy, sensitivity, specificity, precision, and F1 score <cit.>. These metrics were calculated by averaging the performance over ten runs of the training and testing process. Taking the mean performances of the testing samples over multiple runs was considered more representative than a single validation result. Since all the metrics are percentage indicators and higher scores represent better performance, a general average score (Avg.) was calculated to provide an overall measure of performance across all metrics. This helps assess the overall performance of the model.
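The metric computation can be sketched as follows; specificity is derived from the confusion matrix and AUROC uses one-vs-rest averaging over the three classes, while the balanced-sampling and splitting details are simplified here.

# Sketch of the reported metrics for the three-class task, averaged over
# repeated train/test runs.
import numpy as np
from sklearn.metrics import (accuracy_score, confusion_matrix, f1_score,
                             precision_score, recall_score, roc_auc_score)

def specificity(y_true, y_pred, n_classes=3):
    cm = confusion_matrix(y_true, y_pred, labels=range(n_classes))
    per_class = []
    for k in range(n_classes):
        tn = cm.sum() - cm[k, :].sum() - cm[:, k].sum() + cm[k, k]
        fp = cm[:, k].sum() - cm[k, k]
        per_class.append(tn / (tn + fp))
    return float(np.mean(per_class))

def run_metrics(y_true, y_pred, y_prob):
    return {
        "auroc": roc_auc_score(y_true, y_prob, multi_class="ovr"),
        "accuracy": accuracy_score(y_true, y_pred),
        "sensitivity": recall_score(y_true, y_pred, average="macro"),
        "specificity": specificity(y_true, y_pred),
        "precision": precision_score(y_true, y_pred, average="macro"),
        "f1": f1_score(y_true, y_pred, average="macro"),
    }

def average_over_runs(runs):
    """`runs` is a list of (y_true, y_pred, y_prob) tuples from 10 repeats."""
    scores = [run_metrics(*r) for r in runs]
    return {k: float(np.mean([s[k] for s in scores])) for k in scores[0]}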
§ RESULTS
Table <ref> presents the demographic profile of the recruited patients. Most of the recruited patients were male (59.65%), with diagnoses of HTN (72.52%) and DM (64.91%). The average age of the patients was 67 years, and the largest group fell within the range of adequate blood flow (47.95%). Fig. <ref> visualizes an audio sample in its original form, as well as w_L1, w_L2, and w_L3, respectively. Each level progressively filters out more information from the higher frequency band.
By comparing Table <ref>(a) (n = 171) with (b) (n = 2,394), it becomes evident that augmenting the dataset through the noise-mixing approach proves to be efficient and significantly enhances the discrimination capabilities of the latent representation. Furthermore, a comparison between Table <ref>(b) and (c) highlights the impact of an asymmetric input and output. It is noteworthy that the noise-to-clean scheme exhibits the ability to improve performance. Employing a lower level of DWT does not necessarily lead to performance improvement.
The most exceptional latent representation, which serves as the baseline for subsequent comparisons, is generated by the DAE using w_L1, achieving an average score of 0.95. The representations learned from the waveform, FFT, and STFT of w_L1 were compared (presented in Table <ref>). The results indicate that the waveform is the most viable feature for generating a discriminative latent representation for downstream task classification.
Table <ref>(a) presents the outcomes obtained by applying previously proposed methods in our scenario. However, none of these methods exhibited sufficient distinctiveness for AVF blood flow detection. In Table <ref>(b), we showcase the downstream task performance based on the representation of individual sites as well as the subtraction of site 2-3. While each individual site achieved a satisfactory accuracy of 0.90, the subtraction of site 2-3 surpassed other combinations, such as concatenation and addition. Table <ref>(c) provides an overview of the performance when patient-specific information was set as the classification target. All approaches yielded commendable performance, surpassing an average score of 0.93.
Utilizing the original latent representation (dim = 100) as a reference, Table <ref>(d) exhibits the outcomes obtained by reducing the dimensionality to 5, 4, and 2. As the dimensionality decreased, the performance also declined. However, when the condensed representation (dim = 2) is concatenated with the demographic information, the performance can be restored to a level approximating the best-performing version using LightGBM, as illustrated in Table <ref>(e). Fig. <ref> provides a visualization of the feature importance as determined by the tree-based algorithm employed for branching. The results indicate that the model places the highest value on the condensed representation, followed by age, the presence of DM, gender, and HTN diagnosis.
§ DISCUSSION
Prior investigations <cit.> have underscored the perils of training deep neural network models directly on the supervised target through gradient descent, as random initialization may not yield optimal performance. Conversely, commencing the training process with a pre-trained model has demonstrated its efficacy in enhancing generalization. In our study, we employed DAEs to generate a distinctive representation well-suited for detecting blood flow. Our findings indicate that representation learning presents a more viable approach for extracting auscultation features in AVF. Feature extraction methods based on signal characteristics may exhibit high specificity to particular prediction scenarios and lack ease of transferability to other contexts. In contrast, the acquired latent representation showcases improved generalization and applicability in non-extreme contrast scenarios (e.g., stenosis and non-stenosis) as well as other patient-specific characteristics. While our study did not simultaneously collect different access indicators, such as RI or luminal diameter, to validate transferability, our findings suggest that a well-learned representation captures additional patient-specific information that can be transferred to multiple tasks. Furthermore, AVF blood flow serves as a more comprehensive measurement aligning with the need for early surveillance and supporting the detection of stenosis and other dysfunctions <cit.>. Additionally, we have successfully applied the proposed architecture to pathological voice quality detection, yielding satisfactory outcomes <cit.>.
Effective representation learning necessitates a sufficient amount of data. Despite our relatively large sample size of recruited patients, the original clean signal alone was insufficient to generate a well-learned representation. However, augmentation methods, such as the noise-mixing approach, offer a simple and feasible solution to overcome limitations in data size. Forcing the model to reconstruct the clean signal from the noisy signal proves to be an effective approach for generating a more representative representation. Our results demonstrate that the time domain information captured in the waveform is adequate for generating a well-learned representation, whereas the frequency domain information and time-dependent windows converted using FFT and STFT do not appear to be essential. Previous studies have also highlighted the sufficiency of time domain information for turbulent sound analysis <cit.>, while additional information in the FFT window may introduce noise and lead to averaging within <cit.>. The fixed window width of STFT may not be ideal for accurately tracking dynamic signals <cit.>. Moreover, the inclusion of additional, less informative details can impede precise reconstruction, thereby generating a less representative representation. While a one-level DWT discards less informative details, excessive information loss (e.g., w_L3) hampers prediction performance.
The intensity of the bruit is most pronounced near the arterial anastomosis (site 1), followed by the arterial and venous puncture sites (site 2 and 3). Subtracting the latent representation of site 3 from site 2 effectively indicates the blood flow loss in between, thus demonstrating distinguishable results. Previous works <cit.> have demonstrated the possibility of reconstructing diverse images of real subjects through interpolations in latent space, highlighting the feasibility of manipulating latent representations to generate targeted outcomes.
Excessive dimensionality reduction at the bottleneck of AEs may hinder reconstruction performance. Consequently, we opted to condense the dimensionality after generating adequate representations. Our results illustrate that prediction performance can be restored by concatenating a vector of six elements using an appropriate machine learning method. The concatenated vector comprises heterogeneous information, necessitating the identification of a threshold to discriminate between different categories. This task can be effectively handled by tree-based algorithms <cit.>. The condensed representation continues to exhibit greater discriminative power compared to other variables, as indicated by its high value in tree-based algorithms. Furthermore, numeric variables tend to demonstrate higher discriminative capability than categorical variables.
§ CONCLUSION
Our study showcased the effectiveness of representation learning using DAEs for non-invasive AVF blood flow detection. This approach proved to be highly accurate and capable of capturing patient-specific information, enabling its application in various contexts. Furthermore, the learned representations maintained high performance even under highly condensed conditions. The manipulation of latent representations holds great promise for future advancements. Further exploration of the generated latent representation can enhance the development of smart stethoscopes and pave the way for future applications.
(1) Instead of explicitly defined thresholds, we proposed to use MFCCs and multimodal fusion learning, which extract features through representations learned by deep neural networks.
The training dataset was then augmented 25-fold, increasing its size to approximately 6,000 samples. We adopted SpecAugment <cit.> for augmentation, applying frequency- and time-masking policies that randomly mask channels and time steps, and used a five-fold cross-validation strategy.
Our model was designed for binary classification, with the class boundary set to the median blood flow of the recruited data; in our case, above versus below 1,060 ml/min.
We used a convolution-based feature extraction encoder (FEE), which contains two-dimensional convolutional (Conv2D) layers, a rectified linear unit (ReLU) activation function, a max-pooling layer, and a dropout layer. The extracted latent feature (LF) then enters the classification unit (CU) to produce the final prediction. The CU includes a fully connected (FC) layer, a ReLU, a dropout layer, and a softmax layer. We examined the performance of a single site and of combinations of two and three sites. For multi-site combinations, the LFs of two or three sites were concatenated before entering the CU (covering the combinations LF_site1⊕ LF_site2, LF_site2⊕ LF_site3, LF_site1⊕ LF_site3, and LF_site1⊕ LF_site2⊕ LF_site3). Fig. <ref> shows the model design for triple-site fusion. L2 regularization was also adopted, with λ set to 0.005.
|
http://arxiv.org/abs/2306.05737v1
|
20230609080454
|
The broadening of the main sequence in the open cluster M38
|
[
"M. Griggio",
"M. Salaris",
"L. R. Bedin",
"S. Cassisi"
] |
astro-ph.SR
|
[
"astro-ph.SR",
"astro-ph.GA",
"astro-ph.IM"
] |
Our recent multi-band photometric study of the colour width of the lower main sequence of the open cluster M37 has revealed the presence of a sizeable initial chemical composition spread in the cluster.
If initial chemical composition spreads are common amongst open clusters, this would have major implications for cluster formation models and the foundation of the chemical tagging technique.
Here we present a study of the unevolved main sequence of the open cluster M38, employing
Gaia DR3 photometry and astrometry, together with newly acquired Sloan photometry. We have analysed the distribution of the
cluster's lower main sequence stars with a differential colour-colour diagram made of combinations of Gaia and Sloan magnitudes, like
in the study of M37.
We employed synthetic stellar populations to reproduce the observed trend of M38 stars in this diagram, and found that
the observed colour spreads can be explained simply by the combined effect of differential reddening across the face of the cluster and
the presence of unresolved binaries.
There is no need to include in the synthetic sample a spread of initial chemical composition as instead necessary to explain the main sequence of M37.
Further photometric investigations like ours, as well as accurate differential spectroscopic analyses on large samples
of open clusters, are necessary to understand whether chemical abundance spreads are common among the open cluster population.
stars: abundances – open clusters and associations: individual: M38 – binaries: general – techniques: photometric
§ INTRODUCTION
Open clusters have been traditionally considered to host populations of stars born all with the same initial chemical composition in a burst of star formation of negligible duration (simple stellar populations).
The recent discovery of extended turn offs (TOs) in the Gaia colour-magnitude diagrams (CMDs) of a sample of about 15 open clusters with ages in the range ∼ 0.2-1 Gyr and initial metal mass fractions Z between ∼ 0.01 and ∼ 0.03
<cit.> has somehow challenged this paradigm, given that extended TOs can be naturally explained by a range of ages amongst the cluster's stars <cit.>.
Further detailed studies of the extended TO phenomenon, which is seen also in CMDs of Magellanic Clouds' clusters younger than 2 Gyr <cit.>, strongly point to the effect of rotation
<cit.>
as the main culprit <cit.>.
In this case, stellar populations in individual open clusters might still be simple stellar populations, born with uniform age and initial chemical composition.
Very recently, our photometric multi-band study of the main sequence (MS) of the open cluster M37 <cit.> has disclosed the presence of a sizeable initial chemical composition spread in the cluster (either a full metallicity range Δ[Fe/H] ∼ 0.15 dex or a helium mass fraction total range Δ Y ∼ 0.10). This result is independent of whether rotation or age spread is responsible for its observed extended TO, because it is based on an analysis of the lower MS, populated by stars with convective envelopes that are anyway slow rotators.
This result has important implications for our understanding of open cluster formation <cit.> and
the technique of chemical tagging of Galactic field stars <cit.>, especially if high resolution spectroscopic investigations of M37 will disclose that the chemical spread is due to an inhomogeneous initial metal content.
Indeed, the basic idea of chemical tagging is that stars are born in unbound associations or star clusters (like open clusters) that disperse rapidly, and over time they populate very different parts of the Milky Way phase space; stars of common birth origin should however be identifiable through their measured photospheric abundances, in the assumption that their birth cluster has a chemically homogeneous composition.
It is therefore important to assess whether initial abundance spreads among the Galactic open clusters are a common phenomenon.
In this paper we have investigated the poorly-studied open cluster M38 that, like M37, displays in the Gaia Data Release 3
<cit.> CMD a MS broader than what is expected from photometric errors only.
We have applied the same multi-band technique developed for M37 that combines both Gaia and Sloan photometry, to assess whether
the broadening of the MS can be explained by differential reddening and binaries only, or whether a chemical abundance spread is also required.
The plan of the paper is as follows. Section <ref> presents our membership analysis and the resulting Gaia DR3 CMD, and is
followed by Section <ref> which describes the complementary Sloan photometry used in this work.
Section <ref> describes the theoretical analysis of the MS width
and Section <ref> closes the paper with our conclusions.
§ THE GAIA COLOUR-MAGNITUDE DIAGRAM
The analysis of the CMD diagram of a star cluster requires a sample of member stars free from
field sources contamination. To obtain such a sample we have derived the membership probabilities for
all the sources in the Gaia DR3 catalogue within a circle with a ∼ 1.5 deg radius,
centred on the cluster <cit.>.
The membership probabilities were computed following the approach described by
<cit.>, which relies on Gaia DR3 astrometry. Cluster members were
selected by performing a series of cuts on the astrometric parameters, as displayed
in Fig. <ref>. In the top left panel we show the membership probability
P: we applied a cut by-eye, following the profile of the bulk of sources with cluster
membership at each magnitude (dashed-red curve). This selection becomes less strict
at fainter magnitudes, as the measurement errors increase and memberships
become less certain.
We then applied a cut on the parallax
(top right) and proper motions (bottom left) distributions. The red lines were
defined by the 68.27^ th percentile of the residuals around their median value
in each 1-magnitude bin, multiplied by a factor of two <cit.>.
The bottom right panel shows the spatial distribution of the selected members.
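A minimal sketch of this magnitude-binned percentile selection is given below; the array names and bin handling are placeholders, not the exact implementation used here.

# Sketch of the percentile-based astrometric selection: in each 1-magnitude
# bin, the 68.27th percentile of the residuals around the median (times two)
# defines the acceptance region.
import numpy as np

def percentile_cut(g_mag, quantity, bin_width=1.0, factor=2.0):
    """Boolean mask selecting sources within `factor` x sigma_68 of the
    running median of `quantity` (e.g. parallax or a proper-motion
    component), computed in magnitude bins."""
    keep = np.zeros_like(g_mag, dtype=bool)
    edges = np.arange(g_mag.min(), g_mag.max() + bin_width, bin_width)
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (g_mag >= lo) & (g_mag < hi)
        if in_bin.sum() < 3:
            keep[in_bin] = True          # too few stars to define a cut
            continue
        med = np.median(quantity[in_bin])
        sigma68 = np.percentile(np.abs(quantity[in_bin] - med), 68.27)
        keep[in_bin] = np.abs(quantity[in_bin] - med) <= factor * sigma68
    return keep

# Combined selection on parallax and both proper-motion components:
# members = percentile_cut(G, plx) & percentile_cut(G, pmra) & percentile_cut(G, pmdec)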
The derived list of probable cluster members allowed us to estimate the
cluster astrometric parameters; we followed the same procedure as in
<cit.>, by applying some quality cuts to the Gaia data, i.e.:
- <0.25;
- <1.4;
- =0;
- <4;
- σ_ϖ/ϖ<0.1, σ_μ_α/μ_α<0.1 and σ_μ_δ/μ_δ<0.1.
With this selected sample of members we have estimated the cluster's mean proper
motion and parallax. The mean values in each magnitude interval are shown
in Fig. <ref>, with the weighted average reported on the top right corner of each panel.
The cluster parameters are also reported in Table <ref>.
The average parallax gives a distance d of 1132±2 pc, which, accounting for the parallax zero-point correction by <cit.>, becomes 1183±2 pc.
In the following, we consider this correction to represent a maximum error in the distance, hence
d=1130±50 pc. Our estimate is also in agreement, within the errors, with the distances given by <cit.> which provide a median value for M38 stars equal to 1186±2 pc.
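As a purely arithmetic check, the two quoted distances imply a parallax shift of roughly 0.04 mas, of the order of typical Gaia DR3 zero-point corrections (in practice the correction is computed per source):

# Back-of-the-envelope check of the quoted distances (parallaxes in mas).
plx_raw = 1000.0 / 1132.0        # ~0.883 mas implied by d = 1132 pc
plx_corr = 1000.0 / 1183.0       # ~0.845 mas after the zero-point correction
print(plx_raw - plx_corr)        # ~0.038 mas, the size of the applied offset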
The CMD of the selected cluster members is shown in Fig. <ref>.
The MS is very well-defined and does not exhibit a clear extended TO.
However, a detailed analysis of the TO region is hampered by the fact that there are only 40 MS stars with G>12.
The metallicity of this cluster is not well determined, given that spectroscopic analyses of small samples of cluster stars have provided a range of [Fe/H] determinations
between ∼ -0.07 and ∼ -0.38, and
E(B-V) estimates range between ∼ 0.25 and ∼ 0.35 mag <cit.>.
In the same Fig. <ref> we show for reference
a 300 Myr[This value is consistent with the age of 302 Myr assigned to this cluster by <cit.>.] BaSTI-IAC <cit.> solar scaled isochrone with [Fe/H]=0.06, matched to the
blue edge of the lower MS (see below and Sect. <ref> for the definition of lower MS and its blue edge). We adopted
the distance d=1130 pc, and for the assumed metallicity we determined E(B-V)=0.26 from the match to the lower MS colour, which can be considered to be the minimum value of the
reddening, given the presence of differential reddening across the face of the cluster, as discussed later in Sect. <ref>.
We employed extinction coefficients
in the Gaia bands obtained from the relations given in the Gaia
website[<https://www.cosmos.esa.int/web/gaia/edr3-extinction-law>].
We tried also isochrones with lower [Fe/H] more in line with the uncertain spectroscopic estimates ([Fe/H]=-0.08 and -0.20 dex). After adjusting (actually increasing) E(B-V) to match the blue edge of the lower MS, and the isochrone age to approximately reproduce the observed brightness of the TO region, the fit of the upper MS was poorer when considering these subsolar metallicities.
We stress at this stage that –as for the case of M37–
the results of the analysis in Sect. <ref> are insensitive to the exact values of the adopted isochrone metallicity, the cluster distance (within the adopted error bar), and the minimum value of E(B-V), because of the differential nature of the technique applied.
§.§ The width of the MS
As discussed for M37 <cit.>, if open clusters host single-metallicity populations,
the observed colour width of the unevolved MS is expected to be set by
the photometric error, the presence of unresolved binaries with a range of values of the mass ratio q, and the differential reddening across
the face of the cluster, if any.
To verify this hypothesis in the case of M38, we have followed the same procedure
detailed in <cit.>.
In brief, we first calculated an observed fiducial line
of the unevolved MS in
the G-magnitude range between 15.2 and 16.6 (denoted as lower-MS from now on). According to the isochrone
in Fig. <ref>, in this magnitude range the single star population covers a mass range between ∼ 0.9 and 1.15 M_⊙, approximately the same range as in our analysis of the lower MS of M37 <cit.>.
We have calculated the fiducial line assuming that the observed MS is populated just by single stars all with the same initial metallicity, as
described in <cit.>.
Synthetic stars have been then distributed with uniform probability along this fiducial; each synthetic magnitude
has been then perturbed by a
photometric error obtained by randomly sampling a Gaussian probability distribution with zero mean and a standard deviation set to the median error at the corresponding
G-magnitude, taking advantage of the individual errors from the Gaia DR3 catalogue.
Figure <ref> (top panels) compares the observed CMD (left) with the simulated counterpart (right) in the selected magnitude range, and the colour residuals around the fiducial line
as a function of G (bottom panels).
We also show the values of the colour dispersion around the fiducial values at varying magnitudes in both CMDs. They have been computed as the 68.27^ th-percentile of the distribution of the residuals around zero.
Notice that we have discarded objects with the position in the CMD consistent with being
unresolved binaries with mass ratio q > 0.7 (according to the adopted isochrone), when we calculated the dispersion of the residuals from the observations. But
even neglecting these objects, it is clear from Fig. <ref> that the simulated stars display a much narrower distribution around the fiducial line than the observations.
To assess the origin of the colour spread of the observed CMD, we employed an auxiliary photometry in the Sloan ugi filters –described in the following section– and applied in Sect. <ref> the same technique developed in <cit.>.
§ SLOAN OBSERVATIONS AND DATA REDUCTION
The data were collected with the Asiago Schmidt telescope between October 2 and November 15, 2022.
We obtained a set of 57 images in the Sloan-like filters ugi, with an exposure time of 400 s.
The images were dithered to mitigate the effect of bad pixels and cosmic rays, and covered a total
area of about 1 sq. deg. The observation log is reported in Table <ref>. A three-colour stack of the field of view is shown in Fig. <ref>, where we used the u filter for the blue channel, g for the green, and i for the red.
To measure position and flux of the sources in this dataset we followed the same approach as in <cit.>. Briefly,
we first derived a grid of 9×9 empirical point-spread functions (PSFs) for each image
considering bright, isolated and unsaturated sources,
by using the software originally developed by <cit.>.
The grid is necessary to account for spatial variations of the PSF across the CCD.
We then proceeded by measuring the position and flux of individual sources in each image with the appropriate local PSF, obtained by a bilinear interpolation between the four nearest PSFs in the grid, using the software
described by <cit.>.
This routine goes through a series of iterations, finding and measuring progressively fainter sources,
until it reaches a specified threshold above the sky background noise. The software outputs a catalogue with
positions and instrumental magnitudes for each image.
We transformed the positions and magnitudes of each catalogue to the reference system defined by the first image in each filter (namely, ,
and ).
Finally, we cross identified the sources and produced a catalogue
containing the averaged positions and magnitudes for all the stars
measured in at least five exposures. These catalogues were matched
with the Gaia one, to have Sloan magnitudes
for all the Gaia sources detected with the Schmidt telescope.
The instrumental magnitudes have been calibrated as in <cit.>
exploiting the IGAPS catalogue <cit.>. We cross identified
our sources with those in the IGAPS catalogue, and derived the coefficients of
the relation m_ cal=m_ instr+a(g_ instr-i_ instr)+b
with a linear fit.
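A minimal sketch of this calibration step, assuming arrays of cross-matched instrumental and IGAPS magnitudes, is:

# Linear fit of the offset between instrumental and IGAPS magnitudes as a
# function of instrumental colour: m_cal = m_instr + a*(g_instr - i_instr) + b.
# Array names are placeholders for the cross-matched stars.
import numpy as np

def calibrate(m_instr, g_instr, i_instr, m_igaps):
    colour = g_instr - i_instr
    a, b = np.polyfit(colour, m_igaps - m_instr, deg=1)  # least-squares fit
    return m_instr + a * colour + b, (a, b)

# Example with the matched catalogue:
# u_cal, (a_u, b_u) = calibrate(u_instr, g_instr, i_instr, u_igaps)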
The CMD of member stars in the ugi filters is shown in Fig. <ref>,
together with the same isochrone (purple line) of Fig. <ref>, employing
the same distance and reddening, and the
extinction law from the NASA/IPAC infrared science archive[<https://irsa.ipac.caltech.edu/applications/DUST/>] for the Sloan filters.
§ THE BROADENING OF THE LOWER MS
To investigate in detail the origin of the broadening of the lower MS we followed the same
technique described in <cit.>. We considered stars in the
Gaia CMD with G between 15.2 and 16.6
(we have a total of 132 stars in this magnitude range) and combined the photometry in the Gaia filters with the corresponding u and i magnitudes to build a differential colour-colour diagram, as summarised below.
We have defined an MS blue fiducial in both the G-(G_BP-G_RP) and G-(u-i) diagrams as described in <cit.> and
for each observed star we have computed, in both G-(G_BP-G_RP) and G-(u-i) diagrams,
the difference between its colour and the colour of the corresponding blue fiducial at the star G magnitude.
These quantities are denoted as Δ_GBR and Δ_Gui respectively (see Fig. <ref>).
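In practice these quantities amount to interpolating the blue fiducial at each star's G magnitude and subtracting its colour, e.g.:

# Sketch of the differential colour quantities. The fiducial arrays are
# assumed to be sorted in increasing G.
import numpy as np

def colour_offset(G, colour, fid_G, fid_colour):
    """Colour difference with respect to the blue fiducial at each star's G."""
    return colour - np.interp(G, fid_G, fid_colour)

# delta_GBR = colour_offset(G, bp_rp, fid_G, fid_bp_rp)   # Gaia colour
# delta_Gui = colour_offset(G, u - i, fid_G, fid_u_i)     # Sloan colour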
We then plotted these colour differences in the
Δ_GBR-Δ_Gui diagram shown in Fig. <ref>. As for the case of M37,
the lower MS stars are distributed along a well-defined sequence
which starts around the coordinates (0,0) –corresponding to the stars lying on the blue fiducials– and stretches towards
increasingly positive values (denoting stars increasingly redder than the fiducials) with the quantity Δ_Gui increasing faster than Δ_GBR.
These colour spreads cannot arise from (underestimated) random photometric
errors only, because in this case they would be distributed without a correlation
between Δ_Gui and Δ_GBR.
In the same figure, together with the data, we show
the reddening vector, calculated using the extinction laws for the Gaia
and Sloan filters referenced above.
We also plot the vector corresponding to the predicted position
of binaries with varying mass ratio q (blue) and the range of colours spanned by
isochrones with increasing [Fe/H] and increasing Y (green and magenta).
These vectors have been calculated as described
by <cit.> for M37,
using as reference the isochrone in Fig. <ref>, and
the corresponding values of d and E(B-V) (see Sect. <ref>).
In this figure, we display the effect of binaries as a two-slope
sequence, because it is a better representation of the trend predicted by synthetic stellar populations, compared to a single slope as shown in <cit.>.
The figure shows that, in the case of M38, the distribution of the stars' colours in this diagram follows a trend consistent with a combination of
differential reddening across the face of the cluster and the presence of unresolved binaries with varying q. There is no need to invoke
the presence of a range of [Fe/H] or Y among the clusters' stars.
This is at odds with the case of M37, where binaries and differential reddening produced too shallow slopes in this diagram, compared to the observations (see the Appendix for a comparison of the CMDs and Δ_GBR-Δ_Gui diagrams of M37 and M38).
Figure <ref> shows a synthetic sample of stars –computed as in <cit.>, using isochrone, distance, and reference reddening previously discussed– compared to observations in the Δ_GBR-Δ_Gui diagram. The purpose of this comparison is just to see how binaries and differential reddening only can account qualitatively for the observed distribution of lower-MS stars in
this diagram.
The full synthetic sample of 50 000 objects includes observational errors in both the Gaia
and Sloan magnitudes, and contains a 70 % fraction of unresolved binaries with mass-ratios q distributed as
f(q) ∝ q^-0.6 following <cit.>. It is worth pointing out that in the case of assuming a flat probability distribution for q, the same results described below are obtained with a 10-15 % binary fraction.
We display here one random subset of the full sample, containing the same number of objects as the observations.
The figure also shows
along the horizontal and vertical axis a comparison of the number distributions of synthetic and observed stars as a function of the two quantities Δ_GBR and Δ_Gui,
respectively. When calculating these histograms we have considered the full sample of synthetic stars and rescaled the derived histograms to have the same total number of objects as observed.
The contribution of differential reddening has been accounted for by using a double Gaussian distribution;
only in this way, we are able to reproduce the clump of stars clearly visible in the Δ_GBR histogram at Δ_GBR ∼ 0.12.
The parameters of the distributions have been adjusted to roughly reproduce the observed trends of the number distributions in both Δ_GBR and Δ_Gui, because we could not determine
a reliable differential reddening map for M38 using the technique described by, e.g.,
<cit.>, given the relatively low number of objects. The
first Gaussian accounts for a random sample of ∼ 80 % of the synthetic
stars (both single and binary objects) and
is centred on E(B-V)=E(B-V)_ ref+0.04, with σ=0.033 mag, where E(B-V)_ ref=0.26. The second Gaussian distribution is centred on E(B-V)=E(B-V)_ ref+0.15, with σ=0.01 mag, and accounts for the remaining objects in the synthetic sample.
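A sketch of how such a synthetic sample can be drawn is given below; it only covers the binary mass-ratio and E(B-V) sampling (isochrone interpolation and photometric errors are omitted), with the quoted fractions and Gaussian parameters taken from the text.

# Ingredients of the synthetic sample: a 70% unresolved-binary fraction with
# mass ratios drawn from f(q) ~ q^-0.6, and differential reddening drawn from
# the two-Gaussian E(B-V) distribution described above.
import numpy as np

rng = np.random.default_rng(1)
N, EBV_REF = 50_000, 0.26

# Mass ratios for the binaries: CDF(q) = q^0.4  =>  q = u^(1/0.4) = u^2.5.
is_binary = rng.random(N) < 0.70
q = np.where(is_binary, rng.random(N) ** 2.5, 0.0)

# Double-Gaussian differential reddening: ~80% of stars in the broad
# component, the rest in the narrow, more heavily reddened one.
in_first = rng.random(N) < 0.80
ebv = np.where(
    in_first,
    rng.normal(EBV_REF + 0.04, 0.033, N),
    rng.normal(EBV_REF + 0.15, 0.01, N),
)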
It is remarkable how this synthetic sample, which includes just unresolved binaries and the effect of differential reddening, follows nicely the observed trend in this diagram. Also, the observed number distribution across the diagram can be followed quite well by using two simple Gaussian
E(B-V) distributions and the power-law q distribution determined
by <cit.>.
This shows that there is no need to invoke a chemical abundance spread to explain the width of the lower MS in this cluster.
We have then repeated the analysis previously described considering
this time stars in the brighter G magnitude range between 12.5 and 14, corresponding to single star masses between ∼ 1.5 and ∼ 2.2 M_⊙.
Using the same binary fraction, q and E(B-V) distributions as in the previous comparison, we have found the same agreement between the number distributions of synthetic and observed stars across the Δ_GBR-Δ_Gui diagram as in Fig. <ref>.
The same comparison could not be performed for objects in the TO region of the CMD, because of the small sample of cluster stars in this magnitude range (see Sect. <ref>).
§ SUMMARY AND CONCLUSIONS
We have employed the accurate Gaia DR3 photometry and astrometry
of the poorly studied open cluster M38 to select bona fide members and determine the cluster distance and mean proper motion.
The Gaia CMD does not show an obvious extended TO despite the
cluster being ∼ 300 Myr old, but the number of stars in the TO region is too small to investigate quantitatively this matter.
The unevolved MS is broader than expected from photometric errors only
and to determine the origin of this broadening we have applied
the same technique developed to study the open cluster M37 <cit.>, making use of auxiliary photometry in the Sloan system to build a differential colour-colour diagram of the lower MS from combinations of Gaia and Sloan magnitudes.
We employed synthetic stellar populations to reproduce the observed trend of M38 stars in this diagram, and found that
the observed MS colour spread can be explained simply by the combined effect
of differential reddening and unresolved binaries.
There is no need to include a spread of initial chemical composition (either metals or helium) as instead necessary to explain the same differential colour-colour diagram for the lower MS of M37.
Despite having similar total masses
<cit.>
and metallicities that differ on average by at most a factor of 2-3, the open clusters M38 and M37 seem to host stellar populations with a clear difference: single vs. multiple chemical
compositions.
The origin of this difference is unknown, nor do we know whether the chemical abundance spread found photometrically in M37 is a feature common to many more open clusters, or whether there is any connection with the extended TO phenomenon.
Further photometric investigations like ours, as well as accurate differential spectroscopic analyses on a large sample
of open clusters are necessary to shed light on this phenomenon, and its implications for cluster formation and the use of open clusters and chemical tagging to study the formation and evolution of the Galactic disk.
§ ACKNOWLEDGEMENTS
Based on observations collected at the Schmidt telescope (Asiago, Italy) of INAF.
This work has made use of data from the European Space Agency (ESA) mission
Gaia (<https://www.cosmos.esa.int/gaia>), processed by the Gaia
Data Processing and Analysis Consortium (DPAC,
<https://www.cosmos.esa.int/web/gaia/dpac/consortium>). Funding for the DPAC
has been provided by national institutions, in particular the institutions
participating in the Gaia Multilateral Agreement.
This research or product makes use of public auxiliary data provided by ESA/Gaia/DPAC/CU5
and prepared by Carine Babusiaux.
MG and LRB acknowledge support by MIUR under PRIN program #2017Z2HSMF and PRIN INAF 2019 (PI Bedin).
MS acknowledges support from The Science and
Technology Facilities Council Consolidated Grant ST/V00087X/1.
SC acknowledges financial support from Premiale INAF MITiC, from INFN (Iniziativa specifica TAsP), and from
PLATO ASI-INAF agreement n.2015-019-R.1-2018.
§ DATA AVAILABILITY
The isochrones employed in this study can be retrieved at <http://basti-iac.oa-abruzzo.inaf.it>,
except for the helium-enhanced isochrones, which are available upon request.
The calibrated photometry and astrometry employed in this article is released
as supplementary on-line material, and available at
<https://web.oapd.inaf.it/bedin/files/PAPERs_eMATERIALs/M38_ugiSchmidt/>,
along with an atlas.
The same catalogue also conveniently lists the Gaia DR3 photometry, astrometry and
source ID, when available.
§ COMPARISON OF M37 AND M38 DIAGRAMS
We present here a comparison of the Gaia CMD and the
Δ_GBR-Δ_Gui diagrams of M37 and M38. The top panels
of Fig. <ref> show the CMDs of our M37 (left) and M38 (right) bona-fide members, highlighting in a darker shade of grey
the sources employed in the analysis of the width of the MS. The bottom panels display the Δ_GBR-Δ_Gui
diagrams of both cluster's lower MS. The points are clearly distributed along different
slopes in the two diagrams; in particular, we also notice the behaviour of high-q binaries
(Δ_GBR≳ 0.1) in the case of M37, that are distributed along a steeper
line, while M38 stars in the same region follow a shallower direction.
|
http://arxiv.org/abs/2306.03491v1
|
20230606081616
|
SciCap+: A Knowledge Augmented Dataset to Study the Challenges of Scientific Figure Captioning
|
[
"Zhishen Yang",
"Raj Dabre",
"Hideki Tanaka",
"Naoaki Okazaki"
] |
cs.CV
|
[
"cs.CV",
"cs.CL"
] |
In scholarly documents, figures provide a straightforward way of communicating scientific findings to readers. Automating figure caption generation helps move models' understanding of scientific documents beyond text and will help authors write informative captions that facilitate communicating scientific findings. Unlike previous studies, we reframe scientific figure captioning as a knowledge-augmented image captioning task in which models need to utilize knowledge embedded across modalities for caption generation. To this end, we extended the large-scale SciCap dataset <cit.> to SciCap+, which includes mention-paragraphs (paragraphs mentioning figures) and OCR tokens. We then conduct experiments with the M4C-Captioner (a multimodal transformer-based model with a pointer network) as a baseline for our study. Our results indicate that mention-paragraphs serve as additional context knowledge, which significantly boosts the automatic standard image caption evaluation scores compared to the figure-only baselines. Human evaluations further reveal the challenges of generating figure captions that are informative to readers. The code and SciCap+ dataset will be publicly available[<https://github.com/ZhishenYang/scientific_figure_captioning_dataset>]
§ INTRODUCTION
Scholarly documents are the primary source for sharing scientific knowledge. These documents are available in various formats, such as journal articles, book chapters, and conference proceedings. A significant portion of these documents is text and together with figures and tables, they help communicate knowledge to readers. Using figures provides visual representations of complex information that facilitate the sharing of scientific findings with readers efficiently and straightforwardly. The standard practice for scientific writing is to write a caption for each figure, accompanied by paragraphs with detailed explanations. Figures and captions should be standalone, and readers should be able to understand the figures without referring to the main text. Helping authors write appropriate and informative captions for figures will improve the quality of scientific documents, thereby enhancing the speed and quality of scientific communication. In this study, we focus on automating the generation of captions for figures in scientific papers.
Scientific figure captioning is a variant of the image captioning task. While it shares the goal of generating a caption, it poses two unique challenges: 1. Figures are not natural images: in contrast to natural images, the visual objects in scientific figures are texts and data points. 2. Figure captions should explain: instead of simply identifying objects and texts in the figures, the caption should contain the analysis that the authors intend to present and highlight their findings.
A previous study <cit.>, SciCap, defines the scientific figure captioning task as a figure-to-caption task: A model generates captions only referring to figures. Their work reported relatively lower scores as measured by automatic evaluation metrics, indicating that there is considerable room for improvement. Intuitively, writing appropriate figure captions without sufficient background knowledge is difficult, since even humans will struggle to interpret a figure and write a caption unless some background knowledge is available. On the basis of this observation, we think that generating appropriate captions is infeasible without adding context knowledge to the caption generation model. This context comes in two forms: background knowledge from the running text and the OCR tokens in the figure, both of which should help reduce the burden on the captioning model. To this end, we augment the existing large-scale scientific figure captioning dataset: SciCap with mention-paragraphs and OCR tokens and call the resultant dataset as SciCap+. We then pose scientific figure captioning as a multimodal summarization task and use the M4C captioner model <cit.>(a model that utilizes multimodal knowledge to generate captions) as a baseline to study the scientific figure captioning task. The experimental result of automatic evaluation demonstrates that using knowledge embedded in different modalities, especially in the form of mention-paragraphs and OCR tokens, significantly boosts performance.
In addition to experiments using automatic evaluation metrics, we also performed human generation and evaluation tasks in order to establish the inherent difficulty of scientific figure captioning. The results of the human evaluation reveal three findings: 1. Multimodal knowledge helps models outperform humans in caption generation tasks. 2. Model-generated captions are almost as informative as ground-truth captions: Human evaluators do not prefer either type of caption. 3. Even when referring to mention-paragraphs, it is still challenging for humans to write captions that are close to ground truth. To the best of our knowledge, we are the first to pose scientific figure captioning as a multimodal summarization task and show that mention-paragraphs and OCR tokens as context substantially enhance the quality of generated captions.
§ PRELIMINARY STUDY
In the traditional image captioning task, captioning an image aims at describing the appearance or nature of recognized objects and illustrating the relationships between them. Unlike natural images, figures do not contain visual scenes. Instead, the captions interpret the data presented in figures to highlight the scientific findings that authors want to present to readers. Because of this characteristic, without referring to mention-paragraphs, which usually discuss the figure, it is extremely challenging for a human to interpret a figure properly, as they may lack background knowledge of the domain or context of the figure. As Figure <ref> shows, by only looking at the figure, we do not know what "comm.(KB)" stands for; lacking this knowledge, it is difficult to write an informative caption. However, the mention-paragraph contains "communication cost", which also appears in the caption, indicating that such background knowledge should help in writing accurate captions.
§ PROBLEM FORMULATION
The previous study <cit.> defined this task as an image captioning task: given a figure I, the model generates a caption C=[c_0,c_1,...,c_N]. However, we reframe the scientific figure captioning task as a knowledge-augmented image captioning task requiring knowledge extracted from the text and vision modalities. For a figure, we define a paragraph that mentions it (mention-paragraph) and the text within the figure, extracted via OCR, as text modalities. The figure itself and the visual appearance of the OCR texts are visual modalities. Given a scientific figure I and the knowledge extracted from the text and vision modalities, K_text and K_vision, we define the figure caption generation task as modelling P(C|I,K_text, K_vision).
§ SCICAP+ DATASET
SciCap is a large-scale figure-caption dataset comprising graph plots extracted from 10 years of collections of arXiv computer science papers. We used around 414k figures from SciCap and augment each figure with its mention-paragraphs and OCR tokens with metadata. This section details the data set creation and data augmentation processes. Figure <ref> shows the overall workflow behind the creation of SciCap+.
§.§ Mention-paragraph Extraction
We first obtained papers in PDF format from the Kaggle arXiv dataset [<https://www.kaggle.com/datasets/Cornell-University/arxiv>]. The reason for using PDFs is that not all papers have source files, and some are complicated to parse.
After obtaining PDFs, we used PDFFigures 2.0 <cit.> [<https://github.com/allenai/pdffigures2>] to extract the body text of each paper. PDFFigure 2.0 is a tool that extracts figures, captions, tables, and text from scholarly PDFs in computer science. In scholarly documents, authors label figures with numbers (e.g. Figure 1. Fig. 1). For a figure, we used its figure number in a regular expression to locate a paragraph that mentions it.
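A minimal sketch of this matching step is shown below; the exact regular expression used for SciCap+ is not reported, so the pattern here, covering the common "Figure 3" / "Fig. 3" spellings, is an assumption.

# Locate the first mention-paragraph of a figure via its figure number.
import re

def first_mention_paragraph(paragraphs, figure_number):
    pattern = re.compile(
        rf"\b(?:Figure|Fig\.?)\s*{figure_number}\b", flags=re.IGNORECASE
    )
    for paragraph in paragraphs:
        if pattern.search(paragraph):
            return paragraph
    return None

# Example:
# mention = first_mention_paragraph(body_paragraphs, 3)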
§.§ OCR Extraction
The SciCap dataset also provides texts extracted from figures as metadata, but does not provide location information for each text. To include location information for each text in a figure, we used Google Vision OCR API to extract text tokens from each figure with its coordinates of bounding boxes.
§.§ Data Statistics
The splitting of the SciCap dataset is at the figure level. Therefore, figures from the same paper may appear in different splits. This will lead to unfair evaluation, since the information of one figure in one split may coincidentally overlap with the information of another figure. We thus re-split figures at the document level to eliminate this overlapping problem.
<cit.> show that text normalization and figure filtering do not improve model performance. Hence, we keep the original captions and all figures (with/without sub-figures) in the SciCap+ dataset. For each figure, we kept only the first paragraph that mentions it in the body text. Table <ref> shows statistics of the SciCap+ dataset. In all three splits, around 90% of the captions are shorter than 66 words. All figures are graph plots.
§.§ Dataset Quality Evaluation
Before conducting experiments, we conducted human evaluation of SciCap+ where we checked the mention-paragraphs and OCR tokens extraction quality. The aim was to establish whether the mention-paragraphs and OCR tokens were extracted correctly and relevant to the figure and its caption. To this end, we randomly selected 200 figures from the training set and for each figure, we asked two human evaluators to give scores of 1-5 (1 represents no relevance and 5 is highly relevant) for relevance between a caption of a figure and its mention-paragraphs and OCR tokens.
Compared to natural image captioning, human evaluation in the figure captioning domain requires expert knowledge. We recruited two colleagues to carry out this evaluation task. Both of them have Ph.D. degrees in computer science and work as researchers, so they have adequate experience writing figure captions.
Figure <ref> shows the distributions of the relevance scores. We observe that the two evaluators gave most of the figures (evaluator 1: 64% and evaluator 2: 79.5%) relevance scores greater than 3, with a Cohen's kappa of 0.28. This indicates that the mention-paragraphs and OCR tokens have a satisfactory extraction quality and that the annotators considered most of them relevant to the figure and its caption. However, the two annotators have a relatively low agreement (0.28) regarding which figures and captions are relevant to their mention-paragraphs and OCR tokens. We attribute this to the fact that evaluations of figure captions are highly subjective.
§ EXPERIMENTS
We conduct experiments using SciCap+ to empirically prove that scientific figure captioning is inherently a knowledge-augmented task and benefits from knowledge coming from both text and vision modalities.
§.§ Figure Captioning Model
We used M4C-Captioner <cit.> as the baseline model to study the scientific figure captioning task. The M4C-Captioner is based on the Multimodal Multi-Copy Mesh (M4C) <cit.>, which jointly learns representations across input modalities. To solve the out-of-vocabulary problem during caption generation, it is equipped with a pointer network that picks up text from the OCR tokens or a predefined fixed dictionary. In this work, three inputs are used: the figure, the mention-paragraph, and the OCR tokens. Each is fed to an encoder, and the resulting representations are passed to the M4C-Captioner.
§.§ Implementation and Training
Our implementation of M4C-Captioner is based on the MMF framework <cit.> and Pytorch. The implementation allows users to specify diverse pre-trained encoders for each modality, which can be fine-tuned or frozen during training. The M4C-captioner itself has D=768 hidden dimension size, K=4 transformer layers and 12 attention heads. We used sentencepiece <cit.> to obtain a dictionary of 32000 subwords built from both mention-paragraphs and OCR tokens. This is used as the M4C-captioner's vocabulary. We followed the BERT-BASE hyperparameter setting and trained from scratch.
Regarding the encoders that feed features to M4C-captioner, we used pre-trained Resnet-152 as the figure's vision encoder. For each figure, we applied a 2D adaptive average pooling over outputs from layer 5 to obtain a global visual feature vector with a dimension of 2048. Layers 2, 3 and 4 layers were fine-tuned during training. For mention-paragraph features, SciBERT <cit.> was used to encode[We only used the first 3 layers of SciBERT for lightweightness.] it into 758-dimensional feature vectors. The number of vectors equals the number of sub-word tokens in the mention-paragraph, which we limit to 192. The mention-paragraph encoder is also fine-tuned during training. Finally, for OCR tokens, we use both text and visual features. We selected FastText <cit.> as the word encoder and Pyramidal Histogram of Characters (PHOC) <cit.> as the character encoder. Regarding the visual feature encoder of OCR tokens, we first extracted Faster R-CNN fc6 features and then applied fc7 weights to it to obtain 2048-dimensional appearance features for bounding boxes of OCR tokens. The fc7 weights were fine-tuned during training. We kept a maximum of 95 OCR tokens per figure.
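A sketch of the pooled figure feature is given below; torchvision's layer4 corresponds to the final convolutional block (referred to as layer 5 above), and the ImageNet preprocessing values are assumptions of this sketch.

# Global visual feature of a figure: final convolutional block of a
# pre-trained ResNet-152, reduced to a 2048-dimensional vector with 2D
# adaptive average pooling.
import torch
import torch.nn as nn
from torchvision import models, transforms

backbone = models.resnet152(pretrained=True)
feature_extractor = nn.Sequential(*list(backbone.children())[:-2])  # up to layer4
pool = nn.AdaptiveAvgPool2d(1)

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def figure_feature(pil_image):
    x = preprocess(pil_image).unsqueeze(0)          # (1, 3, 224, 224)
    with torch.no_grad():
        fmap = feature_extractor(x)                 # (1, 2048, 7, 7)
        return pool(fmap).flatten(1)                # (1, 2048)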
We trained the models on a GPU server with 8 Nvidia Tesla V100 GPUs. Training a model with the complete set of features took 13 hours. During training, we used a batch size of 128 and selected CIDEr as the evaluation metric. The evaluation interval is every 2,000 iterations, and we stop training if the CIDEr score does not improve for 4 evaluation intervals. The optimizer is Adam with a learning rate of 0.001 and ϵ= 1.0E-08. We also used a multistep learning rate schedule with 1,000 warmup iterations and a warmup factor of 0.2. The maximum number of decoding steps at inference time was set to 67.
For evaluation, we used five standard metrics for evaluating image captions: BLEU-4 <cit.>, METEOR <cit.>, ROUGE-L <cit.>, CIDEr <cit.> and SPICE <cit.>. Since figure captions contain scientific terms which can be seen as uncommon words, among all five metrics, we are particularly interested in CIDEr since it emphasizes them.
§ RESULTS
§.§ Main Result
The experimental results in Table <ref> demonstrate that using the mention-paragraph and OCR tokens significantly improves scores on all five metrics compared to the figure-only baseline. These results align with our hypothesis and preliminary study that scientific figure captioning is a knowledge-augmented image captioning task: OCR tokens and the knowledge embedded in mention-paragraphs help in composing informative captions.
We established a baseline M4C-Captioner (Figure only) with figures as the only input modality to the M4C-Captioner model in row #1. This baseline is in the non-knowledge setting. Therefore, low scores in all metrics show that the model needs knowledge of other modalities. Using the mention only in row #2 shows that the mention certainly contains a lot of useful information, as evidenced by the increase in performance. When OCR features are added to the figure input in row #3, scores for all metrics have significant gains compared to the figure-only baseline, but are still weaker than when only mentions are used. This motivates the combination of mentions and OCR features and in row #4, compared to the figure-only baseline and figure-OCR-only baseline, the performance further improves. Perhaps the most interesting result is in row #5 where we only use the mentions and OCR features but not the figure and get the best performance, particularly for SPICE and CIDEr, albeit comparable to when the figure is included in row #4. All these results indicate that explicitly extracted multimodal knowledge helps to compose informative captions.
§.§ Ablation Studies
We first performed an ablation study on the figure modality: removing its visual feature vectors slightly increases the CIDEr score, indicating that this feature acts more like noise for the model. This is likely because the Resnet-152 visual encoder we used was not trained on figures.
We enriched the representations of the OCR tokens with text, visual, and spatial features, and the ablation studies aim to reveal the impact of each of these features. All comparisons are with row #4, even though row #5 gives slightly better scores. With the OCR features completely removed in row #6, the CIDEr score decreases by 5.3. Using only the OCR spatial features in row #7, the CIDEr score drops by 7.8. Removing the OCR spatial features in row #8, the CIDEr score drops by 1.2. Upon removal of the OCR visual features in row #9, the drop in CIDEr is close to that from removing the spatial features.
The above ablation study indicates that the enriched OCR contributes to the informativeness of generated captions. Unlike OCR features, where appearance features are helpful to the model, removing visual features of figures increases CIDEr scores, further indicating that we need a specific vision encoder for figures to provide meaningful features.
§ HUMAN EVALUATION
Having established that knowledge helps a model perform figure captioning, we conducted some human evaluation activities to determine their subjective quality.
We conducted human caption generation and evaluation tasks. The human generation task examines whether humans can write better captions than models. The evaluation task assesses how appropriate the model-generated captions are compared to the ground-truth captions. Both tasks were performed by the same human subjects as the dataset quality assessment.
§.§ Figure Caption Generation Task
The figure caption generation task asks annotators to write captions under two separate conditions: 1. Figure-only: human annotators write captions given only the figures. This condition is compared with captions generated by M4C-Captioner when it only has access to figures and OCR features. 2. Figure-Mention: human annotators write captions given both the figures and their mention-paragraphs. We randomly selected 100 figures from the test set to compare the human-written captions with the captions generated by M4C-Captioner.
Table <ref> shows the automatic evaluation results for the human caption generation task. Given only figures (rows #1, 2), both annotators obtained low scores across all metrics; between them, annotator 2 led on all metrics except SPICE. Since humans perform OCR naturally with their eyes, we compare against M4C-Captioner (figure and OCR features). It has the best SPICE score and outperforms annotator 1 on 4 of 5 metrics, while performing comparably to annotator 2. This shows that, without additional knowledge, humans are not much better than machines.
However, given mention-paragraphs and figures (rows #4, 5), both annotators improved over the figure-only condition in BLEU-4, METEOR, ROUGE-L, and SPICE, but obtained lower CIDEr scores. Previous studies have shown that CIDEr is a more reliable evaluation metric for caption generation, and the lower CIDEr scores indicate that humans are likely to struggle to exploit the additional knowledge. In contrast, with access to the full set of features, M4C-Captioner achieved a significantly better CIDEr score than the human annotators. The automatic evaluation results of the human generation task show how difficult it is to write figure captions close to the ground truth.
Even given mention-paragraphs, our annotators wrote captions that scored low on all standard image captioning metrics. We attribute this to the fact that figure captions are highly subjective and require in-domain knowledge to write. Although our annotators are researchers, they cannot be experts in every subfield of computer science. Given mention-paragraphs and OCR tokens as external knowledge sources, and trained on a large amount of data, the model can significantly outperform humans on these metrics.
§.§ Appropriateness Evaluation
This task evaluates the appropriateness of the model-generated and ground-truth captions. We used the same set of 100 figures as in the figure caption generation task, and placed the ground-truth and model-generated captions in random order. Human evaluators then assign an appropriateness score (1-4) to each caption. The evaluation scale is: 1. Inappropriate: the caption does not match the figure, is not a sentence, is wrong, or is misleading. 2. Not sure: it is impossible to judge appropriateness solely from the figure. 3. Possible: a possible candidate that is incomplete but not wrong. 4. Appropriate: an informative caption that interprets the figure well. Since an appropriate figure caption should stand alone, and readers should understand the message of the figure without referring to the body text, we do not show mention-paragraphs to the evaluators.
Table <ref> shows the evaluation results. Both evaluators gave low average scores to the model-generated and the ground-truth captions. In addition, the evaluators only reached fair agreement on scoring (0.23-0.36). The model using mentions and OCR features (row #2) obtains the best human evaluation scores, in line with Table <ref>, where it also achieves the best CIDEr performance, indicating that the human evaluation is reliable despite the fair agreement. The results indicate that neither the model-generated nor the ground-truth captions are always informative to the evaluators, which reveals the need to improve both caption writing quality and model performance. We observed that captions tend to be written without following specific rules, which may contribute to the lack of agreement. Given the low inter-rater agreement, we conclude that how informative a figure caption is is highly subjective and depends on the in-domain background knowledge of the evaluators.
§ RELATED WORK
Unlike natural image captioning, figure captioning has been scarcely studied. SciCap <cit.> is the most recent work on scientific figure captioning; it released a large-scale dataset of figures from academic papers on arXiv. Before SciCap, FigCAP <cit.> <cit.> and FigureQA <cit.> provided figure captioning datasets, but their figures are synthesized. We decided to extend and study the SciCap dataset, since its figures come from real-world scientific papers. In this paper, we also leverage multimodal knowledge using pre-trained models.
Multimodal machine learning models knowledge across various modalities. The multimodal task closest to figure captioning is image captioning, for which a popular architecture is the encoder-decoder, where the decoder learns to generate captions conditioned on visual features extracted by the encoder. Recent works on integrating text appearing in natural images for visual question answering and image captioning are based on a transformer architecture augmented with a pointer network <cit.>. The transformer enriches representations by integrating knowledge from both the text and the visual modality, and the pointer network dynamically selects words from the fixed dictionary or the OCR tokens during generation.
Using knowledge embedded in pre-trained models is common practice in solving multimodal tasks. In this work, we used SciBERT <cit.>, a BERT model <cit.> pre-trained on scientific papers, to obtain informative representations for the texts extracted from computer science papers. Since terms appearing in figures may be uncommon words, we also used FastText <cit.> to obtain word embeddings with subword information. For the visual modality, we used ResNet-152 <cit.> and Faster R-CNN <cit.> to extract features from images and bounding boxes.
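For illustration, contextual text features for a mention-paragraph can be obtained from the publicly released SciBERT checkpoint through the HuggingFace transformers library; the sketch below is a simplified stand-in for the actual feature-extraction pipeline, and the example paragraph is made up:

import torch
from transformers import AutoTokenizer, AutoModel

# Publicly released SciBERT checkpoint with a scientific vocabulary.
name = "allenai/scibert_scivocab_uncased"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

paragraph = "Figure 3 reports the BLEU-4 score as a function of the beam size."
inputs = tokenizer(paragraph, return_tensors="pt", truncation=True, max_length=128)

with torch.no_grad():
    outputs = model(**inputs)

# One 768-dimensional contextual embedding per sub-word token; these can be
# combined with figure and OCR features in the multimodal transformer.
token_embeddings = outputs.last_hidden_state  # shape: (1, sequence_length, 768)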
§ CONCLUSION
In this paper, we studied the challenges of the scientific figure captioning task. Extending the previous study <cit.>, we reframe this task as knowledge-augmented image captioning, that is, a model needs to use knowledge extracted across modalities to generate captions. To this end, we released a new version of the SciCap dataset, SciCap+, which augments figures with their mention-paragraphs and OCR tokens. We used the M4C-Captioner model as the baseline to utilize knowledge across three modalities: mention-paragraphs, figures, and OCR tokens. The automatic evaluation experiments reveal that using this knowledge significantly improves evaluation metric scores. Compared with human-written captions, models can generate better captions with respect to the automatic evaluation metrics. However, the human evaluation demonstrated that writing scientific figure captions is challenging even for humans, and that the model-generated captions, despite their reasonable automatic evaluation quality, are still far from a level readers would consider appropriate. We release the SciCap+ dataset to promote further development of scientific figure captioning. For future work, we are interested in applying multimodal pretraining strategies to this task.
§ ACKNOWLEDGMENT
These research results were partly obtained from the commissioned research (No. 225) by National Institute of Information and Communications Technology (NICT), Japan, and partly obtained from the first author's internship research under NICT.
|
http://arxiv.org/abs/2306.07549v1
|
20230613054138
|
Fixed-Budget Best-Arm Identification with Heterogeneous Reward Variances
|
[
"Anusha Lalitha",
"Kousha Kalantari",
"Yifei Ma",
"Anoop Deoras",
"Branislav Kveton"
] |
cs.LG
|
[
"cs.LG",
"stat.ML"
] |
July 31, 2023
=================
We study the problem of best-arm identification (BAI) in the fixed-budget setting with heterogeneous reward variances. We propose two variance-adaptive BAI algorithms for this setting: for known reward variances and for unknown reward variances. The key idea in our algorithms is to adaptively allocate more budget to arms with higher reward variances. The main algorithmic novelty is in the design of , which allocates budget greedily based on overestimating unknown reward variances. We bound the probabilities of misidentifying best arms in both and . Our analyses rely on novel lower bounds on the number of arm pulls in BAI that do not require closed-form solutions to the budget allocation problem. One of our budget allocation problems is equivalent to the optimal experiment design with unknown variances and thus of a broad interest. We also evaluate our algorithms on synthetic and real-world problems. In most settings, and outperform all prior algorithms.
§ INTRODUCTION
The problem of best-arm identification (BAI) in the fixed-budget setting is a pure exploration bandit problem which can be briefly described as follows. An agent interacts with a stochastic multi-armed bandit with K arms and its goal is to identify the arm with the highest mean reward within a fixed budget n of arm pulls <cit.>. This problem arises naturally in many applications in practice, such as online advertising, recommender systems, and vaccine tests <cit.>. It is also common in applications where observations are costly, such as Bayesian optimization <cit.>. Another commonly studied setting is fixed-confidence BAI <cit.>. Here the goal is to identify the best arm within a prescribed confidence level while minimizing the budget. Some works also studied both settings <cit.>.
Our work can be motivated by the following example. Consider an A/B test where the goal is to identify a movie with the highest average user rating from a set of K movies. This problem can be formulated as BAI by treating the movies as arms and user ratings as stochastic rewards. Some movies get either unanimously good or bad ratings, and thus their ratings have a low variance. Others get a wide range of ratings, because they are rated highly by their target audience and poorly by others; and hence their ratings have a high variance. For this setting, we can design better BAI policies that take the variance into account. Specifically, movies with low-variance ratings can be exposed to fewer users in the A/B test than movies with high-variance ratings.
An analogous synthetic example is presented in <ref>. In this example, reward variances increase with mean arm rewards for a half of the arms, while the remaining arms have very low variances. The knowledge of the reward variances can be obviously used to reduce the number of pulls of arms with low-variance rewards. However, in practice, the reward variances are rarely known in advance, such as in our motivating A/B testing example, and this makes the design and analysis of variance-adaptive BAI algorithms challenging. We revisit these two examples in our empirical studies in <ref>.
We propose and analyze two variance-adaptive BAI algorithms: and . assumes that the reward variances are known and is a stepping stone for our fully-adaptive BAI algorithm , which estimates them. utilizes high-probability upper confidence bounds on the reward variances. Both algorithms are motivated by sequential halving () of <cit.>, a near-optimal solution for fixed-budget BAI with homogeneous reward variances.
Our main contributions are:
* We design two variance-adaptive algorithms for fixed-budget BAI: for known reward variances and for unknown reward variances. is only a third algorithm for this setting <cit.> and only a second that can be implemented as analyzed <cit.>. The key idea in is to solve a budget allocation problem with unknown reward variances by a greedy algorithm that overestimates them. This idea can be applied to other elimination algorithms in the cumulative regret setting <cit.> and is of independent interest to the field of optimal experiment design <cit.>.
* We prove upper bounds on the probability of misidentifying the best arm for both and . The analysis of extends that of <cit.> to heterogeneous variances. The analysis of relies on a novel lower bound on the number of pulls of an arm that scales linearly with its unknown reward variance. This permits an analysis of sequential halving without requiring a closed form for the number of pulls of each arm.
* We evaluate our methods empirically on Gaussian bandits and the MovieLens dataset <cit.>. In most settings, and outperform all prior algorithms.
The paper is organized as follows. In <ref>, we present the fixed-budget BAI problem. We present our algorithms in <ref> and analyze them in <ref>. The algorithms are empirically evaluated in <ref>. We review prior works in <ref> and conclude in <ref>.
§ SETTING
We use the following notation. Random variables are capitalized, except for Greek letters like μ. For any positive integer n, we define [n] = {1, …, n}. The indicator function is denoted by 1{·}. The i-th entry of vector v is v_i. If the vector is already indexed, such as v_j, we write v_j, i. The big O notation up to logarithmic factors is Õ.
We have a stochastic bandit with K arms and denote the set of arms by = [K]. When the arm is pulled, its reward is drawn i.i.d. from its reward distribution. The reward distribution of arm i ∈ is sub-Gaussian with mean μ_i and variance proxy σ^2_i. The best arm is the arm with the highest mean reward,
i_*
= arg max_i ∈ [K] μ_i .
Without loss of generality, we assume that the arms are ordered as μ_1 > μ_2 ≥…≥μ_K. Therefore, arm i_* = 1 is the unique best arm. The agent has a budget of n observations, and the goal is to identify i_* as accurately as possible after n arm pulls in total. Specifically, let Î denote the arm returned by the agent after n pulls. Then our objective is to minimize the probability of misidentifying the best arm Î≠ i_*, which we also call a mistake probability. This setting is known as fixed-budget BAI <cit.>. When observations are costly, it is natural to limit them by a fixed budget n.
Another commonly studied setting is fixed-confidence BAI <cit.>. Here the agent is given an upper bound on the mistake probability δ as an input and the goal is to attain Î≠ i_*≤δ at minimum budget n. Some works also studied both the fixed-budget and fixed-confidence settings <cit.>.
§ ALGORITHMS
A near-optimal solution for fixed-budget BAI with homogeneous reward variances is sequential halving <cit.>. The key idea is to sequentially eliminate suboptimal arms in log_2 K stages. In each stage, all arms are pulled equally and the worst half of the arms are eliminated at the end of the stage. At the end of the last stage, only one arm Î remains and that arm is the estimated best arm.
The main algorithmic contribution of our work is that we generalize sequential halving of <cit.> to heterogeneous reward variances. All of our algorithms can be viewed as instances of a meta-algorithm (<ref>), which we describe in detail next. Its inputs are a budget n on the number of observations and base algorithm . The meta-algorithm has m stages (line 2) and the budget is divided equally across the stages, with a per-stage budget n_s = n / m (line 5). In stage s, all remaining arms _s are pulled according to (lines 6–8). At the end of stage s, the worst half of the remaining arms, as measured by their estimated mean rewards, is eliminated (lines 9–12). Here Y_s, t, i is the stochastic reward of arm i in round t of stage s, I_s, t∈_s is the pulled arm in round t of stage s, N_s, i is the number of pulls of arm i in stage s, and μ̂_s, i is its mean reward estimate from all observations in stage s.
The sequential halving of <cit.> is an instance of <ref> for =. The pseudocode of , which pulls all arms in stage s equally, is in <ref>. We call the resulting algorithm . This algorithm misidentifies the best arm with probability <cit.>
Î≠ 1≤ 3 log_2 K exp[- n/8 H_2 log_2 K] ,
where
H_2
= max_i ∈∖1i/Δ_i^2
is a complexity parameter and Δ_i = μ_1 - μ_i is the suboptimality gap of arm i. The bound in (<ref>) decreases as budget n increases and problem complexity H_2 decreases.
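For concreteness, the elimination scheme can be sketched in a few lines of Python. This is an illustrative implementation with the uniform per-stage allocation; the function pull(i), which returns one stochastic reward of arm i, and all variable names are ours and not part of the analysis.

import math
import numpy as np

def sequential_halving(pull, K, n):
    """Identify the best of K arms within a total budget of roughly n pulls.

    pull(i) returns one stochastic reward of arm i (arms are 0, ..., K - 1).
    Each of the ceil(log2(K)) stages spends about n / log2(K) pulls, divided
    uniformly over the remaining arms, and keeps the better half of the arms
    as ranked by their empirical means within that stage.
    """
    arms = list(range(K))
    num_stages = max(1, int(math.ceil(math.log2(K))))
    stage_budget = n // num_stages
    while len(arms) > 1:
        pulls_per_arm = max(1, stage_budget // len(arms))
        means = {i: float(np.mean([pull(i) for _ in range(pulls_per_arm)]))
                 for i in arms}
        arms = sorted(arms, key=lambda i: means[i], reverse=True)[: len(arms) // 2]
    return arms[0]

The variance-adaptive algorithms studied below keep this outer loop and only change how the per-stage budget is distributed over the remaining arms.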
is near optimal only in the setting of homogeneous reward variances. In this work, we study the general setting where the reward variances of arms vary, potentially as extremely as in our motivating example in <ref>. In this example, would face arms with both low and high variances in each stage. A variance-adaptive could adapt its budget allocation in each stage to the reward variances and thus eliminate suboptimal arms more effectively.
§.§ Known Heterogeneous Reward Variances
We start with the setting of known reward variances. Let
σ_i^2
= Y_s, t, i
= (Y_s, t, i - μ_i)^2
be a known reward variance of arm i. Our proposed algorithm is an instance of <ref> for =. The pseudocode of is in <ref>. The key idea is to pull the arm with the highest variance of its mean reward estimate. The variance of the mean reward estimate of arm i in round t of stage s is σ_i^2 / N_s, t, i, where σ_i^2 is the reward variance of arm i and N_s, t, i is the number of pulls of arm i up to round t of stage s. We call the resulting algorithm .
Note that is an instance of . Specifically, when all σ_i = σ for some σ > 0, pulls all arms equally, as in . can be also viewed as pulling any arm i in stage s for
N_s, i≈σ_i^2/∑_j ∈_sσ_j^2 n_s
times. This is stated formally and proved below.
Fix stage s and let the ideal number of pulls of arm i ∈_s be
λ_s, i
= σ_i^2/∑_j ∈_sσ_j^2 n_s .
Let all λ_s, i be integers. Then pulls arm i in stage s exactly λ_s, i times.
First, suppose that pulls each arm i exactly λ_s, i times. Then the variances of all mean reward estimates at the end of stage s are identical, because
σ_i^2/N_s, i
= σ_i^2/λ_s, i
= σ_i^2/σ_i^2/∑_j ∈_sσ_j^2 n_s
= ∑_j ∈_sσ_j^2/n_s .
Now suppose that this is not true. This implies that there exists an over-pulled arm i ∈_s and an under-pulled arm k ∈_s such that
σ_i^2/N_s, i
< ∑_j ∈_sσ_j^2/n_s
< σ_k^2/N_s, k .
Since arm i ∈_s is over-pulled and λ_s, i is an integer, there must exist a round t ∈ [n_s] such that
σ_i^2/N_s, t, i
= σ_i^2/λ_s, i
= ∑_j ∈_sσ_j^2/n_s .
Let t be the last round where this equality holds, meaning that arm i is pulled in round t.
Now we combine the second inequality in (<ref>) with N_s, k≥ N_s, t, k, which holds by definition, and get
∑_j ∈_sσ_j^2/n_s
< σ_k^2/N_s, k≤σ_k^2/N_s, t, k .
The last two sets of inequalities lead to a contradiction. On one hand, we know that arm i is pulled in round t. On the other hand, we have σ_i^2 / N_s, t, i < σ_k^2 / N_s, t, k, which means that arm i cannot be pulled. This completes the proof.
<ref> says that each arm i ∈_s is pulled O(σ_i^2) times. Since the mean reward estimate of arm i at the end of stage s has variance σ_i^2 / N_s, i, the variances of all estimates at the end of stage s are identical, (∑_i ∈_sσ_i^2) / n_s. This relates our problem to the G-optimal design <cit.>. Specifically, the G-optimal design for independent experiments i ∈_s is an allocation of observations (N_s, i)_i ∈_s such that ∑_i ∈_s N_s, i = n_s and the maximum variance
max_i ∈_sσ_i^2/N_s, i
is minimized. This happens precisely when all σ_i^2 / N_s, i are identical, when N_s, i = λ_s, i for λ_s, i in <ref>.
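Both views of the stage-level allocation can be sketched as follows; this is an illustrative snippet in which sigma2 is an array of reward variances indexed by arm, pull(i) returns one reward of arm i, and the rounding of the closed-form allocation is a simplification.

def closed_form_allocation(sigma2, arms, stage_budget):
    """Per-stage pull counts proportional to the reward variances."""
    total = sum(sigma2[i] for i in arms)
    return {i: int(stage_budget * sigma2[i] / total) for i in arms}

def greedy_stage_known_variances(pull, arms, sigma2, stage_budget):
    """Greedy rule: repeatedly pull the arm whose mean reward estimate has the
    largest variance sigma_i^2 / N_i; one initial pull per arm avoids a division
    by zero. Returns the empirical means used in the elimination step."""
    counts = {i: 1 for i in arms}
    sums = {i: pull(i) for i in arms}
    for _ in range(max(0, stage_budget - len(arms))):
        j = max(arms, key=lambda i: sigma2[i] / counts[i])
        sums[j] += pull(j)
        counts[j] += 1
    return {i: sums[i] / counts[i] for i in arms}

Both routines spend the stage budget so that the variances sigma_i^2 / N_i of the surviving mean estimates are approximately equalized, which is the G-optimal design property discussed above.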
§.§ Unknown Heterogeneous Reward Variances
Our second proposal is an algorithm for unknown reward variances. One natural idea, which is expected to be practical but hard to analyze, is to replace σ_i^2 in with its empirical estimate from the past t - 1 rounds in stage s,
σ̂_s, t, i^2
= 1/N_s, t, i - 1∑_ℓ = 1^t - 1I_s, ℓ = i
(Y_s, ℓ, i - μ̂_s, t, i)^2 ,
where
μ̂_s, t, i
= 1/N_s, t, i∑_ℓ = 1^t - 1I_s, ℓ = i Y_s, ℓ, i
is the empirical mean reward of arm i in round t of stage s. This design would be hard to analyze because σ̂_s, t, i can underestimate σ_i, and thus is not an optimistic estimate.
The key idea in our solution is to act optimistically using an upper confidence bound (UCB) on the reward variance. To derive it, we assume that the reward noise is Gaussian. Specifically, the reward of arm i in round t of stage s is distributed as Y_s, t, i∼(μ_i, σ_i^2). This allows us to derive the following upper and lower bounds on the unknown variance σ_i^2.
Fix stage s, round t ∈ [n_s], arm i ∈_s, and failure probability δ∈ (0, 1). Let
N
= N_s, t, i - 1
and suppose that N > 4 log(1 / δ). Then
σ_i^2
≥σ̂_s, t, i^2/1 - 2 √(log(1 / δ)/N)≤δ
holds with probability at least 1 - δ. Analogously,
σ̂_s, t, i^2
≥σ_i^2 [1 + 2 √(log(1 / δ)/N) +
2 log(1 / δ)/N]≤δ
holds with probability at least 1 - δ.
The first claim is proved as follows. By Cochran's theorem, we have that σ̂_s, t, i^2 N / σ_i^2 is a χ^2 random variable with N degrees of freedom. Its concentration was analyzed in <cit.>. More specifically, by (4.4) in <cit.>, an immediate corollary of their Lemma 1, we have
N - σ̂_s, t, i^2 N/σ_i^2≥ 2 √(N log(1 / δ))≤δ .
Now we divide both sides in the probability by N, multiply by σ_i^2, and rearrange the formula as
σ_i^2 (1 - 2 √(log(1 / δ) / N))
≥σ̂_s, t, i^2≤δ .
When 1 - 2 √(log(1 / δ) / N) > 0, we can divide both sides by it and get the first claim in <ref>.
The second claim is proved analogously. Specifically, by (4.3) in <cit.>, an immediate corollary of their Lemma 1, we have
σ̂_s, t, i^2 N/σ_i^2 - N
≥ 2 √(N log(1 / δ)) + 2 log(1 / δ)≤δ .
Now we divide both sides in the probability by N, multiply by σ_i^2, and obtain the second claim in <ref>. This concludes the proof.
By <ref>, when N_s, t, i > 4 log(1 / δ) + 1,
U_s, t, i
= σ̂_s, t, i^2/1 - 2 √(log(1 / δ)/N_s, t, i - 1)
is a high-probability upper bound on the reward variance of arm i in round t of stage s, which holds with probability at least 1 - δ. This bound decreases as the number of observations N_s, t, i increases and confidence δ decreases. To apply the bound across multiple stages, rounds, and arms, we use a union bound.
The bound in (<ref>) leads to our algorithm that overestimates the variance. The algorithm is an instance of <ref> for =. The pseudocode of is in <ref>. To guarantee N_s, t, i > 4 log(1 / δ) + 1, we pull all arms _s in any stage s for 4 log(1 / δ) + 1 times initially. We call the resulting algorithm .
Note that can be viewed as a variant of where U_s, t, i replaces σ_i^2. Therefore, it can also be viewed as solving the G-optimal design in (<ref>) without knowing reward variances σ_i^2; and is of a broader interest to the optimal experiment design community <cit.>. We also note that the assumption of Gaussian noise in the design of is limiting. To address this issue, we experiment with non-Gaussian noise in <ref>.
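The resulting stage routine can be sketched as follows; this is an illustrative snippet with our own names, in which pull(i) returns one reward of arm i and the number of forced initial pulls is rounded up so that the variance bound is applicable.

import math
import numpy as np

def greedy_stage_unknown_variances(pull, arms, stage_budget, delta):
    """One elimination stage with unknown variances: overestimate each reward
    variance by the upper confidence bound U_i and pull argmax_i U_i / N_i."""
    forced = int(math.ceil(4.0 * math.log(1.0 / delta))) + 2   # initial pulls per arm
    rewards = {i: [pull(i) for _ in range(forced)] for i in arms}

    def variance_ucb(i):
        n = len(rewards[i]) - 1                       # N - 1 in the text
        var_hat = float(np.var(rewards[i], ddof=1))   # empirical reward variance
        return var_hat / (1.0 - 2.0 * math.sqrt(math.log(1.0 / delta) / n))

    for _ in range(max(0, stage_budget - forced * len(arms))):
        j = max(arms, key=lambda i: variance_ucb(i) / len(rewards[i]))
        rewards[j].append(pull(j))
    return {i: float(np.mean(rewards[i])) for i in arms}

As the number of pulls grows, the correction in the denominator vanishes and the rule approaches the known-variance allocation.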
§ ANALYSIS
This section comprises three analyses. In <ref>, we bound the probability that , an algorithm that knows reward variances, misidentifies the best arm. In <ref>, we provide an alternative analysis that does not rely on the closed form in (<ref>). Finally, in <ref>, we bound the probability that , an algorithm that learns reward variances, misidentifies the best arm.
All analyses are under the assumption of Gaussian reward noise. Specifically, the reward of arm i in round t of stage s is distributed as Y_s, t, i∼(μ_i, σ_i^2).
§.§ Error Bound of
We start with analyzing , which is a stepping stone for analyzing . To simplify the proof, we assume that both m and n_s are integers. We also assume that all budget allocations have integral solutions in <ref>.
misidentifies the best arm with probability
Î≠ 1≤ 2 log_2 K exp[- n Δ_min^2/4 log_2 K ∑_j ∈σ_j^2] ,
where Δ_min = μ_1 - μ_2 is the minimum gap.
The claim is proved in <ref>. We follow the outline in <cit.>. The novelty is in extending the proof to heterogeneous reward variances. This requires a non-uniform budget allocation, where arms with higher reward variances are pulled more (<ref>).
The bound in <ref> depends on all quantities as expected. It decreases as budget n and minimum gap Δ_min increase, and the number of arms K and variances σ_j^2 decrease. reduces to in <cit.> when σ_i^2 = 1 / 4 for all arms i ∈. The bounds of and become comparable when we apply H_2 ≤ K / Δ_min^2 in (<ref>) and note that ∑_j ∈σ_j^2 = K / 4 in <ref>. The extra factor of 8 in the exponent of (<ref>) is due to a different proof, which yields a finer dependence on gaps.
§.§ Alternative Error Bound of
Now we analyze differently. The resulting bound is weaker than that in <ref> but its proof can be easily extended to .
misidentifies the best arm with probability
Î≠ 1≤ 2 log_2 K exp[- (n - K log K) Δ_min^2/4 σ_max^2 K log_2 K] ,
where Δ_min = μ_1 - μ_2 is the minimum gap and σ_max^2 = max_i ∈σ_i^2 is the maximum reward variance.
The claim is proved in <ref>. The key idea in the proof is to derive a lower bound on the number of pulls of any arm i in stage s, instead of using the closed form of N_s, i in (<ref>). The lower bound is
N_s, i≥σ_i^2/σ_max^2(n_s/_s - 1) .
An important property of the bound is that it is Ω(σ_i^2 n_s), similarly to N_s, i in (<ref>). Therefore, the rest of the proof is similar to that of <ref>.
As in <ref>, the bound in <ref> depends on all quantities as expected. It decreases as budget n and minimum gap Δ_min increase, and the number of arms K and maximum variance σ_max^2 decrease. The bound approaches that in <ref> when all reward variances are identical.
§.§ Error Bound of
Now we analyze .
Suppose that δ < 1 / (K n) and
n ≥
K log_2 K (4 log(K n / δ) + 1) .
Then misidentifies the best arm with probability
Î≠ 1≤ 2 log_2 K exp[- α(n - K log K) Δ_min^2/4 σ_max^2 K log_2 K] ,
where Δ_min and σ_max^2 are defined in <ref>, and
α
= 1 - 2 √(log(K n / δ)/n / K - 2)/1 + 2 √(log(K n / δ)/n / K - 2) +
2 log(K n / δ)/n / K - 2 .
The claim is proved in <ref>. The key idea in the proof is to derive a lower bound on the number of pulls of any arm i in stage s, similarly to that in <ref>. The lower bound is
N_s, i≥σ_i^2/σ_max^2α(_s, n_s, δ)
(n_s/_s - 1)
and holds with probability at least 1 - δ. Since the bound is Ω(σ_i^2 n_s), as in the proof of <ref>, the rest of the proof is similar. The main difference from <ref> is in factor α(_s, n_s, δ), which converges to 1 as n_s →∞.
The bound in <ref> depends on all quantities as expected. It decreases as budget n and minimum gap Δ_min increase, and the number of arms K and maximum variance σ_max^2 decrease. As n →∞, we get α→ 1 and the bound converges to that in <ref>.
§ EXPERIMENTS
In this section, we empirically evaluate our proposed algorithms, and , and compare them to algorithms from prior works. We choose the following baselines: uniform allocation (), sequential halving () <cit.>, gap-based exploration () <cit.>, gap-based exploration with variance () <cit.>, and variance-based rejects () <cit.>.
allocates equal budget to all arms and was originally proposed for homogeneous reward variances. Neither nor can adapt to heterogeneous reward variances. , and are variance-adaptive BAI methods from related works (<ref>). In , we use H from Theorem 1 of <cit.>. In , we use H from Theorem 2 of <cit.>. Both and assume bounded reward distributions with support [0, b]. We choose b = max_i ∈μ_i + σ_i √(log n), since this is a high-probability upper bound on the absolute value of n independent observations from (μ_i, σ_i^2). In , we set δ = 0.05, and thus our upper bounds on reward variances hold with probability 0.95. In , γ = 1.96, which means that the mean arm rewards lie between their upper and lower bounds with probability 0.95. <cit.> showed that performs well with Gaussian noise when γ≈ 2. All reported results are averaged over 5 000 runs.
and have O(exp[- c n / H]) error bounds on the probability of misidentifying the best arm, where n is the budget, H is the complexity parameter, and c = 1 / 144 for and c = 1 / 512 for . Our error bounds are O(exp[- c' n / H']), where H' is a comparable complexity parameter and c' = 1 / (4 log_2 K). Even for moderate K, c ≪ c'. Therefore, when and are implemented as analyzed, they provide stronger guarantees on identifying the best arm than and . To make the algorithms comparable, we set H of and to H c / c', by increasing their confidence widths. Since H is an input to both and , note that they have an advantage over our algorithms that do not require it.
§.§ Synthetic Experiments
Our first experiment is on a Gaussian bandit with K arms. The mean reward of arm i is μ_i = 1 - √((i - 1) / K). We choose this setting because is known to perform well in it. Specifically, note that the complexity parameter H_2 in (<ref>) is minimized when i / Δ_i^2 are equal for all i ∈∖1. For our μ_i, Δ_i^2 = (i - 1) / K ≈ i / K and thus i / Δ_i^2 ≈ K. We set the reward variance as σ^2_i = 0.9 μ^2_i + 0.1 when arm i is even and σ^2_i = 0.1 when arm i is odd. We additionally perturb μ_i and σ_i^2 with additive (0, 0.05^2) and multiplicative Unif(0.5, 1.5) noise, respectively. We visualize the mean rewards μ_i and the corresponding variances σ^2_i, for K = 64 arms, in <ref>. The variances are chosen so that every stage of sequential halving involves both high-variance and low-variance arms. Therefore, an algorithm that adapts its budget allocation to the reward variances of the remaining arms eliminates the best arm with a lower probability than the algorithm that does not.
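The instance can be generated, for example, as follows (an illustrative sketch; the handling of the random seed and the order in which the perturbations are applied are our own choices):

import numpy as np

def make_instance(K, rng):
    """Gaussian bandit with decaying means and alternating low/high variances."""
    idx = np.arange(1, K + 1)
    mu = 1.0 - np.sqrt((idx - 1) / K)
    sigma2 = np.where(idx % 2 == 0, 0.9 * mu ** 2 + 0.1, 0.1)
    mu = mu + rng.normal(0.0, 0.05, size=K)           # additive N(0, 0.05^2) noise
    sigma2 = sigma2 * rng.uniform(0.5, 1.5, size=K)   # multiplicative Unif(0.5, 1.5) noise
    pull = lambda arm: rng.normal(mu[arm], np.sqrt(sigma2[arm]))
    return mu, sigma2, pull

rng = np.random.default_rng(0)
mu, sigma2, pull = make_instance(64, rng)

The returned pull function can be passed directly to the elimination routines sketched earlier.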
In <ref>, we report the probability of misidentifying the best arm among K = 64 arms (<ref>) as budget n increases. As expected, the naive algorithm performs the worst. and perform only slightly better. When the algorithms have comparable error guarantees to and , their confidence intervals are too wide to be practical. performs surprisingly well. As observed by <cit.> and confirmed by <cit.>, is a superior algorithm in the fixed-budget setting because it aggressively eliminates a half of the remaining arms in each stage. Therefore, it outperforms and . We note that outperforms all algorithms for all budgets n. For smaller budgets, outperforms . However, as the budget n increases, outperforms ; and without any additional information about the problem instance approaches the performance of , which knows the reward variances. This shows that our variance upper bounds improve quickly with larger budgets, as is expected based on the algebraic form in (<ref>).
In the next experiment, we take the same Gaussian bandit as in <ref>. The budget is fixed at n = 5 000 and we vary the number of arms K from 32 to 64. In <ref>, we show the probability of misidentifying the best arm as the number of arms K increases. We observe two major trends. First, the relative order of the algorithms, as measured by their probability of a mistake, is similar to <ref>. Second, all algorithms get worse as the number of arms K increases because the problem instance becomes harder. This experiment shows that and can perform well for a wide range of K: they have the lowest probabilities of a mistake for all K. While the other algorithms perform well at K = 32, where their probability of a mistake is around 0.05 or below, they perform poorly at K = 64, where their probability of a mistake is above 0.1.
§.§ MovieLens Experiments
Our next experiment is motivated by the A/B testing problem in <ref>. The objective is to identify the movie with the highest mean rating from a pool of K movies, where movies are arms and their ratings are rewards. The movies, users, and ratings are simulated using the MovieLens 1M dataset <cit.>. This dataset contains one million ratings given by 6 040 users to 3 952 movies. We complete the missing ratings using low-rank matrix factorization with rank 5, which is done using alternating least squares <cit.>. The result is a 6 040 × 3 952 matrix M, where M_i, j is the estimated rating given by user i to movie j.
This experiment is averaged over 5 000 runs. In each run, we randomly choose new movies according to the following procedure. For all arms i ∈, we generate mean μ̃_i and variance σ̃_i^2 as described in <ref>. Then, for each i, we find the closest movie in the MovieLens dataset with mean μ_i and variance σ_i^2, the movie that minimizes the distance (μ_i - μ̃_i)^2 + (σ^2_i - σ̃_i^2)^2. The means and variances of movie ratings from two runs are shown in <ref>. As in <ref>, the movies are selected so that sequential elimination with halving is expected to perform well. The variance of movie ratings in <ref> is intrinsic to our domain: movies are often made for specific audiences and thus can have a huge variance in their ratings. For instance, a child may not like a horror movie, while a horror enthusiast would enjoy it. Because of this, an algorithm that adapts its budget allocation to the rating variances of the remaining movies can perform better. The last notable difference from <ref> is that movie ratings are realistic. In particular, when arm i is pulled, we choose a random user j and return M_j, i as its stochastic reward. Therefore, this experiment showcases the robustness of our algorithms beyond Gaussian noise.
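Assuming the completed rating matrix M is available (the rank-5 alternating least squares step is treated as preprocessing here), the simulation amounts to the following sketch; the class and function names are ours, and the de-duplication of selected movies is a simplification of the matching procedure described above.

import numpy as np

class MovieBandit:
    """Arms are movie columns of the completed rating matrix M; pulling arm i
    returns the rating of a uniformly random user for that movie."""

    def __init__(self, M, movie_ids, rng):
        self.M = M                    # completed (num_users x num_movies) matrix
        self.movie_ids = movie_ids    # the K selected movie columns
        self.rng = rng

    def pull(self, arm):
        user = self.rng.integers(self.M.shape[0])
        return self.M[user, self.movie_ids[arm]]

def pick_movies(M, target_means, target_variances):
    """Match each target (mean, variance) pair to the closest movie."""
    means = M.mean(axis=0)
    variances = M.var(axis=0)
    chosen = []
    for mu_t, s2_t in zip(target_means, target_variances):
        dist = (means - mu_t) ** 2 + (variances - s2_t) ** 2
        for c in chosen:              # do not select the same movie twice
            dist[c] = np.inf
        chosen.append(int(np.argmin(dist)))
    return chosen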
In <ref>, we report the probability of misidentifying the best movie from K = 64 as budget n increases. and perform the best for most budgets, although the reward distributions are not Gaussian. The relative performance of the algorithms is similar to <ref>: is the worst, and and improve upon it. The only exception is : it performs poorly for smaller budgets, and on par with and for larger budgets.
We increase the number of movies next. In <ref>, we report the probability of misidentifying the best movie from K = 128 as budget n increases. The trends are similar to K = 64, except that performs poorly for all budgets. This is because has K stages and eliminates one arm per stage even when the number of observations is small. In comparison, our algorithms have log_2 K stages.
§ RELATED WORK
Best-arm identification has been studied extensively in both fixed-budget <cit.> and fixed-confidence <cit.> settings. The two closest prior works are <cit.> and <cit.>, both of which studied fixed-budget BAI with heterogeneous reward variances. All other works on BAI with heterogeneous reward variances are in the fixed-confidence setting <cit.>.
The first work on variance-adaptive BAI was in the fixed-budget setting <cit.>. This paper proposed algorithm and showed that its probability of mistake decreases exponentially with budget n. Our error bounds are comparable to <cit.>. The main shortcoming of the analyses in <cit.> is that they assume that the complexity parameter is known and used by . Since the complexity parameter depends on unknown gaps and reward variances, it is typically unknown in practice. To address this issue, <cit.> introduced an adaptive variant of , , where the complexity parameter is estimated. This algorithm does not come with any guarantee.
The only other work that studied variance-adaptive fixed-budget BAI is <cit.>. This paper proposed and analyzed a variant of successive rejects algorithm <cit.>. Since of <cit.> has a comparable error bound to successive rejects of <cit.>, our variance-adaptive sequential halving algorithms have comparable error bounds to variance-adaptive successive rejects of <cit.>. Roughly speaking, all bounds can be stated as exp[- n / H], where H is a complexity parameter that depends on the number of arms K, their variances, and their gaps.
We propose variance-adaptive sequential halving for fixed-budget BAI. Our algorithms have state-of-the-art performance in our experiments (<ref>). They are conceptually simpler than prior works <cit.> and can be implemented as analyzed, unlike <cit.>.
§ CONCLUSIONS
We study best-arm identification in the fixed-budget setting where the reward variances vary across the arms. We propose two variance-adaptive elimination algorithms for this problem: for known reward variances and for unknown reward variances. Both algorithms proceed in stages and pull arms with higher reward variances more often than those with lower variances. While the design and analysis of are of interest, they are a stepping stone for , which adapts to unknown reward variances. The novelty in is in solving an optimal design problem with unknown observation variances. Its analysis relies on a novel lower bound on the number of arm pulls in BAI that does not require closed-form solutions to the budget allocation problem. Our numerical simulations show that and are not only theoretically sound, but also competitive with state-of-the-art baselines.
Our work leaves open several questions of interest. First, the design of is for Gaussian reward noise. The reason for this choice is that our initial experiments showed quick concentration and also robustness to noise misspecification. Concentration of general random variables with unknown variances can be analyzed using empirical Bernstein bounds <cit.>. This approach was taken by <cit.> and could also be applied in our setting. For now, to address the issue of Gaussian noise, we experiment with non-Gaussian noise in <ref>. Second, while our error bounds depend on all parameters of interest as expected, we do not provide a matching lower bound. When the reward variances are known, we believe that a lower bound can be proved by building on the work of <cit.>. Finally, our algorithms are not contextual, which limits their application because many bandit problems are contextual <cit.>.
§ PROOF OF THEOREM <REF>
First, we decompose the probability of choosing a suboptimal arm. For any s ∈ [m], let E_s = 1 ∈_s + 1 be the event that the best arm is not eliminated in stage s and E̅_s be its complement. Then by the law of total probability,
Î≠ 1
= E̅_m
= ∑_s = 1^m E̅_s, E_s - 1…, E_1≤∑_s = 1^m E̅_sE_s - 1…, E_1 .
We bound E̅_sE_s - 1…, E_1 based on the observation that the best arm can be eliminated only if the estimated mean rewards of at least a half of the arms in _s are at least as high as that of the best arm. Specifically, let _s' = _s ∖1 be the set of all arms in stage s but the best arm and
N_s'
= ∑_i ∈_s'μ̂_s, i≥μ̂_s, 1 .
Then by the Markov's inequality,
E̅_sE_s - 1…, E_1≤N_s' ≥n_s/2E_s - 1…, E_1≤2 N_s'E_s - 1…, E_1/n_s .
The key step in bounding the above expectation is understanding the probability that any arm has a higher estimated mean reward than the best one. We bound this probability next.
For any stage s ∈ [m] with the best arm, 1 ∈_s, and any suboptimal arm i ∈_s, we have
μ̂_s, i≥μ̂_s, 1≤exp[- n_s Δ_i^2/4 ∑_j ∈_sσ_j^2] .
The proof is based on concentration inequalities for sub-Gaussian random variables <cit.>. In particular, since μ̂_s, i - μ_i and μ̂_s, 1 - μ_1 are sub-Gaussian with variance proxies σ_i^2 / N_s, i and σ_1^2 / N_s, 1, respectively; their difference is sub-Gaussian with a variance proxy σ_i^2 / N_s, i + σ_1^2 / N_s, 1. It follows that
μ̂_s, i≥μ̂_s, 1 = μ̂_s, i - μ̂_s, 1≥ 0
= (μ̂_s, i - μ_i) - (μ̂_s, 1 - μ_1) > Δ_i
≤exp[- Δ_i^2/2 (σ_i^2/N_s, i + σ_1^2/N_s, 1)]
= exp[- n_s Δ_i^2/4 ∑_j ∈_sσ_j^2] ,
where the last step follows from the definitions of N_s, i and N_s, 1 in <ref>.
The last major step is bounding N_s'E_s - 1…, E_1 with the help of <ref>. Starting with the union bound, we get
N_s'E_s - 1…, E_1 ≤∑_i ∈_s'μ̂_s, i≥μ̂_s, 1≤∑_i ∈_s'exp[- n_s Δ_i^2/4 ∑_j ∈_sσ_j^2]
≤ n_s max_i ∈_s'exp[- n_s Δ_i^2/4 ∑_j ∈_sσ_j^2]
= n_s exp[- n_s min_i ∈_s'Δ_i^2/4 ∑_j ∈_sσ_j^2] .
Now we chain all inequalities and get
Î≠ 1≤ 2 ∑_s = 1^m exp[- n_s min_i ∈_s'Δ_i^2/4 ∑_j ∈_sσ_j^2] .
To get the final claim, we use that
m
= log_2 K ,
n_s
= n/log_2 K , min_i ∈_s'Δ_i^2
≥Δ_min^2 , ∑_j ∈_sσ_j^2
≤∑_j ∈σ_j^2 .
This concludes the proof.
§ PROOF OF THEOREM <REF>
This proof has the same steps as that in <ref>. The only difference is that N_s, i and N_s, 1 in <ref> are replaced with their lower bounds, based on the following lemma.
Fix stage s and arm i ∈_s in . Then
N_s, i≥σ_i^2/σ_max^2(n_s/_s - 1) ,
where σ_max = max_i ∈σ_i is the maximum reward noise and n_s is the budget in stage s.
Let J be the most pulled arm in stage s and ℓ∈ [n_s] be the round where arm J is pulled the last time. By the design of , since arm J is pulled in round ℓ,
σ_J^2/N_s, ℓ, J≥σ_i^2/N_s, ℓ, i
holds for any arm i ∈_s. This can be further rearranged as
N_s, ℓ, i≥σ_i^2/σ_J^2 N_s, ℓ, J .
Since arm J is the most pulled arm in stage s and ℓ is the round of its last pull,
N_s, ℓ, J
= N_s, J - 1
≥n_s/_s - 1 .
Moreover, N_s, i≥ N_s, ℓ, i. Now we combine all inequalities and get
N_s, i≥σ_i^2/σ_J^2(n_s/_s - 1) .
To eliminate dependence on random J, we use σ_J ≤σ_max. This concludes the proof.
When plugged into <ref>, we get
μ̂_s, i≥μ̂_s, 1≤exp[- Δ_i^2/2 (σ_i^2/N_s, i + σ_1^2/N_s, 1)]
≤exp[- (n_s/_s - 1) Δ_i^2/4 σ_max^2] .
This completes the proof.
§ PROOF OF THEOREM <REF>
This proof has the same steps as that in <ref>. The main difference is that N_s, i and N_s, 1 in <ref> are replaced with their lower bounds, based on the following lemma.
Fix stage s and arm i ∈_s in . Then
N_s, i≥σ_i^2/σ_max^2α(_s, n_s, δ)
(n_s/_s - 1) ,
where σ_max = max_i ∈σ_i is the maximum reward noise, n_s is the budget in stage s, and
α(k, n, δ)
= 1 - 2 √(log(1 / δ)/n / k - 2)/1 + 2 √(log(1 / δ)/n / k - 2) +
2 log(1 / δ)/n / k - 2
is an arm-independent constant.
Let J be the most pulled arm in stage s and ℓ∈ [n_s] be the round where arm J is pulled the last time. By the design of , since arm J is pulled in round ℓ,
U_s, ℓ, J/N_s, ℓ, J≥U_s, ℓ, i/N_s, ℓ, i
holds for any arm i ∈_s. Analogously to (<ref>), this inequality can be rearranged and loosened as
N_s, i≥U_s, ℓ, i/U_s, ℓ, J(n_s/_s - 1) .
We bound U_s, ℓ, i from below using the fact that U_s, ℓ, i≥σ_i^2 holds with probability at least 1 - δ, based on the first claim in <ref>. To bound U_s, ℓ, J, we apply the second claim in <ref> to bound σ̂_s, ℓ, J^2 in U_s, ℓ, J, and get that
U_s, ℓ, J≤σ_J^2 1 + 2 √(log(1 / δ)/N_s, ℓ, J - 1) +
2 log(1 / δ)/N_s, ℓ, J - 1/1 - 2 √(log(1 / δ)/N_s, ℓ, J - 1)
holds with probability at least 1 - δ. Finally, we plug both bounds into (<ref>) and get
N_s, i≥σ_i^2/σ_J^21 - 2 √(log(1 / δ)/N_s, ℓ, J - 1)/1 + 2 √(log(1 / δ)/N_s, ℓ, J - 1) +
2 log(1 / δ)/N_s, ℓ, J - 1(n_s/_s - 1) .
To eliminate dependence on random J, we use that σ_J ≤σ_max and N_s, ℓ, J≥ n_s / _s - 1. This yields our claim and concludes the proof of <ref>.
Similarly to <ref>, this bound is asymptotically tight when all reward variances are identical. Also α(_s, n_s, δ) → 1 as n_s →∞. Therefore, the bound has the same shape as that in <ref>.
The application of <ref> requires more care. Specifically, it relies on high-probability confidence intervals derived in <ref>, which need N_s, t, i > 4 log(1 / δ) + 1. This is guaranteed whenever n ≥ K log_2 K (4 log(1 / δ) + 1). Moreover, since the confidence intervals need to hold in any stage s and round t, and for any arm i, we need a union bound over K n events. This leads to the following claim.
Suppose that n ≥ K log_2 K (4 log(1 / δ) + 1). Then, when <ref> is plugged into <ref>, we get that
μ̂_s, i≥μ̂_s, 1≤exp[- Δ_i^2/2 (σ_i^2/N_s, i + σ_1^2/N_s, 1)]
≤exp[- α(_s, n_s, K n δ)
(n_s/_s - 1) Δ_i^2/4 σ_max^2] .
This completes the proof.
|
http://arxiv.org/abs/2306.10442v1
|
20230618001223
|
Partial data inverse problem for hyperbolic equation with time-dependent damping coefficient and potential
|
[
"Boya Liu",
"Teemu Saksala",
"Lili Yan"
] |
math.AP
|
[
"math.AP",
"35R30, 35L05, 58J45"
] |
Partial data inverse problem for hyperbolic equation with time-dependent damping coefficient and potential
Boya Liu, Department of Mathematics, North Carolina State University, Raleigh, NC 27695 ([email protected])
Teemu Saksala, Department of Mathematics, North Carolina State University, Raleigh, NC 27695 ([email protected])
Lili Yan, School of Mathematics, University of Minnesota, Minneapolis, MN 55455 ([email protected])
We study an inverse problem of determining time-dependent damping coefficient and potential appearing in the wave equation in a compact Riemannian manifold of dimension three or higher. More specifically, we are concerned with the case of conformally transversally anisotropic manifolds, or in other words, compact Riemannian manifolds with boundary conformally embedded in a product of the Euclidean line and a transversal manifold. With an additional assumption of the attenuated geodesic ray transform being injective on the transversal manifold, we prove that the knowledge of a certain partial Cauchy data set determines time-dependent damping coefficient and potential uniquely.
July 31, 2023
=================
§ INTRODUCTION AND STATEMENT OF RESULTS
This paper is devoted to an inverse problem for a hyperbolic initial boundary value problem, with the aim of determining lower order time-dependent perturbations, namely a scalar-valued damping coefficient and a potential, of a Riemannian wave operator from a set of partial Cauchy data. As introduced in <cit.>, from the physical point of view, this inverse problem is concerned with determining properties such as the time-evolving damping force and the density of an inhomogeneous medium by probing the medium with disturbances generated on the lateral boundary and at the initial time, and by measuring the response at the end of the experiment as well as on some part of the lateral boundary.
To state the inverse problem considered in this paper, let (M, g) be a smooth, compact, oriented Riemannian manifold of dimension n ≥ 3 with smooth boundary M. We denote Q =(0,T)× M^int with 0<T<∞, Q the closure of Q, and Σ = the lateral boundary of Q. Recall that the Laplace-Beltrami operator Δ_g of the metric g acts on C^2-smooth functions according to the following expression in local coordinates x_1,…,x_n of the manifold M
Δ_g v(x)=|g(x)|^-1/2∂_x^j(g^jk(x)|g(x)|^1/2∂_x^kv(x)), x ∈ M.
Here |g| and g^jk denote the absolute value of the determinant and the inverse of g_jk, respectively, and the repeated indices j and k are summed over.
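As a simple illustration, if M is a compact domain of Euclidean space equipped with the Euclidean metric, then in Cartesian coordinates |g| ≡ 1 and g^jk = δ^jk, and the expression above reduces to the ordinary Laplacian:

\[
  \Delta_g v(x) \;=\; \sum_{j=1}^{n} \partial_{x^j}^{2}\, v(x).
\]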
For a given smooth and strictly positive function c(x) on M, we consider the wave operator
_c,g=c(x)^-1_t^2-Δ_g,
whose coefficients are time-independent. Our goal is to study an inverse problem for the following linear hyperbolic partial differential operator
ℒ_c,g,a,q=_c,g+a(t,x)_t+q(t, x), (t,x)∈ Q,
with time-dependent lower order coefficients a∈ W^1,∞(Q) (the damping coefficient) and q∈ C(Q) (the potential).
Our first geometric assumption is the following:
A Riemannian manifold (M,g) of dimension n≥ 3 with boundary M is called conformally transversally anisotropic (CTA) if M is a compact subset of a manifold × M_0^int and g= c(e ⊕ g_0), where (,e) is the real line, (M_0,g_0) is a smooth compact (n-1)-dimensional Riemannian manifold with smooth boundary, called the transversal manifold, and c∈ C^∞(× M_0) is a strictly positive function.
Examples of CTA manifolds include precompact smooth proper subsets of Euclidean, spherical, and hyperbolic spaces. We refer readers to <cit.> for more examples. Due to the global product structure of M, we can write every point x∈ M in the form x=(x_1,x'), where x_1 ∈ and x'∈ M_0. In particular, the projection φ(x) = x_1 is a limiting Carleman weight. It was established in <cit.> that the existence of a limiting Carleman weight is equivalent to the existence of a parallel unit vector field for a conformal multiple of the metric. Locally, the latter condition is equivalent to the fact that the manifold (M,g) is conformal to the product of an interval and some Riemannian manifold (M_0,g_0) of one dimension less.
The limiting Carleman weight φ gives a canonical way to define the front and back faces of M and Q. Let ν be the outward unit normal vector to with respect to the metric g. We denote M_± = {x ∈ M: ±_νφ(x) ≥ 0} and Σ_±=(0, T)× M_±^int.
Then we define U=(0, T)× U' and V=(0,T)× V', where U',V'⊂ M are open neighborhoods of M_+, M_-, respectively.
The goal of this paper is to show that time-dependent damping coefficient a(t,x) and potential q(t,x), appearing in (<ref>), can be uniquely determined from the following partial Cauchy data
𝒞_g, a,q = {(u|_Σ, u|_t=0, u|_t=T, _tu|_t=0,_ν u|_V): u∈ H^1(0,T;L^2(M)), ℒ_c,g, a,qu=0}.
We will define this data carefully in Section <ref>.
Notice that in addition to the data measured on the lateral boundary, the set of Cauchy data 𝒞_g, a,q also includes measurements made at the initial time t=0 and the end time t=T. Indeed, it was established in <cit.> that the full lateral boundary data with vanishing initial conditions
𝒞_g,a,q^lat={(u|_Σ, _ν u|_Σ): u∈ H^1(0, T; L^2(M)), ℒ_g,a,qu=0, u|_t=0=_tu|_t=0=0}
determines time-independent damping coefficients and potentials uniquely for T>diam(M), where M is a bounded domain in ^n. However, due to domain of dependence arguments, as explained for instance in <cit.>, it is only possible to recover a general time-dependent coefficient on the optimal set
𝒟={(t, x)∈ Q: dist(x, M)<t<T- dist(x, M)}
from 𝒞_g,a,q^lat. Thus, even for large measurement time T>0, a global recovery of general time-dependent lower order coefficients of the hyperbolic operator (<ref>) needs some additional information
at the beginning {t=0} and at the end {t=T} of the measurement.
Unfortunately, the product structure of the ambient space × M_0 of the manifold (M,g) is not quite sufficient for the recovery method presented in this paper. We need to also assume that certain geodesic ray transforms on the transversal manifold (M_0,g_0) are injective. Such assumptions have been successfully implemented to solve many important inverse problems on CTA manifolds, see for instance <cit.> and the references therein.
Let us now recall some definitions related to geodesic ray transforms on Riemannian manifolds with boundary.
Geodesics of (M_0,g_0) can be parametrized (non-uniquely) by points on the unit sphere bundle SM_0 = {(x, ξ) ∈ TM_0: |ξ|=1}. Moreover, we use the notation
∂_± SM_0 = {(x, ξ) ∈ SM_0: x ∈∂ M_0, ±⟨ξ, ν(x) ⟩ > 0}
for the incoming (–) and outgoing (+) boundaries of SM_0, corresponding to the geodesics touching the boundary. Here ⟨·, ·⟩ is the Riemannian inner product of (M_0,g_0).
Let (x, ξ) ∈_-SM_0, and let γ = γ_x, ξ be a geodesic of M_0 with initial conditions γ(0) = x and γ̇ (0) = ξ. Then τ_exit(x, ξ)>0 stands for the first time when γ meets M_0 with the convention that τ_exit(x, ξ) = +∞ if γ stays in the interior of M_0. We say that a unit speed geodesic segment γ: [0, τ_exit(x, ξ)] → M_0, 0<τ_exit(x, ξ)<∞, is non-tangential if γ(0), γ(τ_exit(x, ξ)) ∈_0, γ̇(0) and γ̇(τ_exit(x, ξ)) are non-tangential vectors to M_0, and γ(τ)∈ M_0^int for all 0<τ<τ_exit(x, ξ).
In this paper, we shall reduce the determination of unknown time-dependent coefficients a(x,t) and q(x,t) from the set of partial Cauchy data (<ref>) to the invertibility of the attenuated geodesic ray transform on the transversal manifold (M_0,g_0). Given a smooth function α on M_0, the attenuated geodesic ray transform of a function f M_0 → is given by
I^α(f)(x, ξ) = ∫_0^τ_exit(x, ξ)exp[∫_0^tα(γ_x, ξ(s))ds] f(γ_x, ξ(t))dt, (x, ξ) ∈_-SM_0 ∖Γ_-,
where Γ_- = {(x, ξ) ∈_-SM_0: τ_exit(x, ξ)= +∞}. Our second geometric assumption is the following:
Assumption 1.
There exists ε>0 such that for each smooth function α on M_0 with α_L^∞(M_0)<ε, the respective attenuated geodesic ray transform I^α on (M_0, g_0) is injective over continuous functions f in the sense that if I^α(f)(x, ξ)=0 for all (x, ξ) ∈_-SM_0 ∖Γ_- such that γ_x, ξ is a non-tangential geodesic, then f=0 in M_0.
It was verified in <cit.> that simple manifolds always satisfy Assumption 1.
Here we say that a compact, simply connected Riemannian manifold with smooth boundary is simple if its boundary is strictly convex, and no geodesic has conjugate points. Also, injectivity of the geodesic ray transform (α=0) on simple manifolds is well-known, see <cit.>.
The main result of this paper is the following.
Let T>0. Suppose that (M, g) is a CTA manifold of dimension n ≥ 3 and that Assumption <ref> holds for the transversal manifold (M_0,g_0). Let a_i ∈ W^1,∞(Q) and q_i ∈ C(Q), i=1, 2. If a_1=a_2 and q_1=q_2 on ∂ Q,
then 𝒞_g, a_1, q_1 = 𝒞_g, a_2, q_2 implies that a_1=a_2 and q_1=q_2 in Q.
Theorem <ref> can be viewed as an extension of the uniqueness result in <cit.>, where only the potential was considered, to the case of recovering both damping coefficients and potentials from the set of partial Cauchy data 𝒞_g, a,q. From the perspective of geometric setting,
this paper extends <cit.> from the Euclidean space, as well as <cit.> from CTA manifolds with M_0 simple, to a larger class of CTA manifolds.
§.§ Previous literature
The recovery of coefficients appearing in hyperbolic equations from boundary measurements has attracted lots of attention in recent years. Results in this direction are generally divided into two categories with respect to time-independent and time-dependent coefficients.
Starting with seminal works <cit.>, there has been extensive literature related to the recovery of time-independent coefficients appearing in hyperbolic equations. We refer readers to <cit.> and references therein for some works in this direction. A powerful tool to prove uniqueness results for time-independent coefficients of hyperbolic equations, including the leading order coefficients, is the Boundary Control Method, which was developed in <cit.>, as well as a time-sharp unique continuation theorem proved in <cit.>. We refer readers to <cit.> for an introduction to the method and <cit.> for reviews. However, it was discovered in <cit.> that the unique continuation theorem analogous to <cit.> may fail when the dependence of coefficients on time is not analytic, which means that the Boundary Control Method is not well suited to recover time-dependent coefficients in general.
Aside from the Boundary Control Method, the approach of geometric optics (GO) solutions is also widely utilized to recover time-independent coefficients of hyperbolic equations. Using this approach, the unique recovery of time-independent potential q (with a=0) from full lateral boundary Dirichlet-to-Neumann map was established in <cit.>, and <cit.> extended this result to recovery of time-independent damping coefficients using the same boundary data. A uniqueness result from partial boundary measurements was considered in <cit.>.
The GO solution approach has also been used to obtain stronger stability results <cit.> than the Boundary Control Method <cit.>, but it gives less sharp uniqueness results from the perspective of geometric assumptions than the latter.
Turning attention to the time-dependent category, most of the results in this direction rely on the use of GO solutions. This approach was first implemented in the context of determining time-dependent coefficients of hyperbolic equations from the knowledge of scattering data by using properties of the light-ray transform <cit.>. Recovery of a time-dependent potential q from the full lateral boundary data 𝒞_q^lat on the infinite cylinder ×Ω, where Ω is a domain in ^n, was established in <cit.>. On a finite cylinder (0, T)×Ω with T>diam(Ω), it was proved in <cit.> that 𝒞_q^lat determines q uniquely in the optimal set 𝒟 of (0, T)×Ω.
Uniqueness and stability results for determining a general time-dependent potential q from partial data were established in <cit.> and <cit.>, respectively.
Going beyond the Euclidean space, uniqueness results for the time-dependent potential q from both full and partial boundary measurements were established in <cit.> on a CTA manifold (M, g), with a simple transversal manifold M_0, by using the GO solution approach. For more general manifolds, recently it was proved in <cit.> that the set of full Cauchy data uniquely determines the potential q in Lorentzian manifolds satisfying certain two-sided curvature bounds and some other geometric assumptions, and this curvature bound was weakened in <cit.> near Minkowski geometry.
In particular, the proof of <cit.> is based on a new optimal unique continuation theorem and can be viewed as a generalization of the Boundary Control Method to the cases without real analyticity assumptions.
There is also some literature related to determining time-dependent first-order perturbations appearing in hyperbolic equations from lateral boundary data analogous to (<ref>). In the Euclidean setting, <cit.> extended the result of <cit.> to a unique determination of time-dependent damping coefficients and potentials from 𝒞_g,a,q. When a vector field perturbation appears in the wave equation, similar to elliptic operators such as the magnetic Schrödinger operator, one can only recover the first order perturbation up to a gauge invariance, i.e., up to the differential of a test function in Q; see <cit.> for a uniqueness result when the dependence of the coefficients on time is real-analytic, an analyticity assumption that was later removed in <cit.>. Logarithmic type stability estimates for the vector field perturbation as well as the potential were proved in <cit.>. A uniqueness result analogous to <cit.> from a partial Dirichlet-to-Neumann map was obtained in <cit.>. In the non-Euclidean setting, it is proved in <cit.> that the hyperbolic Dirichlet-to-Neumann map determines the first order and the zeroth order perturbations up to a gauge invariance on a certain non-optimal subset of Q by inverting the light-ray transform of the Lorentzian metric -dt^2+g(x) for one-forms and functions.
Aside from time-dependent perturbations, it is also possible to recover a time-dependent metric g from boundary measurements. Under mild geometric assumptions on a Lorentzian manifold (M, g), it was very recently established in <cit.> that the Dirichlet-to-Neumann map associated with the wave operator _g determines the topology and differentiable structure of the space-time cylinder, as well as the conformal type of the metric. Furthermore, under more stringent geometric conditions, the metric can be recovered up to an isometry.
Finally, we would like to emphasize that, to the best of our knowledge, the global recovery of a full first order time-dependent perturbation (a one-form and a potential function in Q) of the Riemannian wave operator from partial Cauchy data, and the optimal recovery of these coefficients from the respective hyperbolic Dirichlet-to-Neumann map, remain important open problems.
§.§ Outline for the proof of Theorem <ref>
The first main ingredient of the proof is the construction of exponentially growing and decaying complex geometric optics (CGO) solutions to the equation ℒ_c ,g, a, qu=0, of the form
u(t, x)=e^± s(β t+φ(x))(v_s(t,x)+r_s(t,x)), (t, x)∈ Q.
Here s=1/h+iλ is a complex number, h∈ (0,1) is a semiclassical parameter, λ∈ and β∈ (1/√(3), 1) are some fixed numbers, v_s is a Gaussian beam quasimode, and r_s is a correction term that decays with respect to the parameter h. The function φ(x)=x_1 is a limiting Carleman weight on M.
We exploit the existence of the Carleman weight β t+x_1 on Q and derive necessary boundary and interior Carleman estimates, see Proposition <ref> and Lemma <ref>, respectively.
Lemma <ref> is needed to verify the existence of the correction term r_s in Proposition <ref>. On the other hand, since the transversal manifold (M_0,g_0) is not necessarily simple, the approach based on global GO solutions is not applicable in our proof. To mitigate this, in Proposition <ref> we construct Gaussian beam quasimodes for every non-tangential geodesic in the transversal manifold M_0 by using techniques developed in solving inverse problems for elliptic operators, see for instance <cit.>, followed by a concentration property for the quasimodes given in Proposition <ref>. The construction of CGO solutions is finalized in Proposition <ref>. In this part we need the regularity conditions imposed on the unknown time-dependent coefficients a and q.
The second main component in the proof is the integral identity (<ref>), whose derivation needs the equivalence of the partial Cauchy data. When the obtained CGO solutions are inserted in the integral identity, the boundary Carleman estimate of Proposition <ref> forces the right-hand side of (<ref>)
to vanish in the limit h→ 0. On the other hand, the concentration property given in Proposition <ref> yields that the left-hand side of (<ref>) converges to the attenuated geodesic ray transform of the Fourier transform (in two Euclidean variables) of the unknown coefficients in the transversal manifold (M_0,g_0). To carry out this reduction step we need the regularity and boundary conditions imposed on the unknown time-dependent coefficients a and q. To invert the attenuated geodesic ray transform we need Assumption <ref>. We first provide a proof for the uniqueness result for the damping coefficient a(t,x), followed by verifying the uniqueness for the potential q(t,x).
This paper is organized as follows. We begin by carefully defining the set of partial Cauchy data (<ref>) in Section <ref>. In Section <ref> we derive boundary Carleman estimates as well as interior Carleman estimates.
In Section <ref> we construct the CGO solutions to the hyperbolic equation ℒ_c,g,a,qu=0 based on Gaussian beam quasimodes in Q. Finally, the proof of Theorem <ref> is presented in Section <ref>.
§.§ Acknowledgements
We would like to express our gratitude to Katya Krupchyk and Lauri Oksanen for valuable discussions and suggestions. T.S. is partially supported by the National Science Foundation (DMS 2204997). L.Y. is partially supported by the National Science Foundation (DMS 2109199).
§ DEFINITION OF THE PARTIAL CAUCHY DATA
The goal of this short section is to recall some properties of the weak solutions to the initial boundary value problem
ℒ_c ,g, a, qu(t, x)=0 in Q,
u(0, x)=h_0(x), _tu(0, x)=h_1(x) in M,
u(t, x)=f(t, x) on Σ,
as introduced in <cit.>.
We define the space
H__c,g(Q)={u∈ H^1(0,T; L^2(M)): _c,g u=(c^-1_t^2-Δ_g)u∈ L^2(Q)},
equipped with the norm
u^2_H__c,g(Q)=u^2_H^1(0,T; L^2(M))+_c,gu^2_L^2(Q).
Our starting point is the following result, originally presented in <cit.>.
The space H__c,g is continuously embedded into the closure of C^∞(Q) in the space
K__c,g(Q)={ u∈ H^-1(0,T; L^2(M)): _c,g u∈ L^2(Q)}
equipped with the norm
u^2_K__c,g(Q)=u^2_H^-1(0,T; L^2(M))+_c,gu^2_L^2(Q).
For each w∈ C^∞(Q), we introduce two linear maps
ι_0w=(ι_0,1w,ι_0,2w,ι_0,3w)=(w|_Σ, w|_t=0, _t w|_t=0),
and
ι_1w=(ι_1,1w,ι_1,2w,ι_1,3w)=(_ν w|_Σ, w|_t=T, _t w|_t=T).
The maps ι_0 and ι_1 defined above can be extended continuously to
ι_0: H__c,g(Q)→ H^-3(0,T; H^-1/2( M))× H^-2(M)× H^-4(M)
and
ι_1: H__c,g(Q)→ H^-3(0,T; H^-3/2( M))× H^-2(M)× H^-4(M),
respectively.
Since the conformal factor c in _c,g is time-independent, the proof is a straightforward modification of the proof for <cit.>.
In the proof below we assume for notational simplicity that the conformal factor c=1; for a general c, the integration by parts formulas used in the argument need to be adjusted accordingly.
Since L^2(0,T;L^2(M)) = L^2(Q), we see that
H__c,g⊂H̃__c,g,
and the claim follows if we can prove the analogous claim for the superset H̃__c,g.
We shall follow the idea from <cit.> and only provide a detailed proof for the map ι_0. The analogous claim for ι_1 follows from similar arguments. Let us begin by proving the claim for ι_0,1. To this end, we recall that by <cit.>, the trace operator
u↦ (u|_ M, _ν u|_ M)
extends to a surjective continuous linear operator from H^2(M) to H^3/2( M)× H^1/2( M) and has a continuous right inverse, i.e., there exists a bounded continuous operator R: H^3/2( M)× H^1/2( M) → H^2(M) such that
R(g_1, g_2)|_ M=g_1, _ν R(g_1, g_2)|_ M=g_2, (g_1, g_2)∈ H^3/2( M)× H^1/2( M).
We now fix a function ϑ∈ H_0^3(0,T;H^1/2( M)) and set ϑ̃(t, ·)=R(0,ϑ(t, ·)). Then we have ϑ̃(t, ·)|_ M=0 and _νϑ̃(t, ·)|_ M=ϑ(t, ·). Since ϑ(t, ·)∈ H^1/2( M), we see that ϑ̃(t, ·)∈ H^2(M). Thus, it follows immediately that ϑ̃∈ H_0^3(0,T;H^2(M)) and
ϑ̃_H^3(0,T;H^2(M))≤Rϑ_H^3(0,T;H^1/2( M)).
Applying Green's formula twice and using ϑ̃∈ H_0^3(0,T;L^2(M)), we obtain
∫_ Q v _νϑ̃-ϑ̃_ν v dσ_gdt = ∫_Σ v ϑ dσ_gdt=∫_Q _c,gv ϑ̃- _c,gϑ̃vdV_gdt, v∈ C^∞(Q).
We observe that _c,gϑ̃∈ H^1_0(0,T;L^2(M)), thus we have
ι_0,1v, ϑ_H^-3(0,T;H^-1/2( M)), H_0^3(0,T;H^1/2( M))= _c,gv, ϑ̃_L^2(Q)-v, _c,gϑ̃_H^-1(0,T;L^2(M)), H_0^1(0,T;L^2(M)).
Hence, using (<ref>) and the Cauchy-Schwarz inequality, the following estimate holds for all v∈ C^∞(Q):
|ι_0,1v, ϑ| ≤_c,gv_L^2(Q)ϑ̃_L^2(Q)+v_H^-1(0,T;L^2(M))_c,gϑ̃_H_0^1(0,T;L^2(M))
≤_c,gv_L^2(Q)ϑ̃_H^3(0,T;H^2(M))+v_H^-1(0,T;L^2(M))ϑ̃_H^3(0,T;H^2(M))
≤ Cv_K__c,g(Q)ϑ_H^3(0,T;H^1/2( M)).
Due to Lemma <ref> the space C^∞(Q) is dense in H__c,g with respect to the norm ·_K__c,g(Q), and we conclude from the estimate above that ι_0,1: v↦ v|_Σ extends continuously to a bounded operator from H̃__c,g to H^-3(0,T;H^-1/2( M)).
Let us next consider the map ι_0,2: v↦ v|_t=0, v∈ C^∞(Q). To this end, let w∈ H_0^2(M) and define W(t,x)=tψ(t)w(x), where ψ∈ C_0^∞(-T, T/2) satisfies 0≤ψ≤ 1 and ψ=1 near t=0. Then we obtain
W|_Σ=_ν W|_Σ=W|_t=0=W|_t=T=_t W|_t=T=_c,gW|_t=0=_c,gW|_t=T=0, _t W|_t=0=w.
Therefore, we have _c,gW∈ H^1_0(0,T;L^2(M)). Applying Green's formula again and using _ν=-_t on the bottom surface {0}× M, we get
∫_ Q v _ν W -W _ν v dσ_gdt = ∫_M -v w dV_g = ∫_Q _c,g W v -_c,gv WdV_gdt, v∈ C^∞(Q).
thus we have
ι_0,2v, w_H^-2(M), H_0^2(M)= v, _c,g W_H^-1(0,T;L^2(M)), H_0^1(0,T;L^2(M))-_c,gv, W_L^2(Q).
Hence, by following similar arguments, we have
|ι_0,2v, w|
≤ Cv_K__c,g(Q)W_H^3(0,T;H^1/2( M)).
We then conclude from the estimate above that ι_0,2: v↦ v|_t=0 extends continuously to a bounded operator from H̃__c,g to H^-2(M).
Finally, we turn our attention to ι_0,3: v↦_tv|_t=0, v∈ C^∞(Q). To this end, let φ∈ H_0^4(M) and define Φ(t, x)=ψ(t)φ(x)+1/2t^2ψ(t)c(x)Δ_gφ(x), where ψ(t) is defined previously in the proof for ι_0,2. Then we obtain
Φ|_Σ=_νΦ|_Σ=_t Φ|_t=0=Φ|_t=T=_t Φ|_t=T=0, Φ|_t=0=φ.
Hence, we obtain _c,gΦ∈ H^1(0,T;L^2(M)). Furthermore, direct computations yield
_c,gΦ|_t=0=0, _c,gΦ|_t=T=0.
Therefore, we have _c,gΦ∈ H^1_0(0,T;L^2(M)). If v ∈ C^∞(Q̅) we get from Green's formula that
∫_M _tv(0,x) φ(x) dV_g
=
∫_ Q v _νΦ -Φ_ν v dσ_gdt
=
∫_Q _c,gv Φ - _c,gΦ vdV_gdt.
This implies that
ι_0,3v, φ_H^-4(M), H_0^4(M)= _c,gv, Φ_L^2(Q)-v, _c,gΦ_H^-1(0,T;L^2(M)), H_0^1(0,T;L^2(M)).
Therefore, we also have
|ι_0,3v, φ|
≤ Cv_K__c,g(Q)Φ_H^3(0,T;H^1/2( M)).
It then follows that ι_0,3: v↦_t v|_t=0 extends continuously to a bounded operator from H̃__c,g to H^-4(M).
Therefore, we have established that the map ι_0 extends continuously to
ι_0: H̃__c,g(Q)→ H^-3(0,T; H^-1/2( M))× H^-2(M)× H^-4(M)
We then define ι_0 on the set H__c,g(Q) as the restriction of this extension to H__c,g(Q). This completes the proof of Lemma <ref>.
We note that by the same argument as in <cit.>, the set
𝒥={u∈ H^1(0,T; L^2(M)): _c,g u=0}
is a closed vector subspace of H^1(0,T; L^2(M)), contained in H__c,g(Q). Finally, we record the range of the map ι_0
𝒦:={ι_0u: u∈ H__c,g(Q)}⊂ H^-3(0,T; H^-1/2( M))× H^-2(M)× H^-4(M).
By an analogous argument to the proof for <cit.>, we get the following result.
The linear map ι_0𝒥→𝒦 is a bijection.
By Lemma <ref>, the inverse function ι_0^-1: 𝒦→𝒥 exists, and we can use it to define a norm in 𝒦 via the formula
(f, h_0, h_1)_𝒦=ι_0^-1(f,h_0, h_1)_H^1(0,T; L^2(M)), (f, h_0, h_1)∈𝒦.
We would like to recall that we have defined M_± = {x ∈ M: ±_νφ(x) ≥ 0} and V=(0,T)× V',
where V'⊂ M is an open neighborhood of M_-. We are now ready to state and prove the existence and uniqueness of solutions to the initial boundary value problem (<ref>) with the datum (f,h_0,h_1)∈𝒦.
Let a∈ W^1,∞(Q) and q∈ C(Q). For each datum (f,h_0,h_1)∈𝒦, the initial boundary value problem (<ref>) has a unique weak solution u∈ H^1(0,T; L^2(M)) that satisfies
u_H^1(0,T;L^2(M))≤ C(f,h_0,h_1)_𝒦.
Furthermore, the boundary operator
ℬ_a,q:𝒦→ H^-3(0,T;H^-3/2(V'))× H^-2(M), ℬ_a,q(f,h_0,h_1)=(ι_1,1u|_V, ι_1,2u)
is bounded, and the partial Cauchy data set 𝒞_g,a,q, as in (<ref>), is the graph of the map ℬ_a,q.
The proof is a straightforward modification of the proof of <cit.>.
Since (a_t+q)ι_0^-1(f,h_0,h_1) is in L^2(Q), it follows from <cit.> that the initial boundary value problem (<ref>), for interior source F=(a_t+q)ι_0^-1(f,h_0,h_1) and ι_0u=0, has a unique solution u∈ C^1([0, T]; L^2(M))∩ C([0, T]; H_0^1(M)) satisfying
u_C^1([0,T];L^2(M))+u_C([0,T];H_0^1(M)) ≤ C(a_t+q)ι_0^-1(f,h_0,h_1)_L^2(Q)
≤ C(q_L^∞(Q)+a_L^∞(Q))ι_0^-1(f,h_0,h_1)_H^1(0,T;L^2(M))
Since ι_0^-1(f,h_0,h_1)∈𝒥, the function w=ι_0^-1(f,h_0,h_1)-u ∈ H^1(0,T;L^2(M)) is the unique solution of (<ref>) for F=0. Furthermore, the estimate (<ref>) follows from (<ref>) by definition.
We now prove that the boundary operator ℬ_a,q, as given in (<ref>), is bounded. To this end, we fix (f,h_0,h_1)∈𝒦 and let u∈ H^1(0,T;L^2(M)) be the solution to (<ref>). As _c,gu=-a_tu-qu∈ L^2(Q), the wave u is in H__c,g(Q) with ι_1,1u|_V∈ H^-3(0,T;H^-3/2(V')), and ι_1,2u ∈ H^-2(M). Furthermore, the following estimate holds due to the boundedness of the operator ι_1:
ι_1,1 u|_V^2+ι_1,2u^2 ≤ Cu^2_H__c,g(Q)
≤ C(u^2_H^1(0,T; L^2(M))+a_tu+qu^2_L^2(Q))
≤ C(a^2_L^∞(Q)+q^2_L^∞(Q))u^2_H^1(0,T; L^2(M)).
By combining this with (<ref>), we see that ℬ_a,q is a bounded operator. This completes the proof of Proposition <ref>.
§ CARLEMAN ESTIMATES
Our goal in this section is to prove a boundary Carleman estimate as well as an interior Carleman estimate for the operator ℒ_c,g, a,q conjugated by an exponential weight corresponding to a linear function β t+ x_1, where 0<β<1 is a constant. We shall utilize the boundary Carleman estimate to control boundary terms over subsets of the boundary Q where measurements are not accessible, and the interior Carleman estimates will be used to construct the remainder terms for both exponentially decaying and growing CGO solutions.
Let (M, g) be a CTA manifold as defined in Definition <ref>, and let g̃=e⊕ g_0. By the conformal properties of the Laplace-Beltrami operator, we have
c^n+2/4 (-Δ_g) (c^-(n-2)/4 u)= -Δ_g̃u- (c^n+2/4Δ_g(c^-(n-2)/4))u,
see <cit.>. Also, since c is independent of t, we get
c^n+2/4 a_t (c^-(n-2)/4 u)=ca _t u, and c^n+2/4_t^2 (c^-(n-2)/4 u)=c _t^2 u.
Thus, it follows from (<ref>) and (<ref>) that for the hyperbolic operator ℒ_c,g,a,q, we have
c^n+2/4∘ℒ_c,g,a,q∘ c^-(n-2)/4 = ℒ_g̃, ã, q̃,
where
ã=ca, q̃=c(q-c^n-2/4Δ_g(c^-(n-2)/4)).
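To indicate how the expressions for ã and q̃ arise (a routine computation, recorded only for the reader's convenience), one combines (<ref>) and (<ref>) to write
c^n+2/4ℒ_c,g,a,q(c^-(n-2)/4u) = ∂_t^2u - Δ_g̃u + ca∂_t u + (cq - c^n+2/4Δ_g(c^-(n-2)/4))u,
and then factors out c in the zeroth order coefficient, using c· c^n-2/4=c^n+2/4.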
Hence, by replacing the metric g and coefficients a, q with g̃,ã,q̃, respectively, we can assume that the conformal factor c=1. In this section we shall make use of this assumption and consider the leading order wave operator _e⊕ g_0=_t^2-Δ_e⊕ g_0. Let us denote ℒ_g,a,q the hyperbolic partial differential operator ℒ_c,g,a,q when c=1.
§.§ Boundary Carleman estimate
Due to the damping coefficient, we need to use a convexification argument similar to <cit.> to establish the needed boundary Carleman estimate. To elaborate, let us first introduce a new parameter ε>0 that is independent of h and to be determined later. For 0<h< ε < 1, we consider the perturbed weight
φ_± h, ε(t,x)= ±1/h(β t+x_1)-t^2/2ε.
Our first result in this section can be viewed as an extension of <cit.> from the Euclidean setting to that of Riemannian manifolds with dependence on a parameter β. Note that <cit.> is not directly applicable in our case since the parameter β is strictly less than 1.
Let a, q∈ L^∞(Q, ) and u∈ C^2(Q).
If u satisfies the conditions
u|_Σ=u|_t=0=_tu|_t=0=0,
then for all 0< h ≪ε≪ 1, we have
e^-φ_ h, εh^2ℒ_g, a, q (e^φ_ h, εu)^2_L^2(Q) +(4/β-β/2)h^3∇_gu(T,·)^2_L^2(M)+ 3β hu(T,·)^2_L^2(M)
≥ (3β^2-1) h^2/4εu^2_L^2(Q) + β h^3/4_tu(T,·)^2_L^2(M)+h^4/2ε(_tu^2_L^2(Q)+∇_gu^2_L^2(Q))
+h^3∫_Σν_1|_ν u|^2dS_gdt,
and
e^-φ_ -h, εh^2ℒ_g, a, q e^φ_-h, εu^2_L^2(Q)
+ 2(β+1)h^3(∇_gu(T,·)^2_L^2(M)
+ _t u(T,·)^2_L^2(M))
≥(3β^2-1) h^2/4εu^2_L^2(Q)
+h^4/2ε(_tu_L^2(Q)
+∇_gu^2_L^2(Q))
-h^3∫_Σν_1|_ν u|^2dS_gdt,
where φ_± h, ε is given by (<ref>), 1/√(3)≤β < 1, and ν_1:=ν, _x_1_g.
We shall only provide a detailed proof for estimate (<ref>). The derivation of (<ref>) is analogous and therefore omitted.
To proceed, we shall omit the subscripts h, ε in φ_h, ε to simplify the notation. Let us expand the conjugated operator
e^-φh^2ℒ_g, a, qe^φu = e^-φh^2 (_g+ a_t+q)e^φu
= h^2[_t^2u+2_tφ_tu +u_t^2 φ+u(_tφ)^2 - (Δ_g u+ 2∇_g φ, ∇_gu_g +uΔ_gφ.
. + u|∇_gφ|^2)+ a_t u+ au_tφ+qu]
:= P_1u+P_2u+P_3u,
where
P_1u=h^2(_gu+(_tφ)^2u - |∇_gφ|^2u+(_g φ)u), P_2u=h^2(2_tφ_tu -2∇_g φ, ∇_g u_g),
P_3u=h^2(a_tu +(a_tφ)u +qu).
By the triangle inequality, we have
e^-φh^2ℒ_g, a,qe^φu_L^2(Q)^2
≥1/2P_1u+P_2u^2_L^2(Q)-P_3u^2_L^2(Q).
We now estimate the terms on the right-hand side of (<ref>). For the first term, we have
1/2P_1u+P_2u^2_L^2(Q)≥∫_Q (P_1uP_2u)dV_gdt.
Since g(x_1,x')=(dx_1)^2+g_0(x'), we get from direct computations that
_t φ=1/hβ -1/ε t, _t^2 φ= -1/ε, _x_1φ= 1/h, ∇_g φ, ∇_g u_g=1/h_x_1u,
which yield
∫_Q (P_1uP_2u)dV_gdt=
∫_Q
2h^4 [_t^2u(1/hβ-1/ε t)_t u- 1/h_t^2u_x_1u- Δ_g u(1/hβ-1/ε t)_t u.
. + 1/hΔ_g u_x_1u-(1/h^2(1-β^2) +1/ε +2β/hεt-1/ε^2t^2)u ((1/hβ-1/εt)_t u -1/h_x_1u)]
dV_gdt.
Let us proceed to estimate each term on the right-hand side of (<ref>). For the first term, we integrate by parts and use _t u|_t=0=0 to deduce that
2h^4∫_Q (1/hβ-1/ε t ) _t^2u _t u dV_gdt
=
h^4(1/hβ-1/ε T)_tu(T,·)^2_L^2(M)+h^4/ε_tu^2_L^2(Q).
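In more detail, this identity follows from the pointwise relation 2(1/hβ-1/ε t)∂_t^2u ∂_t u = (1/hβ-1/ε t)∂_t|∂_t u|^2 together with an integration by parts in t: the boundary contribution at t=0 vanishes because ∂_t u vanishes there, the boundary contribution at t=T gives the first term on the right-hand side, and differentiating the weight, ∂_t(1/hβ-1/ε t)=-1/ε, produces the second term.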
Turning attention to the second term, we note that the Lie bracket [_t,_x_1] vanishes. Thus, we integrate by parts and apply _t u|_t=0=0 to obtain
2h^4∫_Q(-1/h) _t^2u _x_1u dV_gdt = -2h^3∫_M _tu(T,x)_x_1u(T,x)dV_g+h^3 ∫_Q _x_1|_tu|^2dV_gdt.
Since the vector field _x_1 is divergence free, we get from u|_Σ=0 and integration by parts that the last term in the equation above vanishes. Hence, we have the following equality for the second term
2h^4∫_Q(-1/h) _t^2u _x_1u dV_gdt = -2h^3∫_M _tu(T,x)_x_1u(T,x)dV_g.
Before estimating the third term, we recall that in local coordinates (t,(x_j)_j=1^n) of Q we have [_t,_x_j]=0 for every j=1,…,n, and ∇_g u(t,x)=g^ik(x)_x^ku(t,x). Furthermore, since the metric g is time-independent, we have
_t|∇_g u|^2
= 2⟨∇_g _t u, ∇_g u ⟩_g.
Thus, by Green's identities and u|_Σ=0, we obtain
-2h^4∫_Q Δ_gu(1/hβ-1/ε t ) _t u dV_gdt=h^4∫_Q (1/hβ-1/ε t ) _t|∇_g u|^2dV_gdt.
Since u|_t=0=0, it follows immediately that ∇_g u(0,·)=0. Then we integrate by parts to get
∫_Q (1/hβ-1/ε t ) _t|∇_g u|^2dV_gdt=∫_M (1/hβ-1/ε T ) |∇_g u(T, ·)|^2dV_g+∫_Q 1/ε|∇_gu|^2dV_gdt.
Therefore, we have verified that the third term on the right-hand side of (<ref>) satisfies
-2h^4∫_Q Δ_gu(1/hβ-1/ε t )_t u dV_gdt = h^4 (1/hβ-1/ε T)∇_gu(T,·) ^2_L^2(M)+ h^4/ε∇_gu^2_L^2(Q).
We next follow the proof of <cit.> to estimate the fourth term. Since the metric g is x_1-independent, it follows from the Leibniz rule and the local representation of the divergence operator <cit.> that
2_x_1uΔ_g u
=2÷_g(_x_1u∇_g u)- ÷_g(|∇_g u|^2_g _x_1).
Thus, an application of the divergence theorem yields
2h^4∫_Q 1/h_x_1uΔ_gu dV_gdt
= h^3∫_Σ 2_ν u_x_1u- |∇_g u|^2ν, _x_1_gdS_gdt.
Since u|_Σ=0, we see that ∇_g u|_Σ=(_ν u)ν and _x_1u|_Σ=_ν uν, _x_1_g:=_ν u ν_1. Therefore, we have
2h^4∫_Q 1/hΔ_gu _x_1u dV_gdt = h^3 ∫_Σν_1 |_ν u|^2 dS_gdt.
We now turn our attention to the last term of (<ref>). To that end, we
integrate by parts, use ÷_g(_x_1)=0 and u|_Σ = 0 to write
2∫_M u(t,·)_x_1u(t,·) dV_g=∫_M _x_1|u(t,·)|^2 dV_g=∫_ M |u(t,·)|^2 ν_1 dS_g=0.
Hence, by utilizing the condition u|_t=0=0, the last term of (<ref>) can be written as
2h^4∫_Q -(1/h^2(1-β^2) +1/ε +2β/hεt-1/ε^2t^2) u ((1/hβ-1/ε t )_t u -1/h_x_1u) dV_gdt
= -h^4∫_Q (1/h^2(1-β^2) +1/ε +2β/hεt-1/ε^2t^2) (1/hβ-1/ε t) _t|u|^2 dV_gdt
= -h^4 (1-β^2/h^2 + 1/ε + 2β/ε hT-1/ε^2T^2) (1/hβ-1/ε T) u(T,·)_L^2(M)^2
+h^4 ∫_Q (3β^2-1/ε h^2 -1/ε^2 -6β/ε^2 ht+3/ε^3t^2) |u|^2dV_gdt.
We now choose the numbers ε,h>0 such that
0<ε < 3T^2,
1/√(3)<β<1,
and 1/h>max{2T/εβ, 12β T/ε(3β^2-1), 2β T/ε, 1/ε}.
These choices yield h<ε,
3β^2-1/ε h^2 -1/ε^2 -6β/ε^2ht+3/ε^3t^2 ≥3β^2-1/2ε h^2,
and
0<(1-β^2/h^2 + 1/ε + 2β/ε hT-1/ε^2T^2) (1/hβ-1/ε T )
≤3β/h^3.
The choices of h and ε in (<ref>) allow
the term 1/h^2 to absorb the lower order terms when 0<h≪ε≪ 1.
Therefore, we get from these choices of ε and h that
2h^4∫_Q -(1/h^2(1-β^2) +1/ε +2β/hεt-1/ε^2t^2) u ((1/hβ-1/ε t )_t u -1/h_x_1u) dV_gdt
≥ -3β hu(T,·)_L^2(M)^2+ (3β^2-1)h^2/2εu^2_L^2(Q).
By combining estimates (<ref>)–(<ref>) and (<ref>), we obtain
1/2P_1u+P_2u^2
≥ h^4(1/hβ-1/ε T)_tu(T,·)^2_L^2(M)+h^4/ε(_tu^2_L^2(Q)+∇_gu^2_L^2(Q))
-2h^3∫_M _tu(T,x)_x_1u(T,x)dV_g+h^4 (1/hβ-1/ε T )∇_gu(T,·) ^2_L^2(M)
+h^3 ∫_Σν_1|_ν u|^2 dS_gdt + (3β^2-1) h^2/2εu^2_L^2(Q)-3β hu(T,·)_L^2(M)^2.
We sharpen the estimate above
by using 1/h> 2T/εβ from (<ref>) and the following inequality
∫_M _tu(T,x)_x_1u(T,x)dV_g ≤β/8_tu(T,·)_L^2(M)^2 + 8/β∇_g u(T,·)_L^2(M)^2
to obtain
1/2P_1u+P_2u^2 ≥ 1/4β h^3_tu(T,·)^2_L^2(M)+h^4/ε(_tu^2_L^2(Q)+∇_gu^2_L^2(Q))
-h^3(16/β-β/2)∇_g u(T,·)_L^2(M)^2+ (3β^2-1) h^2/2εu^2_L^2(Q)
+h^3 ∫_Σν_1|_ν u|^2 dS_gdt-3β hu(T,·)_L^2(M)^2.
Lastly, we turn our attention to P_3u_L^2(Q). To that end, we deduce from (<ref>), (<ref>), as well as the triangle inequality that
P_3u^2_L^2(Q)≤ h^4(a_tu_L^2(Q) + (a_tφ)u _L^2(Q) + qu_L^2(Q))^2
≤ 3h^4( a^2_L^∞(Q)_t u^2_L^2(Q) +( β^2/h^2a^2_L^∞(Q) +q^2_L^∞(Q))u^2_L^2(Q)).
Here in the last step we have used the inequality (x+y+z)^2≤ 3(x^2+y^2+z^2) for x,y,z ∈.
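This elementary inequality is a special case of the Cauchy-Schwarz inequality in three variables:
(x+y+z)^2 = ((1,1,1)·(x,y,z))^2 ≤ |(1,1,1)|^2 |(x,y,z)|^2 = 3(x^2+y^2+z^2).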
In addition to the choice 0<ε<3T^2 made in (<ref>), we will further require that
1/2ε≥ 3a^2_L^∞(Q) and 3β^2-1/4ε≥ 3( β^2a^2_L^∞(Q)+q^2_L^∞(Q)).
After combining estimates (<ref>), (<ref>), and (<ref>), we obtain
the claimed estimate (<ref>).
This completes the proof of Proposition <ref>.
We are now ready to state and prove the boundary Carleman estimate.
Let a, q∈ L^∞(Q, ) and v∈ C^2(Q). If v satisfies
v|_Σ=v|_t=0=_tv|_t=0=0,
then for all 0<h ≪ε≪ 1, we have
e^-1/h(β t +x_1)h^2ℒ_g, a,qv_L^2(Q) +(h^3/2)e^-1/h(β T+x_1)∇_gv(T,·)_L^2(M)
+(h^1/2)e^-1/h(β T+x_1)v(T,·)_L^2(M)+(h^3/2)(∫_Σ_- |_νφ||e^-1/h(β t +x_1)_ν v|^2dS_gdt)^1/2
≥(h)e^-1/h(β t +x_1)v_L^2(Q) + (h^3/2)e^-1/h(β T+x_1)_tv(T,·)_L^2(M)+(h^2)(e^-1/h(β t +x_1)_tv_L^2(Q)
+e^-1/h(β t +x_1)∇_gv_L^2(Q)) +(h^3/2)(∫_Σ_+_νφ |e^-1/h(β t +x_1)_ν v|^2dS_gdt)^1/2.
Here φ(x) = x_1, Σ_±=(0, T)× M_±^int, and M_± = {x ∈ M: ±_νφ(x) ≥ 0}.
By using estimate (<ref>) and the fact that 1/2>3β^2-1/10 when 1/√(3)<β<1, we see that the following inequality holds for all u∈ C^2(Q) satisfying u|_Σ=u|_t=0=_tu|_t=0=0:
e^-φ_h, εh^2ℒ_g, a, q e^φ_h, εu^2_L^2(Q) +h^3(4/β-β/2)∇_gu(T,·)^2_L^2(M)+ 3β hu(T,·)^2_L^2(M)
+h^3∫_Σ_- |_νφ||_ν u|^2dS_gdt
≥(3β^2-1) h^2/4εu^2_L^2(Q) + 1/4β h^3_tu(T,·)^2_L^2(M)+h^3∫_Σ_+_νφ|_ν u|^2dS_gdt
+(3β^2-1)h^4/10ε(_tu^2_L^2(Q)+∇_gu^2_L^2(Q)).
We now fix ε>0 that satisfies (<ref>) and let u = exp(-φ_h, ε)v, where φ_h, ε is given by (<ref>). Note that (<ref>) implies
_νu|_Σ = exp(-1/h(β t+x_1))exp(t^2/2ε) _νv|_Σ.
Moreover, by direct computations, we get
_t v = (1/hβ -1/ε t) e^φ_h, ε u+ e^φ_h, ε_tu,
∇_g u = -∇_gφ_h, ε u + e^-φ_h, ε∇_g v.
Then we obtain
|e^-1/h (β t +x_1)_tv|^2 ≤ |e^-φ_h, ε_tv|^2≤ 2(|_t u|^2+ β^2/h^2|u|^2)
and
|e^-1/h(β t +x_1)∇_g v|^2≤ |e^-φ_h, ε∇_g v|^2 ≤ 2(|∇_g u|^2+ 1/h^2 |u|^2),
where we have used β/h>β/h - t/ε>0 since 1/h> 2T/βε.
Therefore, we get the following estimates:
e^-1/h (β t +x_1)_tv_L^2(Q)^2 + e^-1/h (β t +x_1)∇_g v_L^2(Q)^2 ≤ 2(_t u_L^2(Q)^2+ ∇_g u_L^2(Q)^2)+ 4/h^2u_L^2(Q)^2,
e^-1/h(β T +x_1)_t v(T,·)_L^2(M)^2≤ 2 _t u(T,·)_L^2(M)^2 + 2/h^2u(T,·)_L^2(M)^2,
and
∇_g u(T,·)_L^2(M)^2≤2/h^2e^T^2/εe^-1/h(β T+x_1)v(T,·)_L^2(M)^2+2e^T^2/εe^-1/h(β T+x_1)∇_g v(T,·)_L^2(M)^2.
By using estimates (<ref>)–(<ref>), we deduce that
(3β^2-1) h^2/4εu^2_L^2(Q) + 1/4β h^3_tu(T,·)^2_L^2(M) + h^3∫_Σ_+_νφ |_ν u|^2dS_gdt
+(3β^2-1)h^4/10ε(_tu^2_L^2(Q)+∇_gu^2_L^2(Q))
≥(3β^2-1) h^2/20εe^-1/h(β t+x_1)v^2_L^2(Q)-1/4β h e^T^2/εe^-1/h (β T+x_1)v(T,·)_L^2(M)^2
+(3β^2-1) h^4/20ε (e^-1/h (β t +x_1)_tv_L^2(Q)^2+ e^-1/h (β t +x_1)∇_g v_L^2(Q)^2)
+h^3∫_Σ_+_νφ|e^-1/h(β t+x_1)_ν v|^2dS_gdt
+1/8β h^3e^-1/h(β T +x_1)_t v(T,·)_L^2(M)^2
Similarly, we have
e^-φ_h, εh^2ℒ_g, a, q e^φ_h, εu^2_L^2(Q) +h^3(4/β-β/2)∇_gu(T,·)^2_L^2(M)+ 3β hu(T,·)^2_L^2(M)
+h^3∫_Σ_- |_νφ||_ν u|^2dS_gdt
≤ e^T^2/εe^-1/h(β t +x_1)h^2ℒ_g, a, q v^2_L^2(Q)+(8/β+ 2β) h e^T^2/εe^-1/h(β t +x_1)v(T,·)_L^2(M)^2
+2h^3(4/β-β/2)e^T^2/εe^-1/h(β T+x_1)∇_g v(T,·)_L^2(M)^2+h^3e^T^2/ε∫_Σ_- |_νφ||e^-1/h(β t+x_1)_ν v|^2dS_gdt.
Finally, we obtain the claimed estimate (<ref>) by combining estimates (<ref>), (<ref>), (<ref>), (<ref>), as well as choosing ε>0 sufficiently small but fixed. This completes the proof of Proposition <ref>.
The Carleman estimate (<ref>) can be extended to any function v∈ℋ:= C^1([0, T]; L^2(M))∩ C([0, T]; H^1(M)) satisfying (<ref>) and _e⊕ g_0v∈ L^2(Q). Indeed, we may approximate f:=_e⊕ g_0v∈ L^2(Q) by a sequence f_j∈ C_0^∞(Q) such that f_j→ f in L^2(Q) as j→∞. If v_j solves _e⊕ g_0v_j=f_j and satisfies v_j|_Σ=v_j|_t=0=_tv_j|_t=0=0, then v_j∈ C^∞(Q) by <cit.>. In particular, the Carleman estimate (<ref>) holds for v_j.
Furthermore, we have
v_j-v_ℋ+ _ν v_j -_ν v_L^2(Σ)≤ C f_j - f_L^2(Q)→ 0, j→∞
by the energy estimate <cit.> together with <cit.>. Thus, the Carleman estimate extends to v.
§.§ Semiclassical pseudodifferential operators
In this subsection we recall some fundamental concepts of semiclassical pseudodifferential calculus on closed Riemannian manifolds by following the expositions of <cit.> and <cit.>. Let (N,g̃) be a smooth compact n-dimensional Riemannian manifold without boundary.
For each m ∈, the Kohn-Nirenberg symbol class S^m(T^∗ N) consists of smooth functions on the cotangent bundle T^∗ N, which in local coordinates of N are given by
S^m_1,0(T^∗ N)=S^m(T^∗ N)={a(x, ξ)∈ C^∞ (T^∗ N): |_x^α_ξ^β a(x, ξ)|≤ C_αβξ^m-|β|},
where ξ=(1+ |ξ|^2)^1/2. For a parameter dependent symbol a(x,ξ;h), we say that a ∈ S^m(T^∗ N) if the estimate in (<ref>) holds uniformly for every h∈ (0,h_0) and for some h_0>0. A linear operator B C^∞(N) → C^∞(N) is called negligible if its Schwartz kernel K_B∈ C^∞(N× N) locally satisfies the estimate
_x^α_y^β K_B (x,y)= (h^∞) for all α,β∈^n.
A linear map A C^∞(N) → C^∞(N) is a semiclassical pseudodifferential operator of order m ∈ if there exists a ∈ S^m(T^∗ N) such that in local coordinates, the operator A is given by the standard h-quantization
Au(x)=1/(2π h)^n∫∫ e^i/h(x-y)·ξa(x,ξ; h)u(y) dydξ+Bu(x),
where the operator B is negligible, and the operator ψ A φ is negligible for each φ, ψ∈ C^∞(N) with disjoint supports. We denote by Ψ^m(N) the set of semiclassical pseudodifferential operators of order m on (N,g̃).
We recall that the correspondence from an operator to a symbol is not globally well-defined, but there exists a bijective map between the following equivalence classes
Ψ^m(N)/Ψ^m-1(N) → S^m(T^∗ N)/S^m-1(T^∗ N).
The image σ_A(x,ξ;h) of A∈Ψ^m(N) under this map is called the principal symbol of A. These definitions allow us to compose the operators A_j ∈Ψ^m_j(N), j=1,2, and we have A_1A_2 ∈Ψ^m_1+m_2(N) with principal symbol σ_A_1A_2=σ_A_1σ_A_2.
An operator A∈Ψ^m(N) is called elliptic if there is a constant C>0, independent of h, such that the principal symbol satisfies
|σ_A(x,ξ;h)|>1/C⟨ξ⟩^m.
An elliptic semiclassical operator A∈Ψ^m(N) has an inverse R ∈Ψ^-m(N) in the sense that there exists h_0>0 such that for all h∈ (0,h_0) we have RA=AR=I as linear operators on C^∞(M) and
σ_R(x,ξ;h)=σ_A(x,ξ;h)^-1∈ S^-m(T^∗ N)/S^-m-1(T^∗ N).
By <cit.>, the operator
J^s = (1-h^2Δ_g̃)^s/2, s∈,
which is defined by the means of spectral theorem, is elliptic and belongs to the class Ψ^s(N) with principal symbol ⟨ξ⟩^s. We note that for all s_1, s_2∈, we have
J^s_1+s_2 = J^s_1J^s_2, (J^s_1)^-1 = J^-s_1, J^0 = I.
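For orientation, the action of J^s can be written down explicitly by means of the spectral theorem; the following representation is standard and is not used elsewhere in the argument. If (e_k)_k≥ 0 is an orthonormal basis of L^2(N) consisting of eigenfunctions of -Δ_g̃ with eigenvalues 0=μ_0≤μ_1≤⋯, then
J^s u = ∑_k=0^∞ (1+h^2μ_k)^s/2 (u, e_k)_L^2(N) e_k, u∈ C^∞(N),
from which the composition identities displayed above are immediate.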
Let φ∈ C^∞(Ñ, ), and let us consider the conjugated operator
P_φ= e^φ/h(-h^2Δ_g)e^-φ/h=-h^2Δ_g-|∇_gφ|^2+2∇_g φ, h∇_g+hΔ_g φ,
with the semiclassical principal symbol
p_φ=|ξ|^2-|dφ|^2+2iξ, dφ∈ C^∞(T^∗Ñ).
Here and in what follows we use ·, · and |·| to denote the Riemannian scalar product and norm on both the tangent and cotangent spaces.
When (x, ξ)∈ T^∗ M and |ξ|≥ c≫ 1, we have |p_φ(x, ξ)| ∼ |ξ|^2, so that P_φ is elliptic at infinity in the semiclassical sense. Similar to <cit.>, we say that φ∈ C^∞(Ñ, ) is a limiting Carleman weight for -h^2Δ_g on (Ñ, g) if dφ≠ 0 on Ñ and the Poisson bracket of p̅_φ and p_φ satisfies
{p̅_φ, p_φ}=0 when p_φ=0.
We refer to <cit.> for a characterization of Riemannian manifolds admitting limiting Carleman weights.
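As a concrete illustration, and only for orientation, let us verify that the linear weight φ(x)=x_1 is a limiting Carleman weight on the product (× M_0^int, e⊕ g_0) underlying the CTA manifolds considered in this paper. In coordinates (x_1,x') the metric e⊕ g_0 is independent of x_1 and satisfies g^11=1, so |dφ|=1 and
p_φ(x,ξ)=|ξ|^2-1+2iξ_1.
Since Im p_φ=2ξ_1 does not depend on x and Re p_φ=|ξ|^2-1 does not depend on x_1, we obtain
{Re p_φ, Im p_φ} = ∑_j(∂_ξ_jRe p_φ ∂_x_jIm p_φ - ∂_x_jRe p_φ ∂_ξ_jIm p_φ) = -2∂_x_1(|ξ|^2)=0,
so the bracket condition holds identically, and in particular on the set where p_φ=0.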
We define the semiclassical inner product of order s∈:
(u,v)_H^s_(N) := (J^su,J^sv)_L^2(N), u,v ∈ C^∞(N).
Then the semiclassical Sobolev space H^s_(N) is defined as the completion of C^∞(N) with respect to the norm induced by this inner product. Furthermore, every operator A∈Ψ^m(N)
yields a bounded map A H^s_(N) → H^s-m_(N). Also, we recall that if A is negligible, then the operator norm satisfies
A_H^s_1_(N) → H^s_2_(N)=(h^∞) for all s_1,s_2 ∈.
Finally, we discuss the definition of semiclassical Sobolev spaces on an open subset M ⊂ N. We recall that for u ∈ C^∞(N), the norms
u_H^1_(N)
and
u^2_h:=u^2_L^2(N)+h∇_g̃u^2_L^2(N)
are equivalent. We use the latter to define the semiclassical Sobolev space
H^1_(M)
as a completion of C^∞(M) with respect to the norm ·_h, restricted on M, and H^-1_(M) as the topological dual of C^∞_0(M).
§.§ Interior Carleman estimate
In this subsection we assume that (Q,e⊕ g) is isometrically embedded into a closed Riemannian manifold (N,g'), where g' =e⊕ g in some open neighborhood U ⊂ N of Q.
In order to construct CGO solutions, we also need interior Carleman estimates. Our starting point is the following result.
Let a, q∈ L^∞(Q, ), ε>0, and let φ_± h, ε be the perturbed weight defined by (<ref>). Then there exists a constant C>0 that depends on β such that for all 0<h≪ε≪ 1 and u∈ C^∞_0(Q), we have
h/√(ε)u_H^1_(N)≤ Ce^-φ_± h, εh^2ℒ_ g, a, qe^φ_± h, εu_L^2(N).
Since u ∈ C^∞_0(Q), this claim follows immediately from Proposition <ref> by taking C = 2/√(3β^2 - 1).
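To spell this out: for u∈ C_0^∞(Q) all the terms involving u(T,·), ∂_t u(T,·), ∇_g u(T,·), as well as the boundary integrals over Σ, vanish in Proposition <ref>, so both estimates there reduce to
‖e^-φ_± h, εh^2ℒ_g, a, qe^φ_± h, εu‖^2_L^2(Q) ≥ (3β^2-1)h^2/(4ε)‖u‖^2_L^2(Q) + h^4/(2ε)(‖∂_tu‖^2_L^2(Q)+‖∇_gu‖^2_L^2(Q)) ≥ ((3β^2-1)/4)(h^2/ε)(‖u‖^2_L^2(Q)+‖h∂_tu‖^2_L^2(Q)+‖h∇_gu‖^2_L^2(Q)),
where in the last step we used that (3β^2-1)/4<1/2 for β∈ (1/√(3),1). Assuming, as in (<ref>), that the square of the semiclassical H^1 norm of u (extended by zero to N) is ‖u‖^2_L^2+‖h∇ u‖^2_L^2 with the gradient taken on N, this yields the claimed bound with C=2/√(3β^2-1).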
To prove the existence of suitable solutions to ℒ_g,a,qu=0 in Q, we need to shift the Sobolev index in (<ref>) by -1. This is accomplished in the following lemma.
Let a, q∈ L^∞(Q, ). Then for all 0<h ≪ε≪ 1 and u∈ C^∞_0(Q), there exists a constant C>0 such that
hu_L^2(N)≤ Ce^∓1/h(β t +x_1)h^2ℒ_g,a,qe^±1/h(β t+x_1)u_H^-1_(N).
We shall follow the arguments from <cit.>, see also <cit.>. Let χ∈ C^∞_0(U) be such that χ = 1 in Q. Thanks to (<ref>), for all 0<h ≪ε≪ 1, we have
u_L^2(N)
≤χ J^-1 u _H^1_(N) + (1-χ)J^-1 u _H^1_(N)
≤χ J^-1 u _H^1_(N)+(h^∞)u_L^2(N).
By taking h small enough,
we may absorb the error term (h^∞)u_L^2(N) into the left-hand side and obtain
u_L^2(N)≤χ J^-1 u _H^1_(N).
Let us consider the convexified operator P_φ_± h, ε = e^-φ_± h, εh^2ℒ_g, a, qe^φ_± h, ε∈Ψ^2(N), and note that an application of <cit.> yields that the commutator satisfies the equation [P_φ_± h, ε,J]=hR_1, where R_1 ∈Ψ^2(N).
Therefore, for every v∈ C^∞(N) we can apply (<ref>) to get
P_φ_± h, ε J v _H^-1_(N) = J^-1 P_φ_± h, ε J v_L^2(N) = (P_φ_± h, ε+J^-1hR_1)v_L^2(N).
By (<ref>), (<ref>), and the triangle inequality, we have
P_φ_± h, ε J v _H^-1_(N)
≥h/C√(ε)v_H^1_(N) - hv_H^1_(N)≥h/2C√(ε)v_H^1_(N),
where we have chosen ε >0 such that 1/2C√(ε)>1.
Then we apply (<ref>) with v = χ J^-1u and utilize [P_φ_± h, ε, J χ J^-1]=hR_2, where R_2∈Ψ^1(N), to deduce from (<ref>) that
u_L^2(N)≤
(√(ε)/h)P_φ_± h, εJ χ J^-1u _H^-1_(N)
≤
(√(ε)/h)(J χ J^-1 P_φ_± h, ε u _H^-1_(N)+h R_2u_H^-1_(N))
≤ (√(ε)/h) ( P_φ_± h, ε u _H^-1_(N)+hu_L^2(N)).
Finally, we set w = e^-t^2/2εu for t≥ 0 to get
w_L^2(N) ≤u_L^2(N)
≤(√(ε)/h) (e^t^2/2ε e^∓1/h(β t +x_1)h^2ℒ_g,a,qe^±1/h(β t+x_1)w _H^-1_(N)+he^t^2/2εw_L^2(N))
≤(√(ε)/h) (e^T^2/2εe^∓1/h(β t +x_1)h^2ℒ_g,a,qe^±1/h(β t+x_1)w_H^-1_(N)+he^T^2/2εw_L^2(N)).
We now take ε small enough but fixed to absorb the error term he^T^2/2εw_L^2(N) into the left-hand side. This completes the proof of Proposition <ref>.
The following interior Carleman estimate will be implemented in the next section to construct CGO solutions for the operator ℒ_g,a,q.
Let a ∈ W^1,∞(Q), q ∈ L^∞(Q, ), and s = 1/h +iλ with λ∈ fixed. If h>0 is small enough, then for all v∈ L^2(Q) there exists a solution u∈ H^1_(Q) to the equation
e^± s(β t+x_1)h^2ℒ_g, a, qe^∓ s(β t+x_1)u = v in Q
such that
u_H^1_(Q)≤(h^-1)v_L^2(Q).
The proof uses standard functional analysis arguments, which have been utilized to prove analogous results for elliptic operators, see for instance <cit.>, as well as hyperbolic operators in <cit.>. We provide the detailed proof here for the convenience of the reader.
Using s=1/h+iλ, we rewrite (<ref>) as
e^±1/h(β t+x_1)h^2ℒ_g, a, qe^∓1/h(β t+x_1)ũ = ṽ in Q,
for ũ =e^∓ iλ (β t +x_1)u, ṽ = e^∓ iλ (β t +x_1)v.
Thus, it suffices to find a solution ũ∈ H^1_(Q) of (<ref>) that also satisfies
ũ_H^1_(Q)≤(h^-1)ṽ_L^2(Q) for a given ṽ∈ L^2(Q).
We note that the formal L^2-adjoint of the operator ℒ_g,a,q is
ℒ_g,a,q^∗=ℒ_g, -a, q-_t a. We then set
𝒫_±=e^±1/h(β t+x_1)h^2ℒ_g, a, qe^∓1/h(β t+x_1),
with 𝒫_±^∗ = e^∓1/h(β t+x_1)ℒ_g,a,q^∗ e^±1/h(β t+x_1),
and consider the space 𝒮 ={𝒫_±^∗ w: w∈ C_0^∞(Q)} as a subspace of H^-1_(N). If w∈ C^∞_0(Q), then by estimate (<ref>) and the Cauchy-Schwarz inequality, we see that
|⟨ w,ṽ⟩_L^2(Q)|
≤w_L^2(N)ṽ_L^2(Q)≤𝒪(h^-1)𝒫_±^∗ w_H_^-1(N)ṽ_L^2(Q).
Hence, we may define a H_^-1(N)-continuous linear form L on 𝒮 by setting
L(𝒫_±^∗ w) := ⟨ w,ṽ⟩_L^2(Q).
By the Hahn-Banach theorem, we can extend the operator L to a continuous linear form L̃ on H^-1_(N) without increasing the operator norm. Therefore, the Riesz representation theorem gives a unique function ũ∈ H^1_(N) that satisfies L̃(f) = ⟨ f, ũ⟩_H^-1, H^1 for all f∈ H^-1_(N), and
ũ_H_^1(N)≤𝒪(h^-1)ṽ_L^2(Q).
Finally, we restrict ũ to Q and use the same notation for the restriction. Since a ∈ W^1,∞(Q) and q ∈ L^∞(Q), for a given function w∈ C^∞_0(Q) we set f = 𝒫_±^∗ w ∈ L^2(Q) and get
⟨ w,ṽ⟩_L^2(Q)=L̃(𝒫^∗_± w)
=
⟨ f, ũ⟩_H^-1, H^1
=
⟨𝒫_±^∗ w, ũ⟩_L^2(Q)
=⟨ w, 𝒫_±ũ⟩_L^2(Q).
Hence, we have ṽ=𝒫_±ũ and ũ_H_^1(N)≤𝒪(h^-1)ṽ_L^2(Q). Since C_0^∞(Q) is dense in L^2(Q), the proof of Proposition <ref> is complete.
§ CONSTRUCTION OF COMPLEX GEOMETRIC OPTIC SOLUTIONS BASED ON GAUSSIAN BEAM QUASIMODES
Let (M,g) be a CTA manifold, a ∈ W^1,∞(Q), and q ∈ C(Q). Let 0<h <1, λ∈, and s=1/h+iλ∈. In the first part of this section we shall assume that the conformal factor c=1 and write ℒ_g,a,q for ℒ_c,g,a,q in this case. The goal of this section is to construct an exponentially decaying, with respect to the real part of s, solution to the equation ℒ_g, a,q^*u_1=0 in Q of the form
u_1=e^-s(β t+x_1)(v_1+r_1),
as well as an exponentially growing solution to the equation ℒ_g,a,qu_2=0 in Q of the form
u_2=e^s(β t+x_1)(w_2+r_2).
Here v_1=v_1,s and w_2=w_2,s are smooth Gaussian beam quasimodes, and r_1=r_1,s, r_2=r_2,s are correction terms that vanish in the limit h→ 0. We emphasize that the CGO solutions above are constructed under the assumption that c=1. We shall incorporate general conformal factors c and modify our CGO solutions accordingly in Subsection <ref>.
We write x=(x_1, x') for coordinates in × M_0, globally in and locally in M_0. To justify our construction we note that a function u_1 of the form (<ref>) solves the equation ℒ_g,a,q^*u_1=0 if
e^s(β t+x_1)ℒ_g, a, q^* e^-s(β t+x_1)r_1=-e^s(β t+x_1)ℒ_g, a,q^* e^-s(β t+x_1)v_1.
§.§ Construction of Gaussian beam quasimodes
In this subsection we focus on constructing Gaussian beam quasimodes with desirable concentration properties. Initially introduced in <cit.>, the construction of Gaussian beam quasimodes has a very long tradition in spectral theory and microlocal analysis, see also <cit.>. Gaussian beam quasimodes have also been used extensively to solve inverse problems, starting with <cit.>. Among the literature in this direction, we refer readers to <cit.> for applications in elliptic operators and <cit.> in hyperbolic operators.
Let (M, g) be a CTA manifold with the conformal factor c=1, and let T>0. Replacing the transversal manifold (M_0, g_0) by a slightly larger manifold if necessary, we may assume without loss of generality that (M, g)⊂ (× M_0^int, e⊕ g_0). The Gaussian beam quasimodes will be constructed in ×× M_0^int.
To obtain C^∞-smooth Gaussian beam quasimodes, we shall regularize the damping coefficient and explain the necessity to do so in the proof of Proposition <ref>.
To that end, we extend a into W^1,∞(^2× M_0^int) with compact support.
Using a partition of unity argument combined with a regularization in each coordinate patch, we have the following result, see <cit.> for details.
For any a∈ W_0^1,∞(^2× M_0^int), there exists an open and bounded set W⊂^2× M_0^int and a family a_ζ∈ C_0^∞ (W, ) such that
a-a_ζ_L^∞=o(1), a_ζ_L^∞=(1), ∇_g a_ζ_L^∞=o(ζ^-1),
_t a_ζ_L^∞=o(ζ^-1), _t^2 a_ζ_L^∞=o(ζ^-2), Δ_g a_ζ_L^∞=o(ζ^-2), ζ→ 0.
Here the L^∞-norms are taken over the set ^2× M_0^int.
Throughout the rest of this paper we shall use the notation ℒ_g, a, q^∗ =ℒ_g, -a, q-_ta for the formal L^2-adjoint of the operator ℒ_g, a, q. We are now ready to state and prove our first main result in this section.
Let (M, g) be a smooth CTA manifold with boundary, T>0, β∈ (1/√(3),1), and let s=1/h+iλ, 0<h≪ 1, λ∈ fixed.
Let a ∈ W^1,∞(Q) and q ∈ C(Q). Then for every unit speed non-tangential geodesic γ of (M_0,g_0) there exist one parameter families of Gaussian beam quasimodes v_s, w_s∈ C^∞(^2× M_0) such that the estimates
v_s_L^2(Q)=(1),
_t v_s_L^2(Q)=o(h^-1/2),
e^s(β t+x_1)h^2ℒ^∗_g, a, qe^-s(β t+x_1)v_s_L^2(Q) =o(h),
and
w_s_L^2(Q)=(1),
_t w_s_L^2(Q)=o(h^-1/2),
e^-s(β t+x_1)h^2ℒ_g, a, qe^s(β t+x_1)w_s_L^2(Q)=o(h)
are valid as h→ 0.
We shall follow the main ideas from <cit.> and modify the argument in accordance with the extra time variable t. Let L>0 be the length of the geodesic γ=γ(τ). By following <cit.>, we can embed (M_0, g_0) into a closed manifold (M̂_0, g_0) of the same dimension. We also extend γ as a unit speed geodesic in M̂_0. Since γ is non-tangential, we can choose ε>0 so that γ(τ)∈M̂_0∖ M_0 and does not self-intersect for τ∈ [-2ε, 0) ∪ (L, L+2ε].
We aim to construct Gaussian beam quasimodes near γ([-ε, L+ε]). To that end, we start by fixing a point z_0=γ(τ_0) on γ([-ε, L+ε]) and construct the quasimode locally near z_0. Let (τ, y) ∈Ω:={(τ, y)∈×^n-2: |τ-τ_0|<δ, |y|<δ'}, δ, δ'>0, be Fermi coordinates near z_0, see <cit.>. We may assume that the coordinates (τ, y) extend smoothly to a neighborhood of Ω.
We note that near z_0=γ(τ_0) the trace of the geodesic γ is given by the set Γ={(τ, 0): |τ-τ_0|<δ}. Furthermore, in these Fermi coordinates we have
g_0^jk(τ,0)=δ^jk and _y_lg_0^jk(τ, 0)=0.
Hence, by Taylor's theorem, for small |y| we have
g_0^jk(τ, y)=δ^jk+𝒪(|y|^2).
We shall begin by constructing quasimodes v_1,s=v_s for the conjugated operator
e^s(β t+x_1)ℒ_g, a, q^∗ e^-s(β t+x_1).
Let us consider a Gaussian beam ansatz
v_s(t, x_1, τ, y;h, ζ)=e^isΘ(τ,y) b(t, x_1, τ, y;h, ζ),
where our aim is to find the phase function Θ∈ C^∞(Ω, ) that satisfies
Θ≥ 0, Θ|_Γ=0, Θ(τ, y) ∼ |y|^2,
and an amplitude b∈ C^∞(××Ω, ) such that (b(t, x_1, ·)) ⊂{|y|<δ'/2}. Our construction will follow the ideas originally presented in <cit.>.
Since Θ is independent of t, we have
e^-isΘa_t(e^isΘ b)=a_t b and e^-isΘ_t^2(e^isΘ b)=_t^2b.
Also, as Θ is independent of x_1 and g=e⊕ g_0, we get
e^-isΘ(-Δ_g)e^isΘ b=-Δ_g b-is[2⟨∇_g_0Θ, ∇_g_0b(x_1, ·)⟩_g_0+(Δ_g_0Θ)b]+s^2⟨∇_g_0Θ, ∇_g_0Θ⟩_g_0b.
Using (<ref>) and (<ref>), we obtain
e^s(β t+x_1)ℒ_g, -a, q-_t ae^-s(β t+x_1)v_s
=
e^isΘ[e^-isΘe^s(β t+x_1)ℒ_g, -a, q-_t a e^-s(β t+x_1)e^isΘb]
=
e^isΘ[s^2(⟨∇_g_0Θ, ∇_g_0Θ⟩_g_0 - (1 - β^2))b
+
s(2_x_1b-2β_tb - 2i⟨∇_g_0Θ, ∇_g_0b(x_1, ·)⟩_g_0 - i(Δ_g_0Θ)b+ βa b)
+ ℒ_g, -a, q-_t ab].
This computation suggests that in order to verify the estimates in (<ref>), we should construct the phase function Θ and the amplitude b such that they approximately solve the eikonal and transport equations appearing on the right-hand side of (<ref>) as multipliers of the terms s^2 and s, respectively.
Arguing similarly as in <cit.>, we find Θ(τ, y)∈ C^∞(Ω, ) that satisfies
⟨∇_g_0Θ, ∇_g_0Θ⟩_g_0-(1-β^2)=𝒪(|y|^3), y→ 0,
and
Θ≥ d|y|^2
for some constant d>0 depending on β. As explained in <cit.>, we can choose
Θ(τ, y)=√(1-β^2)(τ+1/2H(τ)y· y),
where the smooth complex valued symmetric matrix H(τ), with H(τ) positive definite, is the unique solution of the initial value problem for the matrix Riccati equation
Ḣ(τ)+H(τ)^2=F(τ), H(τ_0)=H_0, for τ∈.
Here H_0 is a complex symmetric matrix such that (H_0) is positive definite, and F(τ) is a suitable symmetric matrix, see for instance <cit.> for details.
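As a simple model example, which merely illustrates the structure of the solution and is not the case needed in the proof, take F≡ 0 and H_0=iI. Then the matrix Riccati equation is solved explicitly by
H(τ)=(τ-τ_0-i)^-1 I,
for which Ḣ+H^2=0, H(τ_0)=iI, and Im H(τ)=((τ-τ_0)^2+1)^-1 I remains positive definite for all real τ, in accordance with the general theory quoted above.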
We next seek an amplitude b of the form
b(t, x_1, τ, y; h, ζ)=h^-n-2/4b_0(t, x_1, τ; ζ)χ(y/δ'),
where b_0∈ C^∞ (_t×_x_1× [τ_0-δ, τ_0+δ]) is independent of y and satisfies the approximate transport equation
2_x_1b_0-2β_tb_0 - 2i⟨∇_g_0Θ, ∇_g_0b_0(x_1, ·)⟩_g_0 - i(Δ_g_0Θ)b_0 + βa_ζ b_0 =𝒪(|y|ζ^-1)
as y,ζ→ 0. We shall make the expression |y|ζ^-1 rigorous a bit later when we prove estimate (<ref>). The cut off function χ∈ C^∞_0(^n-2) in (<ref>) is chosen such that χ=1 for |y|≤ 1/4 and χ=0 for |y|≥ 1/2.
In order to find b_0 such that (<ref>) holds, we first compute ⟨∇_g_0Θ, ∇_g_0b_0(x_1, ·)⟩_g_0. It follows from (<ref>) that
_τΘ(τ, y)=√(1-β^2)+𝒪(|y|^2).
Therefore, we get from (<ref>) that
⟨∇_g_0Θ, ∇_g_0b_0(x_1, ·)⟩_g_0=√(1-β^2)(_τ b_0+H(τ)y·_yb_0)+𝒪(|y|^2)_τ b_0+
𝒪(|y|^2)_y b_0.
We next compute Δ_g_0Θ near the geodesic γ. Using (<ref>) and (<ref>), we have
(Δ_g_0Θ)(τ, 0) = δ^jkH_jk = H(τ),
which implies that
(Δ_g_0Θ)(τ, y)=√(1-β^2) H(τ)+𝒪(|y|).
Finally, we Taylor expand the coefficients occurring on the left-hand side of (<ref>). Writing
a_ζ(t, x_1, τ, y)
=a_ζ(t, x_1, τ, 0)+∫_0^1 (∇_ya_ζ(t, x_1, τ, ys))yds,
and utilizing (<ref>), we get
a_ζ(t, x_1, τ, y)=a_ζ(t, x_1, τ, 0)+(|y|ζ^-1).
To achieve (<ref>), we require that b_0(t, x_1, τ; ζ) satisfies
(β_t-_x_1+i√(1-β^2)_τ)b_0=1/2[-i√(1-β^2) H(τ)+βa_ζ(t, x_1, τ, 0)]b_0.
We next perform a change of variables to write the left-hand side of (<ref>) as a -equation. To that end, let S: ^3→^3 be a function such that for a fixed β∈ (1/√(3), 1), we have
S(, p, r)=(β, p-, √(1-β^2)r)=:(t,x_1, τ).
Then by using (<ref>), we obtain from (<ref>) that
(_+i_r)b_0'=1/2[-i√(1-β^2) H(√(1-β^2)r)+ βa_ζ'(, p, r, 0)]b_0',
where a_ζ'=a_ζ∘ S and b_0'=b_0∘ S.
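For clarity, we record the chain rule computation behind this reduction: since t=βt̃, x_1=p-t̃, and τ=√(1-β^2) r, we have, for b_0'=b_0∘ S,
∂_t̃ b_0' = (β∂_t b_0-∂_x_1 b_0)∘ S, ∂_r b_0' = √(1-β^2) (∂_τ b_0)∘ S,
so that (∂_t̃+i∂_r)b_0' = ((β∂_t-∂_x_1+i√(1-β^2)∂_τ)b_0)∘ S, which converts the transport equation for b_0 into the equation displayed above for b_0'.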
Writing
= 1/2(_+i_r),
we look for a solution to (<ref>) of the form b_0'(, p, r; ζ)=e^Φ_1, ζ(, p, r)+f_1(r). By a direct computation, we see that in order for such a function b_0' to solve (<ref>), the functions Φ_1, ζ and f_1 need to satisfy
Φ_1, ζ(, p, r) = β/4a_ζ'(, p, r, 0)
=
β/4a_ζ'(, p, γ(r))
and
_r f_1= -√(1-β^2)/2 H(√(1-β^2)r).
Note that f_1 can be obtained by integrating the right-hand side of (<ref>) with respect to r.
In order to solve the -equation (<ref>), we use the fundamental solution E(,r)=1/π(+ir) of the -operator <cit.> to take
Φ_1, ζ(, p, r)=β/4 (E∗a_ζ')(, p, γ(r)).
While forming the convolution over the complex variable +ir, we note that by Proposition <ref>, the function a_ζ' is compactly supported in ^2 × M_0^int. Since γ is a non-tangential geodesic in (M_0,g_0), we may assume without loss of generality that the map (,p, r) ↦a_ζ'(, p, γ(r)) is smooth and compactly supported in the entire (,p, r)-space so that estimate (<ref>) still holds.
Therefore, we have obtained a C^∞-smooth solution b_0'(, p, r; ζ)=e^Φ_1, ζ(, p, r)+f_1(r) of (<ref>) defined in the whole (, p, r)-space.
To verify that b_0 satisfies (<ref>), we need to estimate b_0(·; ζ) and its first and second order derivatives over the set [0,T/β] ×J_p × [r_0-δ, r_0+δ], where J_p⊂ is an open and bounded interval such that the respective p-coordinate of each point in Q is in J_p. Since the function a_ζ is supported in some open and bounded set of ^2× M_0^int, as given in Proposition <ref>, there exists some compact set K ⊂^2 such that for every (, p, r) ∈ [0,T/β] ×J_p × [0, L/√(1-β^2)] it holds that
|Φ_1,ζ(, p, r)|
≤∫_K|E(-t,r-s)||a'_ζ(t, p, s)|dtds
≤E_(,r)_L^1(K)a'_ζ_L^∞,
where E_,r(t,s)=E(-t,r-s). Due to the local integrability of E, the term E_(,r)_L^1(K) has a uniform bound for all (,r)∈ [0,T/β] × [0, L/√(1-β^2)]. Then it follows from estimate (<ref>) that Φ_1,ζ_L^∞=(1). Furthermore, by replacing the function a_ζ with _a_ζ in (<ref>) and utilizing (<ref>) again, we get _Φ_1,ζ_L^∞=o(ζ^-1). We also obtain the following estimates by using similar arguments:
∇_g_0Φ_1,ζ_L^∞, _pΦ_1,ζ_L^∞=o(ζ^-1), Δ_g_0Φ_1,ζ_L^∞,_t̃^2 Φ_1,ζ_L^∞=o(ζ^-2), ζ→ 0.
To complete the verification of (<ref>), we connect the semiclassical parameter h and the regularization parameter ζ by setting ζ=h^α, 0<α<1/2. Note that the change of coordinates function S, given by (<ref>), is independent of ζ. Hence, this choice of ζ, in conjunction with estimates
(<ref>), (<ref>), and f_1_L^∞=(1), yields
b_0(·; h)_L^∞=(1),
∇_g_0 b_0(·; h)_L^∞,_t b_0(·; h)_L^∞, _x_1 b_0(·; h)_L^∞=o(h^-α),
Δ_g_0 b_0(·; h)_L^∞,_t^2 b_0(·; h)_L^∞=o(h^-2α), h→ 0.
By substituting (<ref>)–(<ref>) into the left-hand side of (<ref>), we get from (<ref>) that
2_x_1b_0-2β_tb_0 - 2i⟨∇_g_0Θ, ∇_g_0b_0(x_1, ·)⟩_g_0 - i(Δ_g_0Θ)b_0 + βa_ζ b_0
= -2iH(τ)y·_yb_0+(|y|^2)_τ b_0+𝒪(|y|^2)_y b_0+(|y|)+(|y|h^-α).
= (|y|h^-α).
Thus, the equation (<ref>) is verified.
Let us now verify that the estimates in (<ref>) hold for the quasimode
v_s(, p, r, y;h)=e^isΘ(√(1-β^2)r, y)b'(, p, r,y; h)=e^isΘ(√(1-β^2)r, y)h^-n-2/4b_0'(, p, r; h)χ(y/δ')
in the open set (0,T/β) × J_p ×Ω of Q, where Ω⊂ M_0 is the domain of Fermi coordinates near the point z_0=γ(τ_0).
To establish this, we shall need the following estimate for any k∈:
h^-n-2/4 |y|^k e^-Θ/h_L^2(|y| ≤δ'/2) ≤h^-n-2/4 |y|^k e^-d/h|y|^2_L^2(|y| ≤δ'/2)
≤(∫_^n-2 h^k|z|^2k e^-2d|z|^2dz)^1/2
=(h^k/2), h → 0.
Here we applied estimate (<ref>) and the change of variable z=h^-1/2y.
We are now ready to start verifying (<ref>) locally. To that end, we use (<ref>), (<ref>), and (<ref>) with k=0 to get
v_s_L^2([0,T/β]×J_p ×Ω) ≤b_0'_L^∞([0, T/β]×J_p× [r_0-δ, r_0+δ])e^isΘh^-n-2/4χ(y/δ')_L^2([0, T/β]×J_p ×Ω)
≤𝒪(1)h^-n-2/4e^-d/h |y|^2_L^2(|y|≤δ'/2)=𝒪(1), h→ 0.
Let us next estimate _tv_s_L^2([0, T/β]×J_p ×Ω). Using (<ref>), (<ref>), and
ζ=h^α, 0<α <1/2, we obtain
_tv_s_L^2([0, T/β]×J_p×Ω)=o(h^-1/2), h→ 0.
We now proceed to estimate e^s(β t+x_1)ℒ_g,-a,q-_tae^-s(β t+x_1)v_s_L^2([0, T/β]× J_p×Ω). Let us start with the first term on the right-hand side of (<ref>). Using (<ref>), (<ref>), (<ref>), and (<ref>) with k=3, we obtain
h^2e^isΘs^2 (⟨∇_g_0Θ, ∇_g_0Θ⟩_g_0 - (1 - β^2))b_L^2([0, T/β]×J_p ×Ω)
=h^2e^isΘs^2 h^-n-2/4(⟨∇_g_0Θ, ∇_g_0Θ⟩_g_0 - (1 - β^2))b_0'χ(y/δ')_L^2([0, T/β]×J_p ×Ω)
≤(1) h^-n-2/4 |y|^3 e^-d/h |y|^2_L^2(|y| ≤δ'/2) =(h^3/2), h → 0.
We next consider the second term on the right-hand side of (<ref>). From a direct computation, we see that
|e^isΘ|=e^-1/hΘe^-λΘ=e^-√(1-β^2)/2h H(τ)y· ye^-λ√(1-β^2)τe^-λ(|y|^2).
We observe that e^-1/h=(h^∞). Therefore, on the support of ∇_g_0χ(y/δ') we deduce from (<ref>) that
|e^isΘ|
≤
e^-d̃/h for some d̃>0.
Thus, using estimates (<ref>) and (<ref>) with k=1, in conjunction with the triangle inequality, we have
h^2 e^isΘs(2_x_1b-2β_tb - 2i⟨∇_g_0Θ, ∇_g_0b(x_1, ·)⟩_g_0 - i(Δ_g_0Θ)b+ βa_ζ b)_L^2([0, T/β]×J_p ×Ω)
≤(h) e^isΘh^-n-2/4 [|y|h^-αχ(y/δ')-2i∇_g_0Θ, ∇_g_0χ(y/δ')_g_0]_L^2([0, T/β]×J_p ×Ω)
≤(h) h^-n-2/4 |y| h^-α e^-d/h|y|^2_L^2(|y| ≤δ'/2) +(e^-d̃/h)
=(h^3/2-α)=o(h), h → 0.
We want to emphasize that in the second term on the right-hand side of (<ref>) we have a̅ instead of its smooth approximation a_ζ, which appeared in (<ref>). To remedy this discrepancy, we use estimates (<ref>) and (<ref>) with k=0 to get
h^2 e^isΘs β (a-a_ζ) b_L^2([0, T/β]×J_p ×Ω) = (h) e^isΘ (a-a_ζ) h^-n-2/4b_0'χ(y/δ')_L^2([0, T/β]×J_p ×Ω)
≤(h)a-a_ζ_L^∞([0, T/β]×J_p ×Ω)h^-n-2/4 e^-d/h |y|^2_L^2(|y| ≤δ'/2)
=o(h), h → 0.
Finally, we estimate the third term on the right-hand side of (<ref>). To that end, we utilize estimates (<ref>) and (<ref>) with k=0 to obtain
h^2 e^isΘ_t^2b_L^2([0, T/β]×J_p ×Ω)=o(h^2(1-α))=o(h), h→ 0.
To estimate the term involving the Δ_g, we incorporate estimates (<ref>), (<ref>), and (<ref>) with k=0, as well as the triangle inequality to get
h^2 e^isΘ(-Δ_gb)_L^2([0, T/β]×J_p ×Ω)
≤ (h^2)h^-n-2/4 e^isΘχ(y/δ')Δ_g b_0'_L^2([0, T/β]×J_p ×Ω)
+(h^2)h^-n-2/4 e^isΘ [b_0' Δ_g χ(y/δ')+2∇_g b_0', ∇_g χ(y/δ')_g]_L^2([0, T/β]×J_p ×Ω)
≤ (h^2)(h^-n-2/4 e^-d/h |y|^2h^-2α_L^2(|y| ≤δ'/2) + (e^-d̃/h))
= (h^2(1-α))+(e^-d̃/h)=o(h), h→ 0.
For the lower order terms, we obtain from (<ref>) that
h^2 e^isΘ(-a_tb+(q-_ta)b)_L^2([0, T/β]×J_p ×Ω)=o(h^2-α)=o(h^3/2), h→ 0.
Therefore, by combining estimates (<ref>)–(<ref>), we conclude from (<ref>) that
e^s(β t+x_1)h^2ℒ_g, -a, q-_tae^-s(β t+x_1)v_s_L^2([0, T/β]×J_p ×Ω)=o(h), h→ 0.
This completes the verification of (<ref>) locally in the set (0, T/β)× J_p×Ω.
Before proceeding to the global construction, we need an estimate for v_s(t, x_1, ·)_L^2( M_0) for later purposes. If Ω contains a boundary point x_0=(τ_0, 0) ∈ M_0, then γ̇(τ_0) is transversal to M_0. Let ρ be a boundary defining function for M_0 so that M_0 is given by the level set ρ(r, y)=0 near x_0, and ∇_g_0ρ is normal to M_0 at x_0. These imply that _τρ(x_0)≠ 0. By the implicit function theorem, there exists a smooth function y↦ r(y) near 0 such that M_0 near x_0 is given by {(r(y), y):|y|<r_0} for some r_0>0 small, see the proof of <cit.>.
Using (<ref>), (<ref>), and (<ref>), we see that there exists a constant C such that
|v_s(t, x_1, τ, y;h)|≤ Ch^-n-2/4e^-d/h |y|^2χ(y/δ').
Thus, after shrinking the set Ω if necessary and using (<ref>) with k=0 along with (<ref>), we get
v_s(t, x_1, ·)^2_L^2( M_0 ∩Ω) = ∫_|y|_e<r_0 |v_s(t, x_1, r(y), y)|^2dS_g(y)
≤(1) ∫_^n-2 h^-n-2/2e^-2d/h |y|^2 dy =(1), h→ 0.
Finally, we glue together the quasimodes defined along small pieces of the geodesic γ to obtain the quasimode v_s in Q. Since M̂_0 is compact and γ(r):(-2ε, L/√(1-β^2)+2ε)→M̂_0 is a unit speed non-tangential geodesic that is not a loop, <cit.> shows that γ|_(-2ε, L/√(1-β^2)+2ε) self-intersects only at finitely many times r_j with
-ε =r_0< r_1< …<r_N<r_N+1 =L/√(1-β^2)+ε.
It follows from <cit.> that there exists an open cover {(Ω_j, κ_j)_j=0^N+1} of γ([-ε, L/√(1-β^2)+ε]) consisting of coordinate neighborhoods that have the following properties:
(1)κ_j(Ω_j)=I_j× B, where I_j are open intervals and B=B(0, δ') is an open ball in ^n-2. Here δ'>0 can be taken arbitrarily small and same for each Ω_j.
(2)κ_j(γ(r))=(r, 0) for r ∈ I_j.
(3)r_j only belongs to I_j and I_j∩I_k=∅ unless |j-k|≤ 1.
(4)κ_j= κ_k on κ_j^-1((I_j∩ I_k)× B).
As explained in <cit.>, the intervals I_j, r, in r variable, can be chosen as
I_0,r=(-2ε, r_1-δ̃), I_j,r=(r_j-2δ̃, r_j+1-δ̃), j=1, …, N,
I_N+1, r=(r_N+1-2δ̃, L/√(1-β^2)+2ε)
for some δ̃>0 small enough. In the case when γ does not self-intersect, there is a single coordinate neighborhood of γ|_[-ε, L+ε] such that (1) and (2) are satisfied.
We proceed as follows to construct the quasimode v_s. Suppose first that γ does not self-intersect at r=0. Using the procedure from the earlier part of this proof, we find a quasimode
v_s^(0)(, p, r, y;h)=h^-n-2/4e^isΘ^(0)(√(1-β^2)r, y)e^Φ_1, h(, p, r)+f_1(r)χ(y/δ')
in Ω_0 with some fixed initial conditions at r=-ε for the Riccati equation (<ref>) determining Θ^(0). We now choose some r_0' such that γ(r_0')∈Ω_0∩Ω_1, and let
v_s^(1)(, p, r, y;h)=h^-n-2/4e^isΘ^(1)(√(1-β^2)r, y)e^Φ_1,h(, p, r)+f_1(r)χ(y/δ')
be the quasimode in Ω_1 by choosing the initial conditions for (<ref>) such that Θ^(1)(r_0')= Θ^(0)(r_0').
Here we have used the same functions Φ_1,h and f_1 in v_s^(0) and v_s^(1) since Φ_1,h and f_1 are both globally defined for all r∈ (-2ε, L/√(1-β^2)+2ε), and neither of the functions depends on y. On the other hand, since the equations determining the phase functions Θ^(0) and Θ^(1) have the same initial data in Ω_0 and in Ω_1, and the local coordinates κ_0 and κ_1 coincide on κ_0^-1((I_0∩ I_1)× B), we have Θ^(1)(t̃, p, ·)=Θ^(0)(t̃, p, ·) in I_0∩ I_1. Therefore, we conclude that v_s^(0)=v_s^(1) in the overlapping region Ω_0∩Ω_1. Continuing in this way, we obtain quasimodes v_s^(2), …, v_s^(N+1) such that
v_s^(j)(t̃, p, ·)=v_s^(j+1)(t̃, p, ·) in Ω_j∩Ω_j+1
for all t̃ and p. If γ self-intersects at r=0, we start the construction from v^(1) by fixing initial conditions for (<ref>) at r=0 and find v^(0) by going backwards.
Let χ̃_j, j=0,…, N+1, be a partition of unity subordinate to (Ω_j)_j=0^N+1 and define
v_s=∑_j=0^N+1χ̃_j v_s^(j).
Then v_s∈ C^∞(Q) and is supported in a small neighborhood of γ([-ε, L+ε]).
Let z_1, …, z_R∈ M_0 be distinct self-intersection points of γ, and let 0≤ r_1<⋯<r_R be the times of self-intersections. Let V_j be a small neighborhood in M̂_0 centered at z_j, j=1, …, R. As explained in <cit.>, for δ' sufficiently small, we can pick a finite cover W_1, …, W_S of remaining points on the geodesic such that W_k⊂Ω_l(k) for some index l(k) and
(v_s(, p, ·))∩ M_0 ⊂ (∪_j=1^RV_j) ∪ (∪_k=1^SW_k).
Moreover, the quasimode restricted on V_j and W_k is of the form
v_s(, p,·)|_V_j=∑_l:γ(r_l)=z_jv_s^(l)(, p,·)
and
v_s(, p, ·)|_W_k=v_s^l(k)(, p, ·),
respectively. Since v_s is a finite sum of v^(l) in each case, the estimate v_s(, p,·)_L^2( M_0)=(1) and those in (<ref>) follow from corresponding local considerations (<ref>), (<ref>), and (<ref>) for each of v_s^(l), respectively.
We next seek a Gaussian beam quasimode for the operator e^-s(β t+x_1)ℒ_g, a,qe^s(β t+x_1) of the form
w_s=e^isΘB
with the phase function Θ∈ C^∞(Ω, ) that satisfies (<ref>) and B (t, x_1, τ, y)∈ C^∞(××Ω) supported near Γ.
We replace s in (<ref>) and (<ref>) by -s and recall that Θ is independent of x_1 to obtain
e^-s(β t+x_1)ℒ_g, a,qe^s(β t+x_1)w_s
= e^isΘ[e^-isΘe^-s(β t+x_1)ℒ_g, a,qe^s(β t+x_1) e^isΘB]
= e^isΘ[s^2(⟨ dΘ, dΘ⟩_g_0-(1-β^2))B
+s(-2_x_1B+2β_tB-2i⟨ dΘ, dB(x_1, ·)⟩_g_0 -i(Δ_g_0Θ)B + β a B)
+ℒ_g,a,qB].
We next find the amplitude B in the form of
B(t, x_1, τ, y;h)=h^-n-2/4B_0(t, x_1, τ;h) χ(y/δ'),
where B_0∈ C^∞(××{τ:|τ-τ_0|< δ}). To that end, by proceeding similarly to the construction of b_0, we require that B_0 solves
(β_t-_x_1-i√(1-β^2)_τ) B_0 =1/2[i√(1-β^2) H(τ)-β a(t, x_1, τ, 0)] B_0.
Using the change of coordinates (<ref>) again, we get
(_-i_r) B_0'=1/2[i√(1-β^2) H(√(1-β^2)r)-β a_ζ' (, p, r, 0)] B_0',
where B_0'=B_0∘ S and a_ζ'=a_ζ∘ S.
By writing =1/2(_-i_r)
and looking for a solution of the form B_0=e^Φ_2(, p, r)+f_2(r)η(, p, r) with η=0, we see that the functions Φ_2,ζ and f_2 must satisfy
Φ_2,ζ=-1/4β a_ζ'(, p,γ(r))
and
_rf_2 = -√(1-β^2)/2 H(√(1-β^2)r).
Using similar arguments as in the construction of v_s, we obtain a Gaussian beam quasimode w_s∈ C^∞(Q) such that the estimates in (<ref>) hold. This completes the proof of Proposition <ref>.
By the proof of Proposition <ref>, for each non-tangential geodesic γ[0, L/√(1-β^2)]→ M_0 and h>0 there exist smooth functions Φ_1,h,Φ_2,h in [0,T/β] ×J_p × [0, L/√(1-β^2)] satisfying
(_+i_r) Φ_1,h(, p, r) = 1/2βa_h'(, p, γ(r)) and
(_-i_r) Φ_2,h(, p, r) = -1/2β a'_h(, p, γ(r)),
where a_h'=a_h∘ S, and the change of coordinates S is given by (<ref>). Here J_p⊂ is an open and bounded interval such that for each point in Q the respective p-coordinate is in J_p. In the next lemma we study the behavior for these functions as h → 0.
Let β∈ (1/√(3),1), and let γ[0, L/√(1-β^2)]→ M_0 be a non-tangential geodesic in (M_0,g_0) as in Proposition <ref>. Then there exist continuous functions Φ_1 and Φ_2 in [0,T/β] ×J_p × [0, L/√(1-β^2)] that satisfy
(_+i_r) Φ_1(, p, r) = 1/2βa'(, p, γ(r)) and
(_-i_r) Φ_2(, p, r) = -1/2β a'(, p, γ(r)),
respectively. Furthermore, the following estimate holds
Φ_j, h-Φ_j_L^∞([0,T/β] ×J_p × [0, L/√(1-β^2)])=o(1), j=1,2, h → 0.
With a slight abuse of notation, we consider the compactly supported function
a̅ '(,p,r) =a'(, p, γ(r)) in ^3 and define a continuous function
Φ_1(, p, r)=β/4 (E∗a')(, p, r).
Here the convolution is taken over the complex variable +ir.
Since E:=1/π(+ir) is the fundamental solution for the -operator, we see that
Φ_1 = 1/4βa'.
Lastly, estimate (<ref>) follows from the local integrability of E, estimate (<ref>), and an inequality analogous to (<ref>). The analogous claims for j=2 follow by the same arguments. This completes the proof of Lemma <ref>.
We want the Gaussian beam quasimodes to concentrate along the geodesic as h→ 0. This is formalized in the following result.
Let s=1/h+iλ, 0<h≪ 1, λ∈ fixed, and β∈ (1/√(3),1). Let γ[0, L/√(1-β^2)]→ M_0 be a non-tangential geodesic in (M_0,g_0) as in Proposition <ref>. Let J_p be as above.
Let v_s and w_s be the quasimodes from Proposition <ref>. Then for each ψ∈ C(M_0) and (', p')∈ [0,T/β]× J_p we have
lim_h→ 0∫_{'}×{p'}× M_0v_sw_sψ dV_g_0 = (1-β^2)^-n-6/4 ∫_0^L/√(1-β^2) e^-2(1-β^2)λ re^Φ_1(', p', r)+Φ_2(', p', r)
η(', p',r)ψ(γ(r))dr.
Here the functions Φ_1, Φ_2∈ C( [0,T/β] ×J_p × [0, L/√(1-β^2)]) are as in Lemma <ref>,
and η∈ C^∞( [0,T/β] ×J_p × [0, L/√(1-β^2)]) with (_-i_r)η=0.
By a partition of unity, it suffices to verify (<ref>) for ψ∈ C_0(V_j∩ M_0) and ψ∈ C_0(W_k∩ M_0), where V_j and W_k are same as in the proof of Proposition <ref>.
We first consider the easier case that ψ∈ C_0(W_k∩ M_0) for some k. Here ψ may extend to M_0, and we extend ψ by zero outside W_k∩ M_0. Arguing similarly as in the proof of Proposition <ref>, we obtain Gaussian beam quasimodes
v_s(, p, r, y)=e^isΘ(√(1-β^2)r, y)h^-n-2/4e^Φ_1,h(, p, r)+f_1(r)χ(y/δ'),
w_s(, p, r, y)=e^isΘ(√(1-β^2)r, y)h^-n-2/4e^Φ_2,h(, p, r)+f_2(r)η(t̃,p, r)χ(y/δ').
Using (<ref>), we get
|g_0(r, y)|^1/2=√(1-β^2)+(|y|^2).
It then follows from (<ref>), (<ref>), (<ref>), and (<ref>) that
∫_{'}×{p'}× M_0v_sw_sψ dV_g_0
= √(1-β^2)∫_0^L/√(1-β^2)∫_^n-2e^-2/hΘ e^-2λΘ h^-n-2/2 e^Φ_1,h(', p', r)+Φ_2,h(', p', r)+f_1(r)+f_2(r)
χ^2(y/δ')η(, p, r)ψ(r, y)|g_0|^1/2dydr
= √(1-β^2)∫_0^L/√(1-β^2)∫_^n-2e^-1/h√(1-β^2) H(√(1-β^2)r)y· y e^-2λ (1-β^2)r e^λ𝒪(|y|^2)h^-n-2/2χ^2(y/δ')η(', p', r)
e^Φ_1,h(', p', r)+Φ_2,h(', p', r)+f_1(r)+f_2(r)ψ(r,y)(√(1-β^2)+𝒪(|y|^2))dydr
= (1-β^2)∫_0^L/√(1-β^2)∫_^n-2e^-√(1-β^2) H(√(1-β^2)r)y· y e^-2(1-β^2)λ r e^hλ𝒪(|y|^2)χ^2(h^1/2y/δ')
e^Φ_1,h(', p', r)+Φ_2,h(', p', r)+f_1(r)+f_2(r)ψ(r,h^1/2y)(1+h𝒪(|y|^2))η(', p', r)dydr,
where we have performed a change of variables y↦ h^1/2y in the last step.
Passing to the limit h→ 0 in (<ref>), we get the following pointwise limits
e^hλ𝒪(|y|)^2→ 1, χ^2(h^1/2y/δ')→ 1, ψ(r, h^1/2y)→ψ(r, 0)=ψ(γ(r)), Φ_i,h→Φ_i,
where we used (<ref>) in the last one. We recall that
Θ= 1/2√(1-β^2) H(√(1-β^2)r)y· y≥ d|y|^2 and ∫_^n-2e^-d|y|^2dy<∞.
Hence, by the dominated convergence theorem, we get
lim_h→ 0∫_{'}×{p'}× M_0v_sw_sψ dV_g_0
= (1-β^2) ∫_0^L/√(1-β^2) e^f_1(r)+f_2(r)(∫_^n-2 e^-√(1-β^2) H(√(1-β^2)r)y· ydy)
η(', p', r)e^-2(1-β^2)λ re^Φ_1(', p', r)+Φ_2(', p', r)ψ(γ(r)) dr.
To simplify the expression on the right-hand side of (<ref>), we perform a change of variable y↦ (1-β^2)^-1/4y to obtain
∫_^n-2 e^-√(1-β^2) H(√(1-β^2)r)y· ydy = ∫_^n-2 (1-β^2)^-n-2/4e^- H( √(1-β^2)r)y· y dy
= π^n-2/2(1-β^2)^-n-2/4/√(det( H( √(1-β^2)r))).
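The computation above is the standard Gaussian integral, recalled here for convenience (positive definiteness of the real part of H along the geodesic is taken for granted from the construction of the phase): ∫_^m e^-Ay· ydy = π^m/2/√(det A) for a symmetric m× m matrix A with positive definite real part, applied with m = n-2 and A = √(1-β^2) H(√(1-β^2)r), after the rescaling y↦ (1-β^2)^-1/4y which produces the factor (1-β^2)^-n-2/4.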
We set r_0=√(1-β^2)τ_0 and recall from <cit.> that
det( H( √(1-β^2)r)) = det( H( √(1-β^2)r_0)) e^-2 ∫_r_0^r√(1-β^2)tr H( √(1-β^2)w)dw.
This implies
∫_^n-2 e^-√(1-β^2) H(√(1-β^2)r)y· ydy=π^n-2/2(1-β^2)^-n-2/4e^∫_r_0^r√(1-β^2)tr(H( √(1-β^2)w))dw/√(det( H(√(1-β^2)r_0))).
Furthermore, since
_r f_j(r)= -1/2√(1-β^2) H(√(1-β^2)r), j=1, 2,
we get
_r (f_1(r)+f_2(r)) =- √(1-β^2) H(√(1-β^2)r).
Thus, by the fundamental theorem of calculus, we have
f_1(r)+f_2(r)=f_1(r_0)+f_2(r_0)-∫_r_0^r√(1-β^2)tr(H(√(1-β^2)w))dw.
We next choose f_1(r_0) and f_2(r_0) so that
e^f_1(r_0)+f_2(r_0)π^n-2/2/√(det( H(√(1-β^2)r_0)))=1.
Thus, it follows from (<ref>), (<ref>), and (<ref>) that
e^f_1(r)+f_2(r)(∫_^n-2 e^-√(1-β^2) H(√(1-β^2)r)y· ydy)=(1-β^2)^-n-2/4.
Finally, we obtain (<ref>) for ψ∈ C_0(W_k∩ M_0) by substituting (<ref>) into (<ref>).
Let us now verify (<ref>) when ψ∈ C_0(V_j∩ M_0) for some j. In this case, we have
v_s=∑_l:γ(r_l)=z_j v_s^(l), w_s=∑_l:γ(r_l)=z_j w_s^(l)
on (ψ), thus
v_sw_s= ∑_l:γ(r_l)=z_jv_s^(l)w_s^(l)+∑_l l':γ(r_l)=γ(r_l')=z_jv_s^(l)w_s^(l').
Following similar arguments as in <cit.>, our aim is to prove that the contribution of the cross terms vanishes in the limit h→ 0. More precisely,
lim_h→ 0∫_{'}×{p'}× M_0v_s^(l)w_s^(l')ψ dV_g_0=0, l l'.
If so, then the limit (<ref>) follows from the first part of this proof. To that end, let us write
v_s^(l)=e^i/hΘ^(l)ϕ^(l), ϕ^(l)=e^-λΘ^(l)e^-sΘ^(l)b^(l),
and
w_s^(l')=e^i/hΘ^(l')ω^(l'), ω^(l')=e^-λΘ^(l')e^-sΘ^(l')B^(l'),
which imply that
v_s^(l)w_s^(l')= e^i/hσϕ^(l)ω^(l'), σ= Θ^(l') - Θ^(l).
Hence, in view of (<ref>), we need to show that
lim_h→ 0∫_{'}×{p'}× M_0 e^i/hσϕ^(l)ω^(l')ψ dV_g_0=0, l l'.
To prove (<ref>), we follow the same arguments as in the proof of <cit.>. Let us write ψ=ψ_h+(ψ-ψ_h), where the regularization ψ_h ∈ C^∞_0(V_j∩ M_0), but its support can meet M_0, and ψ-ψ_h is continuous. We recall that due to the estimates in (<ref>) and (<ref>), we have ϕ^(l)_L^2(V_j∩ M_0),ω^(l')_L^2(V_j∩ M_0)=(1). Thus, by Hölder's inequality and estimate (<ref>), we obtain
|∫_{'}×{p'}× M_0e^i/hσϕ^(l)ω^(l')(ψ-ψ_h) dV_g_0|≤v_s^(l)_L^2w_s^(l')_L^2ψ-ψ_h_L^∞ =o(1), h → 0,
where the L^p-norms are taken over the set V_j∩ M_0.
To analyze the term involving the smooth part ψ_h, we note that by (<ref>) and (<ref>), the gradients of Θ^(l) and Θ^(l') at z_j are parallel to γ̇(r_l) and γ̇(r_l'), respectively. Since the geodesic γ intersects itself transversally at z_j, we have ∇_g_0σ(z_j) 0. Therefore, by shrinking the set V_j if necessary, we may assume that σ has no critical points in V_j.
Thus, the vector field L=ϕ^(l)ω^(l')ψ_h/|∇_g_0σ|^2∇_g_0σ is well-defined and satisfies
e^i/hσϕ^(l)ω^(l')ψ_h
=
-ihL(e^i/hσ).
Therefore,
we integrate by parts in (<ref>) to obtain
∫_{'}×{p'}× M_0e^i/hσϕ^(l)ω^(l')ψ_hdV_g_0
=
-ih∫_{'}×{p'}× (V_j∩ M_0)_νσ/ |∇_g_0σ|^2 e^i/hσϕ^(l)ω^(l')ψ_hdS_g_0
+ih ∫_{'}×{p'}× M_0 e^i/hσ
÷(L) dV_g_0,
To show that the boundary term on the right-hand side of (<ref>) vanishes as h→ 0, we use estimate (<ref>) to observe that ϕ^(l)_L^2( M_0), ω^(l')_L^2( M_0)=(1). Furthermore, σ is real valued and independent of h. Then
by estimate (<ref>) and Hölder's inequality, we have
-ih∫_{'}×{p'}× (V_j∩ M_0) _νσ/|∇_g_0σ|^2 e^i/hσϕ^(l)ω^(l')ψ_hdS_g_0
=
(h), h → 0.
To prove that the second term on the right-hand side of (<ref>) vanishes as h→ 0, we first compute that
÷ L
=÷(ϕ^(l)ω^(l')ψ_h ∇_g_0σ/|∇_g_0σ|^2)
=⟨∇_g_0(ϕ^(l)ω^(l')ψ_h), ∇_g_0σ/|∇_g_0σ|^2⟩_g_0
+
ϕ^(l)ω^(l')ψ_h ÷(∇_g_0σ/|∇_g_0σ|^2).
Using estimates (<ref>), ϕ^(l)_L^2(M_0),ω^(l')_L^2(M_0)=(1), and Hölder's inequality, we get
ih ∫_{'}×{p'}× M_0 e^i/hσϕ^(l)ω^(l')ψ_h ÷(∇_g_0σ/|∇_g_0σ|^2) dV_g_0
=(h), h → 0.
Next, we write
ϕ^(l)ω^(l')ψ_h
=[e^-λ(Θ^(l)+Θ^(l'))e^-iλ(Θ^(l')-Θ^(l))][e^-1/h(Θ^(l)+Θ^(l'))h^-n-2/2][b_0^(l)B_0^(l')χ^2(y/δ')]ψ_h
=f_1f_2f_3ψ_h.
To streamline the proof, we only provide the estimate for the worst case scenario, which occurs when ∇_g_0 acts on f_2=e^-1/h(Θ^(l)+Θ^(l'))h^-n-2/2. To that end, due to (<ref>), there exists C>0 such that by Cauchy-Schwarz inequality, as well as estimates (<ref>) and (<ref>) with k=1/2, we have
h∫_{'}×{p'}× M_0
|e^i/hσ
f_1f_2ψ_h ⟨∇_g_0f_2, ∇_g_0σ/|∇_g_0σ|^2⟩|
dV_g_0
≤
Cψ_h_L^∞ϕ^(l)_L^2ω^(l')_L^2∫_{'}×{p'}× M_0
h^-n-2/2|y|e^-d/h |y|^2
dV_g_0
=
(h^1/2), h → 0.
Hence, we conclude that the second term in the right-hand side of (<ref>) is of order (h^1/2), thus the limit (<ref>) is verified. This completes the proof of Proposition <ref>.
§.§ Construction of CGO solutions
We now proceed to construct CGO solutions of the forms (<ref>) and (<ref>). Thanks to the interior Carleman estimate established in Proposition <ref>, we can put the ingredients together in a simple way. Let (M, g) be a CTA manifold given by Definition <ref>. We already computed in Section <ref> that
c^n+2/4∘ℒ_c,g,a,q∘ = ℒ_g̃, ã, q̃,
where g̃=e⊕ g_0, ã=ca, and q̃=c(q-c^n-2/4Δ_g()). This implies that u=ũ satisfies ℒ_c,g,a,q u=0 if ũ solves ℒ_g̃, ã, q̃ũ=0 in Q.
Let us write (t,x)=(t,x_1, x') for local coordinates in Q and recall that
s=1/h+iλ, 0<h≪ 1, where λ∈ is fixed. We are interested in finding CGO solutions to the equation
ℒ_g̃, ã, q̃ũ=0 in Q,
of the form
ũ=e^-s(β t+x_1)(v_s+r),
where v_s is the Gaussian beam quasimode given in Proposition <ref>, and r=r_s is a correction term that vanishes in the limit h→ 0. Indeed, ũ is a solution to (<ref>) if
e^s(β t+x_1)h^2ℒ_g̃, ã, q̃e^-s(β t+x_1)r=-e^s(β t+x_1)h^2ℒ_g̃, ã, q̃e^-s(β t+x_1)v_s.
Then we apply Proposition <ref> with v=-e^s(β t+x_1)h^2ℒ_g̃, ã, q̃e^-s(β t+x_1)v_s and estimate (<ref>) to conclude that there exists r∈ H^1(Q^int) such that (<ref>) holds and r_H^1_(Q)= o(1) as h→ 0.
We summarize our discussion above in the following proposition. In particular, we allow a general conformal factor c in this result instead of c=1, which we assumed in all of the earlier results in this paper.
Let a ∈ W^1,∞(Q) and q ∈ C(Q). Let s=1/h+iλ with λ∈ fixed. For all h>0 small enough, there exists a solution u_1∈ H^1(Q) to ℒ_c,g, a, q^*u_1=0 of the form
u_1=e^-s(β t+x_1) (v_s+r_1),
where v_s∈ C^∞(Q) is the Gaussian beam quasimode given in Proposition <ref>, and r_1∈ H^1_(Q^int) is such that r_1_H^1_(Q^int)= o(1) as h→ 0.
There also exists a solution u_2∈ H^1(Q) to ℒ_c,g, a, qu_2=0 that has the form
u_2=e^s(β t+x_1) (w_s+r_2),
where w_s∈ C^∞(Q) is the Gaussian beam quasimode given in Proposition <ref>, and r_2∈ H^1_(Q^int) is such that r_2_H^1_scl(Q^int)= o(1) as h→ 0.
§ PROOF OF THEOREM <REF>
Let u_1∈ H^1(Q) be an exponentially decaying CGO solution given by (<ref>) satisfying
ℒ_c,g,a_1,q_1^* u_1 = 0 in Q, and let u_2∈ H^1(Q) be an exponentially growing CGO solution given by (<ref>) satisfying ℒ_c,g,a_2,q_2 u_2 = 0 in Q. Due to the main assumption 𝒞_g,a_1,q_1 = 𝒞_g,a_2,q_2, it follows from Proposition <ref>
that there exists v∈ H__c,g(Q) such that ℒ_c,g,a_1,q_1 v = 0 and
(u_2 - v)|_Σ=(u_2 - v)|_t=0= (u_2 - v)|_t=T= _t(u_2 - v)|_t=0=_ν (u_2 - v)|_V = 0.
Then u : = u_2 - v ∈ H__c,g(Q) solves the equation
ℒ_c,g, a_1, q_1u=a_tu_2+qu_2 in Q,
u|_Σ=u|_t=0= u|_t=T= _tu|_t=0=_ν u|_V = 0.
Here and in what follows we denote a:=a_1-a_2 and q:=q_1-q_2. Since
a_tu_2+qu_2∈ L^2(Q), it follows from <cit.> that
u∈ C^1([0,T];L^2(M))∩ C([0,T];H^1_0(M))⊂ H^1(Q), with _ν u∈ L^2(Σ).
Since u_1∈ H^1(Q) and _c,gu_1∈ L^2(Q), we see that
(c^-1_t u_1, -∇_g u_1)∈ H_÷(Q):={F∈ L^2(Q, TQ): ÷_(t,x)F∈ L^2(Q)}.
We view Q as a compact Riemannian manifold of dimension n+1 with the metric g =dt^2⊕ g,
and let ν̅ be the outward unit normal vector to Q.
In view of <cit.>, for F∈ H_÷(Q), F·ν̅|_ Q can be defined as an element of H^-1/2( Q), and for ψ∈ H^1(Q) we have
⟨ψ, F·ν̅⟩_H^1/2( Q),H^-1/2( Q) = ⟨ψ, ÷_g(F)⟩_L^2(Q) + ⟨∇_gψ,F⟩_L^2(Q).
By taking ψ = u and F=(c^-1_t u_1, -∇_g u_1), we deduce that
⟨ u, (c^-1_t u_1, -∇_g u_1)·ν̅⟩_H^1/2( Q), H^-1/2( Q)
= ⟨ u, _c,gu_1⟩_L^2(Q)
+⟨ (_tu,∇_g u), (c^-1_tu_1,-∇_gu_1) ⟩_L^2(Q).
Arguing similarly and using _tu|_t=0, _tu|_t=T∈ L^2(M), _ν u∈ L^2(Σ), and u_1∈ H^1(Q), we get
⟨ (c^-1_tu,-∇_gu)·ν̅, u_1⟩_L^2( Q)
=⟨_c,gu,u_1⟩_L^2(Q) + ⟨ (c^-1_tu, -∇_g u),(_tu_1, ∇_gu_1) ⟩_L^2(Q).
Since a ∈ W^1,∞(Q) and u, u_1∈ H^1(Q), it follows from the proof of <cit.> that au_1∈ H^1(Q) and _t(au_1)=_ta u_1+a_tu_1 in the weak sense. Therefore, the following integration by parts is justified:
∫_Q (a_1u_1)_t udV_gdt
= -∫_Q _t(a_1u_1)udV_gdt
=-∫_Q ( _ta_1u_1+ a_1_tu_1)udV_gdt.
In particular, there are no boundary terms since u|_t=0=u|_t=T=0.
We then multiply equation (<ref>) by u_1 and integrate over Q. Therefore, we deduce from (<ref>)–(<ref>) that
∫_Q (a_tu_2+qu_2)u_1dV_gdt
=⟨ℒ_c,g, a_1,q_1u,u_1⟩_L^2(Q) - ⟨ u,ℒ_c,g,a_1,q_1^*u_1⟩_L^2(Q)
=∫_0^T∫_M (_c,gu+a_1_tu+q_1u)u_1dV_gdt-∫_0^T∫_M u(_c,gu_1-a_1_tu_1+q_1u_1-_ta_1u_1)dV_gdt
=⟨ (c^-1_tu,-∇_gu)·ν̅, u_1⟩_L^2( Q) - ⟨ u,(c^-1_tu_1,-∇_gu_1)·ν̅⟩_H^1/2( Q), H^-1/2( Q).
Since u|_Σ=u|_t=0= u|_t=T= 0, the second term on the right-hand side of (<ref>) vanishes. Furthermore, since _ν u∈ L^2(Σ) with _tu|_t=0=_ν u|_V = 0, we obtain the integral identity
∫_Q (a_tu_2+qu_2)u_1dV_gdt=-∫_Σ∖ V_ν uu_1dS_g dt
+
∫_M c^-1_t u(T, x)u_1(T, x)dV_g.
We shall next substitute the CGO solutions (<ref>) and (<ref>) into (<ref>), multiply the equation by h, and pass to the limit h→ 0. In order to analyze the limit of the terms on the left-hand side of (<ref>), we use estimates (<ref>) and (<ref>) to obtain the following estimates for the remainder terms:
r_j_L^2(Q)≤r_j_H^1_(Q^int)=o(1), j = 1,2,
_tr_j_L^2(Q)≤1/hr_j_H^1_(Q^int)=o(h^-1), j= 1,2.
On the other hand, the following lemma explains the behaviors of the two terms on the right-hand side of (<ref>) as h→ 0.
Let u_1 and u be the functions described above.
Then the following estimates hold as h→ 0:
∫_M c^-1_t u(T, x)u_1(T, x)dV_g =
{[ 𝒪(h^-1/2), if a≠ 0 ,; ; 𝒪(h^1/2), if a =0. ].
∫_Σ∖ V_ν uu_1dS_g dt = {[ o(h^-1), if a≠ 0 ,; ; o(1), if a =0. ].
We will postpone the proof of this result and use it to prove the uniqueness of the damping coefficient first.
§.§ Uniqueness of the damping coefficient
From the respective CGO solutions (<ref>) and (<ref>) for u_1 and u_2, we compute that
u_2u_1=e^2iλ (β t+x_1)(v_sw_s+v_sr_2+r_1w_s+r_1r_2).
Therefore, by estimates (<ref>), (<ref>), (<ref>), and the Cauchy-Schwartz inequality, we have
u_2u_1_L^1(Q)≤𝒪(1).
Hence, the following estimate holds
h|∫_Q qu_2u_1dV_gdt|≤ hq_L^∞(Q)u_2u_1_L^1(Q) =𝒪(h), h→ 0.
We next consider the term h ∫_Q a(_tu_2)u_1dV_gdt on the left-hand side of (<ref>). To that end, direct computations yield
(_tu_2)u_1= e^2iλ (β t+x_1)[(v_s_t w_s+v_s_t r_2+r_1_tw_s+r_1_t r_2)
+sβ(v_sw_s+v_sr_2+r_1w_s+r_1r_2)].
Using estimates (<ref>), (<ref>), (<ref>), (<ref>), as well as the Cauchy-Schwartz inequality, we obtain
h|∫_Qe^2iλ(β t+x_1) a(v_s_t w_s+v_s_t r_2 + r_1_tw_s +r_1_t r_2)dV_gdt|=o(1), h→ 0,
and
h|∫_Q e^2iλ (β t+x_1) as β (v_sr_2 + r_1w_s+r_1r_2)dV_gdt|=o(1), h→ 0.
Therefore, we have
h∫_Q a_tu_2u_1dV_gdt→∫_Q e^2iλ (β t+x_1)β av_sw_sdV_gdt, h→ 0.
Using (<ref>), (<ref>), and Lemma <ref>, we deduce from (<ref>) that
∫_Q e^2iλ (β t+x_1)β av_sw_sdV_gdt→ 0, h→ 0.
On the other hand, since a_1,a_2 ∈ W^1,∞(Q) and a_1=a_2 on the boundary Q, we can continuously extend a on (^2× M_0)∖ Q by 0 and denote the extension by the same letter. Using dV_g = c^n/2dV_g_0dx_1, the change of coordinates (<ref>), Fubini's theorem, the dominated convergence theorem, and the concentration property (<ref>), we obtain
∫_Q e^2iλ (β t+x_1)β av_sw_sdV_gdt
= ∫_∫_∫_M_0 e^2iλ (β t+x_1) c β av_sw_s dV_g_0dx_1dt
= ∫_∫_∫_M_0β^2e^2iλ ((β^2-1) +p) (c a)v_sw_s dV_g_0dp d
→ β^2 (1-β^2)^-n-6/4∫_∫_∫_0^L/√(1-β^2) e^2iλ ((β^2-1) +p)-2(1-β^2)λ r (ca)(β, p-, γ(√(1-β^2)r))
e^Φ_1(, p, r)+Φ_2(, p, r)η(,p, r)drdp d, h→ 0.
Note that β∈ [1/√(3), 1), hence we conclude that
∫_0^L/√(1-β^2)∫_∫_ e^2iλ ((β^2-1) +p)-2(1-β^2)λ r (ca)(β, p-, γ(√(1-β^2)r))
e^Φ_1(, p, r)+Φ_2(, p, r)η(,p, r)dp d dr=0.
We next follow the arguments in <cit.> closely to prove that (<ref>) holds when e^Φ_1(, p, r)+Φ_2(, p, r)η(,p, r) is removed from the integral. To this end, let us write η(t̃,p, r) := η_1(t̃,r)η_2(p) in (<ref>) with (_t̃-i_r)η_1 = 0 and denote
Ψ(, p, r)= e^2iλ[(β^2-1)+p-i(β^2-1)r]η_1(, r).
It follows from direct computations that (_t̃ - i_r)Ψ=0.
Since a is supported in Q, we get from (<ref>) that
∫_∫_∫_Ψ(, p, r) (ca)(β, p-, γ(√(1-β^2)r)) e^Φ_1(, p, r)+Φ_2(, p, r)η_2(p)dp d dr=0,
and there exists a constant R>0 such that a⊂⊂ B_,p, r(0, R).
Also, since η_2∈ C^∞() is arbitrary, for almost every p∈ we have
∫_Ω_pΨ(, p, r) (ca)(β, p-, γ(√(1-β^2)r)) e^Φ_1(, p, r)+Φ_2(, p, r)d dr=0,
where Ω_p={(t̃,r): (t̃, p, r, y)∈ Q}. We shall view Ω_p as a domain in the complex plane with the complex variable z = t̃ + ir.
Recall that it was explained in the formulae (<ref>)–(<ref>) how to transform the hyperbolic operator ℒ_c,g,a,q into another operator ℒ_g̃, ã, q̃ of the same type, where the contribution of the conformal factor c was moved from the highest order term to the lower order ones. In particular, we have ã=ca, and the construction of the Gaussian beams in Section <ref> is carried over for this damping coefficient. Hence, equation (<ref>) yields
(Φ_1+Φ_2)= 1/4β (ca)(β, p-, γ(√(1-β^2)r)),
Thus, it follows from (<ref>) and (<ref>) that
∫_Ω_p(Ψ(z, p)e^Φ_1(z, p) +Φ_2(z, p))dz ∧ dz=0 for almost every p.
We now discuss the regularity of Φ_i. To that end, using (<ref>) along with the fact that a(·, p)∈ L^∞() is compactly supported for almost every p, we see that Φ_i∈ L^p() for 1≤ p≤∞. By the boundedness of the Beurling-Ahlfors operator ^-1 on L^p(), 1<p<∞, we get Φ_i=^-1(Φ_i), which implies that ∇_gΦ_i∈ L^p(). Furthermore, since Φ_i∈ L^∞(), we have Φ_i(·, p)∈ W^1,p_loc(), 1<p<∞. Hence, we conclude that
Φ_i(·, p) ∈ H^1(Ω_p), i=1,2. Thus, an application of Stokes' theorem <cit.> yields
∫_Ω_pΨ(z, p)e^Φ_1(z, p) +Φ_2(z, p)dz =0.
By <cit.>, see also <cit.>, there exists a non-vanishing function F∈ C(Ω_p), anti-holomorphic in Ω_p, such that
F|_Ω_p=e^Φ_1 +Φ_2|_Ω_p.
Furthermore, the arguments in the proof of <cit.> show that there exists an anti-holomorphic function G∈ C(Ω_p) such that F=e^G in Ω_p, and we may assume that G= Φ_1 +Φ_2 on Ω_p. Choosing η_1 = Ge^-G in (<ref>), we get from (<ref>) that
∫_Ω_p(Φ_1(, p, r) +Φ_2(, p, r))e^2iλ[(β^2-1)+p-i(β^2-1)r]dz=0.
Applying Stokes' theorem again, using (<ref>), and integrating over the p variable, we obtain
∫_0^L/√(1-β^2)∫_∫_ e^2iλ ((β^2-1) +p)-2(1-β^2)λ r (ca)(β, p-, γ(√(1-β^2)r))dp d dr=0.
Finally, we use (<ref>) to return to (t,x_1,τ) coordinates from (t̃, p, r) coordinates and replace 2λ with λ. After these changes, (<ref>) becomes
∫_0^L∫_∫_ e^iλ (β t+x_1)-√(1-β^2)λτ (ca)(t, x_1, γ(τ)) dx_1dt dτ=0.
We are now ready to utilize Assumption <ref>, the invertibility of the attenuated geodesic ray transform on (M_0,g_0). To that end, we let ℱ_(t, x_1)→ (ξ_1, ξ_2) be the Fourier transform in the Euclidean variables (t,x_1) and define
f(x', β, λ) := ∫_∫_ e^iλ (β t+x_1) (ca)(t, x_1, x')dx_1dt
=ℱ_(t, x_1)→ (ξ_1, ξ_2)(ca)|_(ξ_1, ξ_2)=-λ(β, 1),
for x'∈ M_0, β∈ [1/2, 1), λ∈.
Since a∈ W^1,∞(Q), we see that the function f(·, β, λ) is continuous on M_0. Furthermore, as γ is an arbitrarily chosen non-tangential geodesic in (M_0,g_0), we get from (<ref>) that the following attenuated geodesic ray transform vanishes,
∫_0^L e^-√(1-β^2)λτ f(γ(τ), β, λ)dτ=0.
By Assumption <ref>, there exists ε>0 such that f(γ(τ), β, λ)=0 whenever √(1-β^2)|λ|<ε. Hence, there exist β_0∈ (1/√(3), 1), λ_0>0, and δ>0 such that for every (λ, β)∈^2 that satisfies |β-β_0|, |λ-λ_0|<δ, and λ≠ 0, we have √(1-β^2)|λ|<ε. In particular, the mapping (λ, β)↦ -λ(β, 1) is a diffeomorphism when λ≠ 0, implying that ℱ_(t, x_1)→ (ξ_1, ξ_2)(ca)=0 in an open set of ^2. Lastly, the compact support of a and the Paley-Wiener theorem yield that ℱ_(t, x_1)→ (ξ_1, ξ_2)(ca) is real analytic. Therefore, we get ca=0 in Q. Since c is a positive function, we must have a=a_1-a_2=0.
To complete the proof of uniqueness for the damping coefficient in Theorem <ref>, we still need to verify Lemma <ref>.
We shall prove estimate (<ref>) first. To this end, using estimates (<ref>) and (<ref>), the CGO solution (<ref>), and the Cauchy-Schwartz inequality, we get
|∫_M c^-1_t u(T, x)u_1(T, x)dV_g|
≤c^-1_L^∞(Q)∫_M |_t u(T, x)u_1(T, x)|dV_g
≤𝒪(1) ∫_M | e^-s(β T+x_1)_tu(T,x)|(|v_s(T,x)|+|r_1(T,x)|)dV_g
≤𝒪(1) e^-s(β T+x_1)_tu(T,·)_L^2(M).
Utilizing the boundary Carleman estimate (<ref>) and equation (<ref>), we obtain
e^-s(β T+x_1)_tu(T,·)_L^2(M)≤𝒪(h^1/2)e^-s(β t +x_1)ℒ_c,g,a_1,q_1u_L^2(Q).
We then
substitute the CGO solution (<ref>) for u_2 into (<ref>) to get
e^-s(β t +x_1)ℒ_c,g,a_1,q_1u
= e^-s(β t+x_1)(a_tu_2+qu_2)
=[asβ(w_s+r_2)+a(_tw_s+_t r_2)+q(w_s+r_2)].
We recall that s=h^-1+iλ and use estimates (<ref>), (<ref>), and (<ref>) to obtain
e^-s(β t +x_1)ℒ_c,g,a_1,q_1u_L^2(Q)
=
{[ 𝒪(h^-1), if a≠ 0 ,; ; 𝒪(1), if a =0. ].
Therefore, estimates (<ref>)–(<ref>) imply (<ref>).
We next prove estimate (<ref>). To that end, for all ε>0 we set
M_+, ε={x∈ M: _νφ(x)>ε},
and Σ_+, ε=(0, T)× M_+, ε.
We recall that in Section <ref> we defined the open sets U',V' ⊂ M such that they contain the back and front faces M_+, M_- of the manifold M, respectively. By the compactness of {x∈ M: _νφ(x)=0}, there exists ε>0 such that Σ∖ V ⊂Σ_+, ε, where we had set V=(0,T)× V'.
We utilize estimates (<ref>), (<ref>), (<ref>), as well as the Cauchy-Schwartz inequality to get
|∫_Σ∖ V_ν uvdS_g dt|
≤∫_Σ_+, ε e^-s(β t +x_1) |_ν u| (|v_s|+|r_1|)dS_gdt
≤ C (∫_Σ_+, ε |e^-s(β t +x_1)_ν u |^2dS_gdt)^1/2(v_s_L^2(Σ_+,ε)+r_1_L^2(Σ)).
Next we estimate the terms in the inequality above. By Proposition <ref> and estimate (<ref>), in conjunction with the inequalities
r_1_L^2(Σ)≤r_1_L^2(Q)^1/2r_1_H^1(Q)^1/2 and r_1_H^1(Q)≤ Ch^-1r_1_H^1_(Q),
we obtain
r_1_L^2(Σ) =o(h^-1/2), h → 0.
Then we follow the steps in the proof of <cit.> to verify
v_s_L^2(Σ_+,ε)=(1), h → 0.
Due to the product structure of Σ_+,ε and the fact that T>0 is finite, it suffices to prove that v_s_L^2( M_+,ε)=(1).
First, we observe that M_+,ε⊂ M is an open and precompact manifold of dimension n-1, same as M_0. Clearly, the projections π_1 M →, π_1(x_1,x')=x_1, and π_2: M→ M_0, π_2(x_1,x')=x'=(x_2,…,x_n), are smooth. From here, our aim is to show that π_2 is a local diffeomorphism in M_+,ε. To accomplish this, we note that by definition, the vector field x_1 is transversal to M on M_+,ε. Thus, if z_2,…, z_n are some local coordinates in M_+,ε, the functions x_1,z_2,… z_n form local coordinates in × M_0 near z_0. Moreover, the map x ↦ (x_1,x') is a diffeomorphim whose Jacobian matrix in the chosen coordinates writes
[ x_1/ x_1 x_1/ z_2 … x_1/ z_n; x_2/ x_1 x_2/ z_2 … x_2/ z_n; ⋮ ⋮ ⋱ ⋮; x_n/ x_1 x_n/ z_2 … x_n/ z_n ]
=
[ 1 x_1/ z_2 … x_1/ z_n; 0 x_2/ z_2 … x_2/ z_n; ⋮ ⋮ ⋱ ⋮; 0 x_n/ z_2 … x_n/ z_n ].
Thus, the (n-1)× (n-1) matrix x_α/ z_β for α,β= 2,…, n, which is also the differential of π_2, is invertible. By the inverse function theorem, the map π_2 is a local diffeomorphism in M_+,ε.
Let x ∈ M_+,ε be an arbitrary point, and let 𝒰⊂ M_+,ε be a neighborhood of x such that π_2|_𝒰 is a diffeomorphism. Then it follows from the change of variables formula that the pullback of the surface elements satisfies (π_2)^∗(dS_g)=J_π_2dV_g_0, where J_π_2 is the Jacobian of π^-1_2. Therefore, we have
∫_𝒰|v_s|^2dS_g=∫_π_2(𝒰)|v_s∘π_2^-1|^2J_π_2dV_g_0.
Furthermore, after possibly choosing a smaller set 𝒰, we have that the Jacobian J_π_2 is bounded on π_2(𝒰) ⊂ M_0. Therefore, we deduce from (<ref>) and (<ref>) that
∫_𝒰|v_s|^2dS_g=∫_π_2(𝒰)|v_s∘π_2^-1|^2J_π_2dV_g_0=(1).
Since x ∈ M_+,ε was arbitrarily chosen, we can choose larger ε and obtain a finite cover for M_+,ε consisting of the sets 𝒰 as above by shrinking M_+,ε. This leads to estimate (<ref>).
Whence, estimates (<ref>) and (<ref>) yield
v_s_L^2(Σ_+,ε)+r_1_L^2(Σ)=o(h^-1/2), h→ 0.
On the other hand, we have
(∫_Σ_+, ε|_ν u e^-s(β t +x_1)|^2dS_gdt)^1/2 = 1/√(ε)(∫_Σ_+, εε |_ν u e^-s(β t +x_1) |^2dS_gdt)^1/2
≤1/√(ε)(∫_Σ_+, ε_νφ |_ν u e^-s(β t +x_1) |^2dS_gdt)^1/2
≤1/√(ε)(∫_Σ_+_νφ|_ν u e^-s(β t +x_1) |^2dS_gdt )^1/2,
where we used Σ_+=(0,T)× M_+^int.
Using the boundary Carleman estimate (<ref>) and equation (<ref>), we get
(∫_Σ_+_νφ|_ν u e^-s(β t +x_1) |^2dS_gdt)^1/2≤𝒪(h^1/2)1/√(ε)e^-s(β t +x_1)ℒ_c,g,a_1,q_1u _L^2(Q).
Therefore, estimates (<ref>) and (<ref>) yield (<ref>). This completes the proof of Lemma <ref>.
§.§ Uniqueness of the potential
In this subsection we assume that a_1=a_2 and prove that 𝒞_g,a_1, q_1=𝒞_g,a_2,q_2 implies q_1=q_2. Our starting point is again the integral identity (<ref>). When a_1-a_2=a=0, this reads
∫_Q qu_2u_1dV_gdt= ∫_M _t u(T, x)u_1(T, x)dV_g- ∫_Σ∖ V_ν uu_1dS_g dt.
Since a=0, Lemma <ref> implies that both terms on the right-hand side of (<ref>) vanish in the limit h→ 0. Therefore, we have
∫_Q qu_2u_1dV_gdt→0, h→ 0.
On the other hand, by substituting the CGO solutions (<ref>) and (<ref>) into the left-hand side of (<ref>), we get
∫_Q qu_2u_1dV_gdt=∫_Q qe^2iλ (β t+x_1) (v_sw_s +v_sr_2+ w_sr_1+r_1r_2)dV_gdt.
It follows from estimates (<ref>), (<ref>), and (<ref>) that
∫_Q q e^2iλ (β t+x_1) (v_sr_2+w_sr_1+r_1r_2)dV_gdt=o(1), h→ 0.
Therefore, we obtain
∫_Q q e^2iλ (β t+x_1)v_sw_s dV_gdt→0, h→ 0.
By repeating the arguments leading from (<ref>) to (<ref>), with the assumptions that q_1,q_2 ∈ C(Q) and q_1=q_2 on Q, we get
∫_0^L/√(1-β^2)∫_∫_ e^2iλ ((β^2-1) +p)-2(1-β^2)λ r (cq)(β, p-, γ(√(1-β^2)r))
e^Φ_1(, p, r)+Φ_2(, p, r)η(,p, r)dp d dr=0.
Then we follow the same arguments from (<ref>) onward in the proof for the uniqueness of the damping coefficient to obtain q_1=q_2. This completes the proof of Theorem <ref>.
□
|
http://arxiv.org/abs/2306.02820v1
|
20230605121913
|
Time Dependent Inverse Optimal Control using Trigonometric Basis Functions
|
[
"Rahel Rickenbach",
"Elena Arcari",
"Melanie N. Zeilinger"
] |
eess.SY
|
[
"eess.SY",
"cs.SY"
] |
Time Dependent Inverse Optimal Control using Trigonometric Basis Functions
Rahel Rickenbach, Elena Arcari, Melanie N. Zeilinger
July 31, 2023
===========================================================================
The choice of objective is critical for the performance of an optimal controller. When control requirements vary during operation, e.g. due to changes in the environment with which the system is interacting, these variations should be reflected in the cost function.
In this paper we consider the problem of identifying a time dependent cost function from given trajectories. We propose a strategy for explicitly representing time dependency in the cost function, i.e. decomposing it into the product of an unknown time dependent parameter vector and a known state and input dependent vector, modelling the former via a linear combination of trigonometric basis functions. These are incorporated within an inverse optimal control framework that uses the Karush–Kuhn–Tucker (KKT) conditions for ensuring optimality, and allows for formulating an optimization problem with respect to a finite set of basis function hyperparameters. Results are shown for two systems in simulation and evaluated against state-of-the-art approaches.[The datasets generated and/or analysed during the current study are available in the eth research collection repository, https://doi.org/10.3929/ethz-b-000611670.]
Inverse Optimal Control, Trigonometric Basis Functions, Time Dependence.
§ INTRODUCTION
The performance of an optimization-based controller significantly depends on the ability to encode the desired goal in the cost function. This can be challenging in several scenarios, e.g. in the context of autonomous driving <cit.>, or for biomedical applications <cit.>. Particularly for the latter, the ability to specify changes in the control requirements that reflect, e.g., variations of the human body in time, is an essential condition for successfully fulfilling the task at hand.
For instance, maintaining the basal insulin requirement is a concrete example that
shows both increased variability during day and night time, and time-varying profiles over the day <cit.>. Among the non-biomedical applications, another example is given by the optimal control with respect to continuously changing electricity prices <cit.>.
Motivated by these examples, in this paper we tackle the problem of specifying a cost function that explicitly depends on time. To this end, we build on the inverse optimal control (IOC) approach presented in <cit.>, which encodes optimality conditions by exploiting the Karush–Kuhn–Tucker (KKT) conditions <cit.> and Bellman’s Principle of Optimality <cit.>. Within this framework, we incorporate the learning of time features that we model as trigonometric basis functions, taking inspiration from <cit.> and <cit.>: their inherent nonlinearity provides a sufficiently rich description of the objective's time dependency, while preserving the efficiency of parametric models.
The objective is assumed to decompose as the product of an unknown continuous-time parameter vector, and a known state and input dependent vector of arbitrary structure. The unknown time vector is defined as a linear combination of trigonometric basis functions, which allows for formulating the time feature extraction as an optimization problem with respect to a finite set of hyperparameters, e.g. sinusoidal frequencies. These are jointly optimized with the Lagrangian variables arising in the IOC problem formulation. Consequently, the proposed algorithm solves the Lagrangian optimization problem regarding the unknown cost function parameters, as well as a line search over a regularization term for obtaining a sparse solution. Due to non-convexity of the optimization problem, a grid search over initial conditions is performed. Results are shown for two simulation examples, i.e. a multi-layer spring-damper system, and an inverted double pendulum, for which performance is evaluated against state-of-the-art approaches. Furthermore, the reliability of our estimates is investigated by varying both sampling times and horizon lengths of the forward control problem.
The preliminaries are summarized in Section <ref>, and the problem description is given in Section <ref>. This is followed by an explanation of the proposed approach in Section <ref>; while the first subsection is devoted to the introduction of the considered learning approach, the final algorithm follows in Section <ref>. Results are gathered in Section <ref>, and we conclude the paper with a discussion in Section <ref>.
§.§ Related Work
The use of optimal demonstrations to learn the parameters of an initially unknown objective has been widely addressed in the literature <cit.>, both from a reinforcement learning and an optimal control perspective <cit.>. Inverse reinforcement learning (IRL) is often formulated for discrete state and action spaces, and mainly focuses on the reconstruction of either a reward function or a policy from expert demonstrations, and therefore is also considered in the framework of learning from demonstrations or imitation learning <cit.>, <cit.>, <cit.>. On the other hand, IOC exploits the structure of the forward optimization problem to formulate the corresponding cost function learning approach by typically using stability and/or optimality conditions. Regarding the latter, a large variety of objective estimation techniques were developed: for example, the works of <cit.> and <cit.> are inspired by the solution of an LQR problem, while <cit.> relies on the Hamiltonian Jacobi Bellman equation to estimate the unknown objective. Overall, these approaches focus on infinite-horizon optimization problems without considering the presence of potential constraints. An IOC approach including inequality constraints was presented in <cit.> by exploiting KKT optimality conditions. Building on this idea, the method presented in <cit.> additionally exploits Bellman's Principle of Optimality in order to solve the original infinite-horizon formulation using finite-length demonstration trajectories. Other extensions focus on the consideration of uncertain data and noise, e.g. in <cit.>, and also on the estimation of time-varying objectives <cit.>, <cit.>, <cit.>.
While in these works time dependency is integrated by either separating the considered time horizon into windows with constant parameters, or by averaging over multiple windows, the method presented in this paper explicitly models time as a cost function feature, while preserving the consideration of constraints.
The approach of estimating individual cost parameters at each time step, as well as the non-parametric KKT approach, both presented in <cit.> also allow for learning time-varying cost functions in the finite-horizon setting.
However, time dependency is incorporated via a time-varying vector of parameters, whose length matches the duration of the task at hand, and can therefore potentially become prohibitively long. In this work, the use of time features bypasses this issue since it directly provides a model for the relation between cost function and time to be used in the infinite-horizon setting.
§.§ Notation
Throughout the paper ‖·‖ indicates the Euclidean norm, while |·| indicates the 1-norm. When applied to a matrix, the latter refers to the sum over all absolute values of its elements. The set of all non-negative real numbers is indicated with ℝ_+.
§ PRELIMINARIES
In this work, we build upon the shortest path IOC (spIOC) approach developed by <cit.>, considering D>0 finite-length observations provided by a demonstrator. These are assumed to be optimal trajectory segments of the original infinite-horizon constrained optimal control problem, and consist of state and input measurements at time instances k ∈ℕ, with x_d^*(k) ∈ℝ^n and u_d^*(k) ∈ℝ^m, d∈[1,D], collected for different initial conditions x_d^*(k) over a horizon N ∈ℕ. The resulting sequences, indicated by 𝒳_d^* = {x_d^*(k), , x_d^*(k+N)} and 𝒰_d^* = {u_d^*(k), , u_d^*(k+N-1)}, obey the potentially nonlinear but known dynamics x(k+1) = f(x(k),u(k)). Furthermore, they (at least locally) optimally solve the following problem
min_u_i∑_i = 0^N-1ℓ(F_i(U,x_0), u_i,L)
g_p(F_i(U,x_0), u_i) ≤ 0, p = 1, , P
x_0 = x^*_d(k)
x_N = x^*_d(k+N),
whose objective ℓ(·, ·,L) includes an unknown, but constant parameter-vector L. It can also include P known inequality constraints g. The terminal equality constraint (<ref>), in accordance with Bellman's Principle of Optimality, allows to formulate the infinite-horizon problem as a shortest path problem of finite length N. Additionally, the operator F_i(U,x_0), with U = [u_0,,u_N-1], is defined as
F_i(U,x_0) =
x_0 if i = 0,
f(F_i-1(U,x_0),u_i-1) if i ≥ 1,
which allows for elimination of the associated equality constraint at each time-step, and an expression of the optimization problem in terms of U. Introducing the Lagrange multipliers λ_i∈ℝ^p and υ∈ℝ^n, as well as replacing x_0 with x^*(k), the Lagrangian of problem (<ref>) is given by
ℒ(U,λ_i,υ,L,x^*(k),x^*(k+N)) = υ^⊤· (F_N(U,x^*(k)) - x^*(k+N))
+ ∑_i = 0^N-1ℓ(F_i(U,x^*(k)), u_i,L) + λ_i^⊤· g(F_i(U,x^*(k)), u_i).
Finally, to allow for the consideration of potentially sub-optimal data due to noisy observations, the following KKT-based optimization problem
is solved with respect to the unknown parameter L
min_L, λ_d,i,υ_d∑_d = 1^D‖∇_Uℒ(U,λ_d,i,υ_d,L,x^*(k),x^*(k+N))|_U = 𝒰_d^*‖^2
λ_d,i,p· g_p(F_i(𝒰^*_d,x_d^*(k)), u_d^*(i)) = 0, p = 1, , P, d = 1, , D
λ_d,i,p≥ 0, p = 1, , P, d = 1, , D.
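To make the structure of this estimation problem concrete, the following sketch assembles the stationarity residual symbolically with CasADi (the framework also used for the experiments later in this paper). All names, dimensions, and the callables f_dyn, phi, and g_ineq are illustrative assumptions of this sketch, not the authors' implementation.

import casadi as ca

n, m, N, P, q = 2, 1, 10, 1, 3   # assumed dimensions: state, input, horizon, inequality constraints, cost features

def rollout(U, x0, f_dyn):
    """F_i(U, x0): forward simulation of the known dynamics x(k+1) = f(x(k), u(k))."""
    X = [x0]
    for i in range(N):
        X.append(f_dyn(X[-1], U[:, i]))
    return X

def stationarity_residual(U_star, x_start, x_end, L, lam, nu, f_dyn, phi, g_ineq):
    """Gradient of the Lagrangian with respect to U, evaluated at the demonstrated inputs U_star."""
    U_flat = ca.MX.sym("U", m * N)                        # decision variable of the forward problem
    U = ca.reshape(U_flat, m, N)
    X = rollout(U, x_start, f_dyn)
    lagr = ca.dot(nu, X[N] - x_end)                       # terminal (shortest path) equality constraint
    for i in range(N):
        lagr += ca.dot(L, phi(X[i], U[:, i]))             # parametrized stage cost L' * phi(x, u)
        lagr += ca.dot(lam[:, i], g_ineq(X[i], U[:, i]))  # known inequality constraints
    grad = ca.gradient(lagr, U_flat)
    return ca.substitute(grad, U_flat, ca.vec(ca.DM(U_star)))

The outer problem then minimizes the sum of squared residuals over all demonstrations with respect to (L, λ, υ), subject to λ ≥ 0 and elementwise complementarity between λ and g evaluated at the demonstrations, for instance via ca.nlpsol with Ipopt.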
§ PROBLEM DESCRIPTION
For the remainder of this paper, the available demonstrations are separated into training and validation data. For this purpose, we define 𝒮_t^* = {(𝒳_1^*,𝒰_1^*), , (𝒳_D_t^*,𝒰_D_t^*)} and
𝒮_v^* = {(𝒳_D_t+1^*, 𝒰_D_t+1^*), , (𝒳_D_t+D_v^*,𝒰_D_t+D_v^*)} as training and validation data sets respectively, with D_t and D_v indicating the number of sequences in each set. All of the collected sequences are assumed to be the result of an optimization problem with a time dependent objective, which we express by defining the parameter L in (<ref>) as a function depending continuously on time t ∈ℝ.
Furthermore, we make the following assumption regarding the structure of the objective:
It is assumed that the partially unknown objective function ℓ is convex for each fixed time instance t, and can be decomposed as
ℓ(x_i, u_i,L(t)) = Θ(t)·ϕ(x_i, u_i),
where Θ(t) = [θ_1(t),,θ_q(t)] ∈ℝ^1 × q describes a q dimensional row vector of unknown continuous, time dependent functions θ_1(t),,θ_q(t). The column vector ϕ(x_i, u_i) ∈ℝ^q is of equal dimension and known.
The structure in (<ref>) allows for flexible cost function choices. Convexity of ϕ(x_i,u_i) can ease the formulation of the associated IOC problem, but the assumption is not a strict requirement (see Remark <ref>). Note that when Θ(t) is a constant vector and ϕ(x_i,u_i) includes squares of states and inputs, the standard quadratic cost function is obtained.
The time dependent formulation of the shortest path IOC optimization problem presented in Section <ref> results in
min_x_i, u_i∑_i = 0^N-1Θ(t)·ϕ(F_i(U,x_0), u_i)
g(F_i(U,x_0), u_i) ≤ 0
x_0 = x^*_d(k)
x_N = x^*_d(k+N),
for which we assume to know all involved constraints. Similarly to (<ref>), we can obtain an estimate of Θ(t) by solving the following optimization problem using the training data in 𝒮^*_t
min_Θ(t), λ_d,i,υ_d∑_d = 1^D_t‖∇_Uℒ(U,λ_d,i,υ_d,Θ(t),x_d^*(k),x_d^*(k+N))|_U = 𝒰_d^*‖^2
λ_d,i,p· g_p(F_i(𝒰_d^*,x_d^*(k)), u_d^*(i)) = 0, p = 1, , P, d = 1, , D_t
λ_d,i,p≥ 0, p = 1, , P, d = 1, , D_t,
with its Lagrangian defined as
ℒ(U,λ_d,i,υ_d,Θ(t),x_d^*(k),x_d^*(k+N)) = υ_d^⊤· (F_N(U,x_d^*(k)) - x_d^*(k+N)) +
∑_i = 0^N-1Θ(t)·ϕ(F_i(U,x_d^*(k)), u_i) + λ_d,i^⊤· g(F_i(U,x_d^*(k)), u_i).
Optimizing over a vector of unknown continuous, time dependent functions θ_1(t),,θ_q(t) results in an intractable problem. For this reason, in the following section, we define a model for Θ(t) consisting of a linear combination of trigonometric basis functions, so that problem (<ref>) can be reformulated as an optimization with respect to a finite set of hyperparameters, i.e. the sinusoids' frequencies.
This allows for overcoming the intractability of estimating a (potentially very long) sequence of time-varying parameters and provides an efficient framework for identifying the relation between cost and time.
The quality of the estimated parameter Θ̂(t) is evaluated by solving the forward optimization control problem fixing Θ̂(t), for each initial state x_d^*(k), d ∈ [D_t+1, D_t+D_v], in the validation set 𝒮^*_v. The optimized sequences are indicated with 𝒳̂_d = {x̂_d(k), , x̂_d(k+N)} and 𝒰̂_d= {û_d(k), , û_d(k+N-1)}, and collected in 𝒮̂_v = {(𝒳̂_D_t+1,𝒰̂_D_t+1), , (𝒳̂_D_t+D_v,𝒰̂_D_t+D_v)}. Consequently, the validation error is defined as an averaged root mean square error between the optimized sequences in 𝒮̂_v and the original validation sequences in 𝒮^*_v
e_v(𝒮̂_v,𝒮_v^*) = 1/D_v∑_d=D_t + 1^D_t+D_v(1/N∑_k=0^N-1‖x̂_d(k) - x^*_d(k)‖^2+‖û_d(k) - u^*_d(k)‖^2)^1/2.
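For concreteness, this averaged root mean square error can be computed with a small numpy helper; the array shapes (N+1, n) for state sequences and (N, m) for input sequences are an assumed convention of this sketch.

import numpy as np

def validation_error(S_hat, S_val):
    """Averaged RMSE between re-optimized sequences S_hat and validation demonstrations S_val.

    Both arguments are lists of (X, U) pairs with X of shape (N+1, n) and U of shape (N, m).
    """
    errs = []
    for (X_hat, U_hat), (X_val, U_val) in zip(S_hat, S_val):
        N = U_val.shape[0]
        per_step = (np.linalg.norm(X_hat[:N] - X_val[:N], axis=1) ** 2
                    + np.linalg.norm(U_hat - U_val, axis=1) ** 2)
        errs.append(np.sqrt(per_step.mean()))
    return float(np.mean(errs))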
§.§ The Intuition of Using a “Sliding-window" Approach and Where it Fails
[Figure: the effect of increasing nonlinearity of Θ(t)]
In the previous section, we provided a raw formulation of the time dependent spIOC in (<ref>), and discussed its intractability when optimized as a batch problem. A naive approach for overcoming this consists in solving (<ref>) sequentially, i.e. using a linear Kalman filter to estimate the time dependent parameters Θ(t), exploiting the fact that the result in <cit.> allows for partitioning the demonstrator sequences in small windows. The idea is to use a “sliding window" approach, in which we iteratively re-estimate a portion θ_M ∈ℝ^1 × M of the vector Θ(t). We assume that the estimation window θ_M stays constant over the window length M, and choose its length in order to keep the estimation problem well-defined. The following example aims at offering an intuitive explanation of when this approach fails, therefore motivating the choice of learning a model for time dependency. In particular, we consider a one-element spring-damper system.
As shown in Figure <ref>, as the order of time-dependency in Θ(t) increases, the validation error computed as in (<ref>) grows due to the inability of the sliding window approach to capture the increasing degree of nonlinearity in time.
§ TRIGONOMETRIC TIME DEPENDENT IOC
In the following section we present a strategy to address the limitations of a recursive parameter estimation discussed in Section <ref>, by explicitly introducing a model for time dependency. The idea is to construct a model using a linear combination of trigonometric basis functions and to optimize their hyperparameters within an IOC framework. By learning a model of the cost function's relation with time, we can exploit its predictive capabilities for evaluation on unseen time instances. We refer to this approach as
trigonometric time dependent IOC approach (TTD-IOC).
§.§ Trigonometric Basis Functions as Time Features
The model we consider for each time dependent cost function parameter is
θ_m(t) = α_1,m + α_2,mcos(ω_1t) + α_3,msin(ω_1t) + ⋯ + α_2E,mcos(ω_Et) + α_2E+1,msin(ω_Et),
consisting of 2E basis functions, defined by the set of frequencies 𝒲 = {ω_1, , ω_E}. Potential offsets are modelled via an additional constant bias.
We define the basis functions vector Ω(𝒲,t) ∈ℝ^1× (2E+1) as a row vector consisting of 2E+1 elements
Ω(𝒲,t) = [1, cos(ω_1t), sin(ω_1t), , cos(ω_Et), sin(ω_Et)],
together with the matrix A ∈ℝ^(2E+1) × q consisting of parameters that linearly combine the basis functions
A_(2E+1)× q =
[ [ α_11 α_1q; ⋮ ⋱ ⋮; α_(2E+1)1 α_(2E+1)q; ]],
resulting in the following model for the time dependent vector Θ(t) as
Θ(t) = Ω(𝒲,t)A.
The model in (<ref>) assumes that all cost parameter time dependencies can be approximated with the same set of frequencies 𝒲. Considering individual frequencies for each time dependent cost parameter is possible, however, it increases the number of unknown parameters and, accordingly, the amount of required data.
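A direct numpy implementation of the basis vector Ω(𝒲,t) and of the model Θ(t) = Ω(𝒲,t)A reads as follows; the numerical values in the example are illustrative and simply reproduce the profile 4 + 1.5cos(2t) + 1.5cos(3t) used later in the experiments for the first cost feature.

import numpy as np

def Omega(W, t):
    """Row vector [1, cos(w1 t), sin(w1 t), ..., cos(wE t), sin(wE t)] of length 2E+1."""
    feats = [1.0]
    for w in W:
        feats += [np.cos(w * t), np.sin(w * t)]
    return np.array(feats)

def Theta(W, A, t):
    """Time dependent cost parameters Theta(t) = Omega(W, t) @ A, with A of shape (2E+1, q)."""
    return Omega(W, t) @ A

W = [2.0, 3.0]                       # E = 2 frequencies
A = np.zeros((2 * len(W) + 1, 3))    # q = 3 cost features
A[0, :] = [4.0, 1.0, 1.5]            # constant offsets
A[1, 0] = 1.5                        # coefficient of cos(2t) for the first feature
A[3, 0] = 1.5                        # coefficient of cos(3t) for the first feature
print(Theta(W, A, 0.5))              # first entry equals 4 + 1.5*cos(1.0) + 1.5*cos(1.5)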
§.§ Regularized Optimization Problem for TTD-IOC
In the following, we present the proposed algorithm for optimizing the hyperparameters 𝒲 and A and introduce the regularization term for obtaining a sparse solution.
For this purpose we include the model (<ref>) into the optimization problem presented in equation (<ref>) and consider that measurements are only available for time instances t=kT_s, where T_s indicates the sampling time. Additionally, the objective is extended with a lasso inspired regularizer β∈ℝ_+. Defining ℒ_U,d = ℒ(U,λ_d,i,υ_d,Ω (𝒲,kT_s)A,x^*_d(k),x^*_d(k+N)), the regularized, time dependent Lagrangian optimization problem results in
𝒲̂, Â =argmin_𝒲, A, λ_d,i,υ_d∑_d = 1^D_t‖ (∇_Uℒ_U,d)|_U = 𝒰_d^*‖^2 + β| A(2:2E+1,2:q) |
λ_d,i,p· g_p(F_i(𝒰_d^*,x_d^*(k)), u_d^*(i)) = 0, p = 1, , P, d = 1, , D_t
λ_d,i,p≥ 0, p = 1, , P, d = 1, , D_t
A(:,1) = v_α^*.
The lasso-inspired regularizer thereby penalizes the sum of the absolute values of the sub-matrix of A to minimize the amount of time-varying cost parameters, offering an automatic selection of the model complexity.
Furthermore, the trivial solution of setting all elements of A equal to zero is excluded from the set of potential solutions by adding the equality constraint A(:,1) = v_α^*, where v_α^*∈ℝ^2E+1 is user-defined. Finally, the estimated feature dependent cost parameters are obtained as Θ̂(t) = Ω(𝒲̂,t)Â. Note that the mentioned predictive capabilities with respect to unseen time instances allow for considering different sampling time instances for further predictions.
Depending on the cost features ϕ(x_i, u_i), further constraints can be added to the proposed optimization problem in (<ref>) to preserve the convexity of the estimated cost function ℓ(·, ·, ·) for each fixed time instance t, e.g. non-negativity constraints on Θ(t) for convex ϕ(x_i, u_i).
The proposed algorithm, presented in Algorithm <ref>, consists of a line search over β, between β_i and β_f, with a step-wise increase of β_s. For all values of β the optimization problem in (<ref>) is solved, and the quality of the respective parameter estimate is evaluated with respect to the resulting validation error introduced in (<ref>). The initial value of e̅_β is chosen sufficiently high, making sure that it is adjusted in the algorithm's first iteration.
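In pseudocode form, the overall procedure can be summarized as follows; solve_ttd_ioc (one solve of the regularized problem from a given frequency initialization, returning the estimate and its objective value) and forward_oc (the forward optimal control problem for a fixed estimate) are placeholders assumed by this sketch rather than functions defined in the paper.

import numpy as np

def ttd_ioc(S_train, S_val, beta_grid, omega_inits,
            solve_ttd_ioc, forward_oc, validation_error):
    """Line search over the regularizer beta with a grid search over frequency initializations;
    the estimate with the lowest validation error e_v is kept."""
    best_err, best_est = np.inf, None                     # e_bar_beta initialized sufficiently high
    for beta in beta_grid:
        # grid search over initial frequencies to mitigate non-convexity
        W_hat, A_hat, _ = min((solve_ttd_ioc(S_train, beta, w0) for w0 in omega_inits),
                              key=lambda est: est[2])
        # re-solve the forward problem from each validation initial state; forward_oc
        # is assumed to return the re-optimized (X_hat, U_hat) pair
        S_hat = [forward_oc(W_hat, A_hat, X[0]) for (X, U) in S_val]
        err = validation_error(S_hat, S_val)
        if err < best_err:
            best_err, best_est = err, (W_hat, A_hat)
    return best_est, best_err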
§ RESULTS
In this section, we analyse the proposed algorithm by applying it to two illustrative simulation scenarios: the first is a linear dynamical system, i.e. a three-layer spring-damper system (sys1)
, while the second system consists of two inverted pendulums (sys2) connected via a spring-damper element, as an example of nonlinear dynamics (see Figure <ref>).
For both systems, we choose the cost function parameter associated with one input to vary with time and design ϕ(x,u) to include squares of all states and inputs.
The cost parameters of the considered forward problems from which we obtain training and validation demonstrations are presented in Table <ref>. For the time dependent cost parameter θ_m,i we define three different continuous functions
θ_m,1(t)= 4 + 1.5cos(2t)+1.5cos(3t)
θ_m,2(t)= 1.5+0.02t^2-0.01t
θ_m,3(t)= 4 + e^0.2t
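In code, these three illustrative profiles read:

import math

theta_m = {
    1: lambda t: 4 + 1.5 * math.cos(2 * t) + 1.5 * math.cos(3 * t),
    2: lambda t: 1.5 + 0.02 * t**2 - 0.01 * t,
    3: lambda t: 4 + math.exp(0.2 * t),
}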
Results are obtained using a HP ProBook 440 with an Intel Core i7 processor, while using the Ipopt optimization framework <cit.> within the Casadi framework <cit.>. The issue of dealing with a nonconvex optimization problem is thereby addressed via a grid search over suitable frequency values (ω_init) for the initialization of the applied solver. Its step size and the grid corners can be adjusted by altering the values of ω_i, ω_f, and ω_s. For the subsequent experiments with sys1 they are chosen as 0.5,2.5, and 2.0 and as 0.11,0.11, and 0.0 for sys2, respectively. The line search parameters are set to β_i = 0.04,β_f = 0.06, and β_s=0.01 for sys1 and to β_i = 0.058,β_f = 0.061, and β_s=0.003 for sys2.
The performance of our proposed approach is evaluated on each system with respect to the validation error defined in (<ref>).
§.§ Multi-layer Spring-Damper System
The considered multi-layer spring-damper system consists of three stacked mass elements, m_1, m_2, and m_3, connected to each other or the wall by a spring-damper pair with spring constants k_1 up to k_3 and damping constants d_1 up to d_3. The input is given by a force vector F = [f_1,f_2,f_3] consisting of three forces that can be exerted on their respective mass. An illustration of the system is given in the left part of Fig. <ref> and the dynamics by equation (<ref>).
ẍ_1 = m_1^-1(-(k_1+k_2)x_1 -(d_1+d_2)ẋ_1 + k_2x_2 + d_2ẋ_2 + f_1)
ẍ_2 = m_2^-1(k_1x_1+ d_1ẋ_1-(k_2+k_3)x_2-(d_2+d_3)ẋ_2 + k_3x_3 + d_3ẋ_3 + f_2)
ẍ_3 = m_3^-1(k_3x_2 + d_3ẋ_2 -k_3x_3 -d_3ẋ_3 + f_3)
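For reference, the continuous-time right-hand side of these equations can be written as a simulation routine; the state ordering [x1, x2, x3, ẋ1, ẋ2, ẋ3] is a convention of this sketch, and the coefficients follow the displayed equations verbatim.

def spring_damper_rhs(x, F, m, k, d):
    """Dynamics of the three-layer spring-damper system (sys1); m, k, d are length-3 parameter lists."""
    x1, x2, x3, v1, v2, v3 = x
    f1, f2, f3 = F
    a1 = (-(k[0] + k[1]) * x1 - (d[0] + d[1]) * v1 + k[1] * x2 + d[1] * v2 + f1) / m[0]
    a2 = (k[0] * x1 + d[0] * v1 - (k[1] + k[2]) * x2 - (d[1] + d[2]) * v2
          + k[2] * x3 + d[2] * v3 + f2) / m[1]
    a3 = (k[2] * x2 + d[2] * v2 - k[2] * x3 - d[2] * v3 + f3) / m[2]
    return [v1, v2, v3, a1, a2, a3]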
The proposed modelling of time dependency within the presented approach enables an improvement with respect to the vanilla spIOC described in Section <ref>. We measure the improvement in terms of validation error, and confirm the ability of the learned cost to generalize for unseen scenarios, i.e. different initial conditions, as can be observed in the left plots of Figure <ref>.
§.§ Inverted Double Pendulum
The inverted double pendulum consists of two inverted pendulums of length l, having a mass element m attached at its top. At the height indicated with a they are connected via a spring-damper pair whose constants are indicated by k and d. Each pendulum can be actuated by its individual torque τ_1 or τ_2. An illustration of the system is given in the right picture of Fig. <ref> and the dynamics by equation (<ref>). Defining F = ka(sin(ϕ_2)-sin(ϕ_1)) + da(cos(ϕ_2)ϕ̇_̇2̇ - cos(ϕ_1)ϕ̇_̇1̇) it follows
ϕ̈_1 = gl^-1sin(ϕ_1) +acos(ϕ_1)F+(ml^2)^-1τ_1
ϕ̈_2 = gl^-1sin(ϕ_2) -acos(ϕ_2)F+(ml^2)^-1τ_2.
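Analogously, the coupled inverted pendulum dynamics (sys2) can be simulated with the routine below; the state ordering [φ1, φ2, φ̇1, φ̇2] and the symbol g_c for the gravitational acceleration are conventions of this sketch, with the coupling force F taken verbatim from the definition above.

import math

def double_pendulum_rhs(x, tau, m, l, a, k, d, g_c=9.81):
    """Dynamics of the two coupled inverted pendulums (sys2)."""
    phi1, phi2, dphi1, dphi2 = x
    tau1, tau2 = tau
    F = (k * a * (math.sin(phi2) - math.sin(phi1))
         + d * a * (math.cos(phi2) * dphi2 - math.cos(phi1) * dphi1))
    ddphi1 = g_c / l * math.sin(phi1) + a * math.cos(phi1) * F + tau1 / (m * l ** 2)
    ddphi2 = g_c / l * math.sin(phi2) - a * math.cos(phi2) * F + tau2 / (m * l ** 2)
    return [dphi1, dphi2, ddphi1, ddphi2]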
The resulting validation errors for our considered nonlinear system are given in the right plots of Figure <ref>.
Similarly to the previous example, the proposed approach shows again improved validation errors with respect to spIOC.
§.§ Comparative Study
[Figure: the effect of varying E]
In this section, we investigate the effects of varying sampling times and horizon lengths on the validation error. For this purpose, the parameter estimates of sys1, obtained in accordance with training data sequences 𝒮_t^* with a horizon length of N=60 and sampling time T_s = 0.1, are used to obtain optimal sequences for an adjusted horizon length or an adjusted sampling time. They are evaluated with respect to new and unseen validation data of given N and T_s. The validation errors resulting from a variation in sampling time are depicted in Figure <ref>, and those for different horizon lengths in Figure <ref>. Furthermore, we tested the performance of the approach in the presence of model mismatch, specifically when the number of chosen basis functions E is different from the true model. Assuming that the true cost function is described by 2 trigonometric features, the value E is varied in the set {1,2,3,4}, and the obtained validation errors are displayed in Figure <ref>.
§.§ Discussion
The results obtained in Section <ref> and <ref> show an increased ability to mimic the optimal input and output sequences of underlying continuously time dependent objective functions in comparison to an existing IOC approach that does not model time dependent features (spIOC). Furthermore, in Figure <ref> and Figure <ref>, it can be observed that modeling the time dependency as an explicit feature allows for the consideration of different sampling times and horizon lengths.
In general, a better estimate in terms of validation error results in better generalization capabilities. The investigation of varying the number of frequencies in the considered trigonometric feature vector indicates that, given the regularization parameter β, a larger value of E does not lead to a degradation in terms of the validation error. Furthermore, by inspecting the elements of A it can be seen that the parameters associated with unused basis functions are close to zero. While the validation error increases when choosing fewer frequencies, the proposed approach still outperforms the spIOC estimate.
§ CONCLUSION
This paper has presented a procedure for modeling time dependency as an explicit cost function feature using linear combinations of trigonometric basis functions. Building on previous results, we extended the shortest path IOC approach to include the additional optimization over the time feature hyperparameters, and discussed its performance by analysing two simulation examples. Results show lower validation error compared with the shortest path IOC approach. Furthermore, the use of a model provides an estimate for not only seen but also unseen instances in time, and the proposed method thereby also provides high-quality predictions (i.e. low validation errors) when varying the sampling time, as well as the horizon length in the forward optimization problem.
In the future, we plan to examine further the effect of the amount of training data on the validation error and to compare the proposed solution strategy for addressing the nonconvexity of the Lagrangian optimization problem against, e.g., sampling-based routines, as well as test the proposed procedure in real-world problems.
|
http://arxiv.org/abs/2306.08497v1
|
20230614132452
|
Insensitizing control problem for the Hirota-Satsuma system of KdV-KdV type
|
[
"Kuntal Bhandari"
] |
math.AP
|
[
"math.AP"
] |
This paper is concerned with the existence of insensitizing controls for a nonlinear coupled system of two Korteweg-de Vries (KdV) equations, typically known as the Hirota-Satsuma system.
The idea is to look for controls such that some functional of the states (the so-called sentinel) is insensitive to the small perturbations of initial data. Since the system is coupled,
we consider a sentinel in which we observe both components of the system in a localized observation set. By some classical argument, the insensitizing problem is then reduced to a null-control problem for an extended system where the number of equations is doubled. We study the null-controllability for the linearized model associated to that extended system by means of a suitable Carleman estimate which is proved in this paper. Finally, the local null-controllability of the extended (nonlinear) system is obtained by applying the inverse mapping theorem, and this implies the required insensitizing property for the concerned model.
Insensitizing control problem for the Hirota-Satsuma system of KdV-KdV type
Kuntal Bhandari
July 31, 2023
==============================================================================================================================
§ INTRODUCTION
§.§ Statement of the problem
In this article, we study an insensitizing control problem for the Hirota-Satsuma system of two coupled Korteweg-de Vries (KdV) equations. In 1981, R. Hirota and J. Satsuma <cit.> have proposed a system of two coupled KdV equations, namely
u_t -1/2u_xxx - 3uu_x + 6vv_x = 0,
v_t + v_xxx + 3uv_x =0,
which describes the interactions of two long waves with different dispersion relations. They have presented a soliton solution of the above system, and shown that it has two- and three-soliton solutions under a special connection between the dispersion relations of the two long waves; we refer <cit.> for more details.
Let us describe the problem on which we are going to work in the present article. Let T>0 be a given finite time and L>0 be any finite length. Denote Q_T:= (0,T)× (0,L). We further assume two non-empty open sets ω⊂ (0,L) and 𝒪⊂ (0,L) verifying 𝒪∩ω≠∅, and that there is a non-empty open set ω_0 such that ω_0 ⊂⊂𝒪∩ω.
We now consider the following coupled KdV system:
u_t -1/2u_xxx - 3uu_x + 6vv_x = h_1 1_ω + ξ_1 in Q_T,
v_t + v_xxx + 3uv_x = h_2 1_ω + ξ_2 in Q_T,
u(t,0) = u(t,L) = u_x(t,0) = 0 for t∈ (0,T),
v(t,0) = v(t,L) = v_x(t,L) =0 for t∈ (0,T) ,
u(0) = u_0 + τu_0 , v(0) = v_0 + τv_0 in (0,L) ,
where u=u(t,x) and v=v(t,x) are the state variables, h_i=h_i(t,x), i=1,2 are localized control functions acting on the subset ω, and ξ_i=ξ_i(t,x), i=1,2, are given external source terms. In (<ref>) the initial states (u(0),v(0)) are partially unknown in the following sense:
– (u_0,v_0) ∈ [L^2(0,L)]^2 are given, and
– (u_0 , v_0)∈ [L^2(0,L)]^2 are unknown which satisfy u_0 _L^2(0,L) = v_0 _L^2(0,L) = 1. They represent some uncertainty on the initial data.
– τ∈ℝ is some unknown parameter which is small enough.
Our aim is to study the insensitizing control problem for the system (<ref>). This topic was originally introduced by J.-L. Lions <cit.>; it concerns the existence of controls that make a certain functional (depending on the state variables) insensitive to small perturbations of the initial data.
In our case, we consider the following functional (the so-called sentinel) defined on the set of solutions to (<ref>), given by
J_τ(u,v) : = 1/2∬_(0,T)×𝒪 |u|^2 + 1/2∬_(0,T)×𝒪 |v|^2 ,
where 𝒪 is the so-called observation domain. Then, the insensitizing control problem associated to (<ref>) can be stated as follows.
Let (u_0, v_0)∈ [L^2(0,L)]^2 and (ξ_1, ξ_2)∈ [L^2(Q_T)]^2 be given. We say that the control functions (h_1,h_2)∈ [L^2((0,T)×ω)]^2 insensitize the sentinel functional J_τ given by (<ref>), if
∂ J_τ(u,v)/∂τ|_τ=0=0, ∀ (u_0,v_0)∈ [L^2(0,L)]^2 with u_0 _L^2(0,L)=v_0_L^2(0,L) = 1.
Thus, it amounts to find a pair of control functions (h_1,h_2)∈ [L^2((0,T)×ω)]^2 such that the uncertainty in the initial data does not affect the measurement of J_τ. When the condition (<ref>) holds for a pair of controls (h_1,h_2), the sentinel J_τ is said to be locally insensitive to the perturbations of initial data. In other words, (<ref>) indicates that
the sentinel does not detect the variations
of the given initial data (u_0,v_0) by the small unknown perturbations (τu_0, τv_0) in the observation domain 𝒪.
§.§ Bibliographic comments
To begin with, the controllability of dispersive systems has been gaining popularity among researchers; the first internal controllability results for the single KdV equation were presented by Russell and Zhang <cit.> in periodic domains. We also cite the related work <cit.> by Rosier and Zhang on this topic. In <cit.>, Glass and Guerrero proved the exact internal controllability of the KdV equation when the control acts in a neighborhood of the left endpoint. Later on, Capistrano-Filho, Pazoto and Rosier <cit.> established a Carleman estimate yielding an observability inequality to conclude the internal controllability of the KdV equation.
More recently, the internal null-controllability of a generalized Hirota-Satsuma system (which is a coupled system of three KdV equations with a first order coupling) has been proved by Carreño, Cerpa and Crépeau in <cit.>.
On the other hand, the boundary controllability for KdV equations has been intensively investigated by several renowned researchers, see for instance, <cit.>. Most of those works were concerned with the system
u_t + u_x +u_xxx + uu_x = 0 in (0,T)× (0,L) ,
u(t,0)= p_1(t), u(t,L) = p_2(t), u_x(t,L) = p_3(t) in (0,T) ,
where p_1,p_2, p_3 are control inputs. In particular,
Rosier <cit.> proved that the linearized model of (<ref>) is exactly controllable with a control p_3∈ L^2(0,T) (and p_1=p_2=0) if and only if L does not belong to the following countable set of critical length
𝒩 = {2π/√(3)√(k^2+ kl + l^2) : k,l ∈ℕ^* } .
He has also shown that when the linearized equation is controllable, the same holds true for the nonlinear equation. But the converse is not necessarily true,
as it has been proved in <cit.> that the nonlinear KdV equation is still
controllable even if L∈𝒩. Finally, we refer <cit.> where the boundary controllability of some system of KdV equations have been addressed.
Let us talk about the insensitizing control problems for pdes. We mention that the pioneer results concerning the existence of insensitizing controls were obtained by de Teresa in <cit.>, and by Bodart et al. in <cit.> for the linear and semilinear heat equations.
Since then, many works have been devoted to study the insensitizing problem from different perspectives. The authors in <cit.> study such problems for linear and semilinear heat equations with different types of nonlinearities and/or boundary conditions. In the direction of wave equation, Alabau-Boussouira <cit.> proved the existence of exact insensitizing controls for the scalar wave equation.
Guerrero <cit.> has considered an insensitizing control problem for the linear parabolic equation where the sentinel functional is dependent on the gradient of the solution. Later, he has studied such control problem for the Stokes equation <cit.> where the sentinel is taken in terms of the curl of solution.
In the context of insensitizing problems for Navier-Stokes equations we quote the works <cit.>, and for the Boussinesq system we cite <cit.>. We also refer <cit.> where the insensitizing control problem for a phase field system has been explored. Furthermore, a numerical study for the insensitizing property of semilinear parabolic equations has been pursued in <cit.>.
It is worth mentioning that insensitizing problems for the fourth-order parabolic equations have been treated in <cit.> and with respect to shape variations in <cit.>. We also bring up a recent work <cit.> where the authors studied the insensitizing property for the fourth-order dispersive nonlinear Schrödinger equation with cubic nonlinearity.
Last but not the least, we address a very recent work <cit.> where the insensitizing control problems for the stabilized Kuramoto-Sivashinsky system have been analyzed.
In the current work, we investigate the insensitizing property of the so-called Hirota-Satsuma model, which is basically a nonlinear coupled KdV system as mentioned earlier, and to the best of our knowledge, this problem has not been addressed in the literature.
§.§ Main results
As mentioned earlier, our goal is to prove the existence of control functions (h_1, h_2) which insensitize the functional J_τ given by (<ref>).
In this regard, our control result is the following.
Assume that 𝒪∩ω≠∅ and u_0≡ v_0≡ 0. Then, there exist constants C>0 and δ>0
such that for any (ξ_1,ξ_2)∈ [L^2(Q_T)]^2 satisfying
e^C/t(ξ_1, ξ_2)_[L^2(Q_T)]^2≤δ ,
one can prove the existence of control functions (h_1,h_2)∈ [L^2((0,T)×ω)]^2 which insensitize the functional J_τ in the sense of (<ref>).
To prove the above theorem, we shall equivalently establish the result given by <Ref> below. In fact, adapting the arguments in <cit.> or <cit.>, it can be proved that the insensitivity condition (<ref>) is equivalent to a null-control problem for an extended system, which is in our case given by
u_t -1/2u_xxx - 3uu_x + 6vv_x = h_1 1_ω + ξ_1 in Q_T,
v_t + v_xxx + 3uv_x = h_2 1_ω + ξ_2 in Q_T,
u(t,0) = u(t,L) = u_x(t,0) = 0 for t∈ (0,T),
v(t,0) = v(t,L) = v_x(t,L) =0 for t∈ (0,T) ,
u(0) = u_0, v(0) = v_0 in (0,L) ,
-p_t + 1/2p_xxx - 3p u_x + 3 q v_x = u 1_𝒪 in Q_T,
-q_t - q_xxx + 6pv_x = v 1_𝒪 in Q_T,
p(t,0) = p(t,L) = p_x(t,L) = 0 for t∈ (0,T),
q(t,0) = q(t,L) = q_x(t,0) =0 for t∈ (0,T) ,
p(T)=0, q(T)=0 in (0,L),
and we have the following result.
A pair of control functions (h_1,h_2)∈ [L^2((0,T)×ω)]^2 verifies the insensitivity condition (<ref>) for the sentinel (<ref>) if and only if the associated solution to (<ref>)–(<ref>) satisfies
(p(0),q(0))=(0,0) in (0,L).
In what follows, we only focus on studying the controllability properties for the 4× 4 forward-backward system (<ref>)–(<ref>). Indeed, we prove the following theorem which is the main result of our paper.
Assume that 𝒪∩ω≠∅ and u_0≡ v_0≡ 0. Then, there exist constants C>0 and δ>0
such that for any (ξ_1,ξ_2)∈ [L^2(Q_T)]^2 verifying
‖ e^C/t(ξ_1, ξ_2)‖_[L^2(Q_T)]^2≤δ ,
there exist control functions (h_1,h_2)∈ [L^2((0,T)×ω)]^2 such that the solution (u,v,p,q) to (<ref>)–(<ref>) satisfies p(0)=q(0)=0 in (0,L).
As usual, to prove <Ref>, one needs to first establish a global null-controllability result for the linearized (around zero) model associated to (<ref>)–(<ref>). More precisely, we consider the following system:
u_t -1/2u_xxx = h_1 1_ω + f_1 in Q_T,
v_t + v_xxx = h_2 1_ω + f_2 in Q_T,
u(t,0) = u(t,L) = u_x(t,0) = 0 for t∈ (0,T),
v(t,0) = v(t,L) = v_x(t,L) =0 for t∈ (0,T) ,
u(0) = u_0, v(0) = v_0 in (0,L) ,
-p_t + 1/2p_xxx = u 1_𝒪 + f_3 in Q_T,
-q_t - q_xxx = v 1_𝒪 + f_4 in Q_T,
p(t,0) = p(t,L) = p_x(t,L) = 0 for t∈ (0,T),
q(t,0) = q(t,L) = q_x(t,0) =0 for t∈ (0,T) ,
p(T)=0, q(T)=0 in (0,L).
with given right-hand sides (f_1, f_2, f_3, f_4) belonging to a suitable space (specified later).
Note that the controls (h_1 1_ω,h_21_ω) act directly in the equations of (u,v), while the equations of (p,q) are controlled only indirectly via the couplings (u1_𝒪, v1_𝒪). At this point, one can observe that the condition 𝒪∩ω≠∅ is necessary to obtain the required
null-controllability result for the extended system (<ref>)–(<ref>), in other words, the insensitizing property for the main system (<ref>).
As is well known, proving the null-controllability of (<ref>)–(<ref>) is equivalent to establishing a suitable observability property of its adjoint system, which is given by
-η_t +1/2η_xxx = ζ1_𝒪 + g_1 in Q_T,
-ψ_t - ψ_xxx = θ1_𝒪 + g_2 in Q_T,
η(t,0) = η(t,L) = η_x(t,L) = 0 for t∈ (0,T),
ψ(t,0) = ψ(t,L) = ψ_x(t,0) =0 for t∈ (0,T) ,
η(T) = 0, ψ(T) = 0 in (0,L) ,
ζ_t - 1/2ζ_xxx = g_3 in Q_T,
θ_t + θ_xxx = g_4 in Q_T,
ζ(t,0) = ζ(t,L) = ζ_x(t,0) = 0 for t∈ (0,T),
θ(t,0) = θ(t,L) = θ_x(t,L) =0 for t∈ (0,T) ,
ζ(0)=ζ_0, θ(0)=θ_0 in (0,L) .
with given (ζ_0, θ_0)∈ [L^2(0,L)]^2 and source terms (g_1,g_2,g_3,g_4) from some suitable space, specified later.
The problem amounts to establishing a suitable Carleman estimate satisfied by the state variables
of the 4× 4 system (<ref>)–(<ref>) with only two observation terms, namely η and ψ. We shall discuss it at length in Section <ref>.
§.§ Paper Organization
The paper is organized as follows.
Section <ref> contains the well-posedness results of the underlying coupled KdV systems. Then, in Section <ref> we prove a suitable Carleman estimate for the 4× 4 adjoint system (<ref>)–(<ref>).
The Carleman estimate (see <Ref>) then yields an observability inequality which is obtained in Subsection <ref>, precisely <Ref>. In Subsection <ref>, we establish the null-controllability of the linearized model (<ref>)–(<ref>), thanks to the appropriate observability inequality. After that, in Section <ref>,
we prove the local null-controllability of the system (<ref>)–(<ref>), which is precisely the proof of <Ref>. Finally, we conclude our paper by providing several remarks in Section <ref>.
§.§ Notation
Throughout the paper, C>0 denotes a generic constant that may vary line to line and depend on , ω, L and T.
§ WELL-POSEDNESS RESULTS
§.§ Functional setting and well-posedness of single KdV equation
We start by introducing the following functional spaces:
X_0 := L^2(0,T; H^-2(0,L)) , X_1 := L^2(0,T; H^2_0(0,L)) ,
X_0 := L^1(0,T; H^-1(0,L)) , X_1 := L^1(0,T; H^3(0,L) ∩ H^2_0(0,L)) ,
and
Y_0 := L^2(0,T; L^2(0,L)) ∩^0([0,T]; H^-1(0,L)),
Y_1:= L^2(0,T; H^4(0,L)) ∩^0([0,T]; H^3(0,L) ) ,
which are equipped with their usual norms. For each μ∈ [0,1], we further define the interpolation spaces (see for instance <cit.>):
X_μ : = (X_0, X_1)_[μ] , X_μ : = (X_0, X_1)_[μ] , Y_μ : = (Y_0, Y_1)_[μ] .
In particular, we have
X_1/4 = L^2(0,T; H^-1(0,L)) , X_1/4 = L^1(0,T; L^2(0,L)) ,
Y_1/4 = L^2(0,T; H^1(0,L)) ∩^0([0,T]; L^2(0,L) ) ,
X_1/2 = L^2(0,T; L^2(0,L)) , X_1/2 = L^1(0,T; H^1_0(0,L)) ,
Y_1/2 = L^2(0,T; H^2(0,L)) ∩^0([0,T]; H^1(0,L) ) .
Also, one can observe that for any ν∈ (0,1],
X_1/2 + ν/4 = L^2(0,T; H^ν(0,L)) , X_1/2 + ν/4 = L^1(0,T; H^1+ν(0,L)) ,
Y_1/2 + ν/4 = L^2(0,T; H^2+ν(0,L)) ∩^0([0,T]; H^1+ν(0,L) ) .
Let us now consider the single KdV equation given by
y_t ± y_xxx = f in Q_T,
y(t,0) = y(t,L) = y_x(t,L) = 0 for (0,T) ,
y(0,x) = y_0(x) in (0,L) .
with given source term f and initial data y_0.
We recall the following known results for (<ref>).
For given y_0∈ L^2(0,L) and f∈ F with F=X_1/4 or X_1/4, the system (<ref>) admits a unique solution y∈ Y_1/4, and in addition, there exists a constant C>0 such that
‖ y‖_Y_1/4≤ C ( ‖ y_0‖_L^2(0,L) + ‖ f‖_F) .
For given y_0∈ H^3(0,L) with y_0(0)=y_0(L)=y_0^'(L)=0, and f∈ F with F=X_1 or X_1, the system (<ref>) admits a unique solution y∈ Y_1. In addition, there exists a constant C>0 such that
‖ y‖_Y_1≤ C ( ‖ y_0‖_H^3(0,L) + ‖ f‖_F) .
Let y_0≡ 0 and μ∈ [1/4,1]. Then, for given f∈ F with F=X_μ or X_μ, the system (<ref>) admits a unique solution y∈ Y_μ, and moreover, there exists a constant C>0 such that
‖ y‖_Y_μ≤ C ‖ f‖_F .
Note that the above results are also applicable for the adjoint equation to (<ref>) which is backward in time.
§.§ Well-posedness of the 4× 4 linearized system and its adjoint
We state the following results.
Let (u_0, v_0)∈ [L^2(0,L)]^2, (h_1, h_2) ∈ [L^2((0,T)×ω)]^2 and (f_1, f_2, f_3,f_4)∈ [F]^4 with F=X_1/4 or X_1/4 be given. Then, the system (<ref>)–(<ref>) possesses a unique solution (u,v,p,q)∈[Y_1/4]^4. In addition, there exists a constant C>0 such that
‖(u,v,p,q)‖_[Y_1/4]^4≤ C ( ‖(u_0,v_0)‖_[L^2(0,L)]^2 + ‖(h_1,h_2)‖_[L^2((0,T)×ω)]^2 + ‖(f_1,f_2,f_3,f_4)‖_[F]^4) ,
where Y_1/4 is defined by (<ref>).
The proof of the above proposition can be carried out as follows. First, we apply <Ref> to the set of equations (<ref>) to show that (u,v)∈[Y_1/4]^2, together with the estimate
‖(u,v)‖_[Y_1/4]^2≤ C ( ‖(u_0,v_0)‖_[L^2(0,L)]^2 + ‖(h_1,h_2)‖_[L^2((0,T)×ω)]^2 + ‖(f_1,f_2)‖_[F]^2) .
Then, using (u1_𝒪, v1_𝒪) as source terms in the equations of (p,q) given by (<ref>), and combining with (<ref>), we get the required estimate (<ref>).
A similar result holds for the adjoint system (<ref>)–(<ref>).
Let (ζ_0, θ_0)∈ [L^2(0,L)]^2
and (g_1, g_2, g_3,g_4) ∈ [F]^4 with F=X_1/4 or X_1/4 be given. Then, the system (<ref>)–(<ref>) admits a unique solution (η,ψ,ζ,θ)∈[Y_1/4]^4 and moreover, there exists a constant C>0 such that
‖(η,ψ,ζ,θ)‖_[Y_1/4]^4≤ C ( ‖(ζ_0,θ_0)‖_[L^2(0,L)]^2 + ‖(g_1,g_2,g_3,g_4)‖_[F]^4) .
§.§ Well-posedness of the 4× 4 nonlinear system
Using a fixed point theorem, we now prove the well-posedness of our 4× 4 nonlinear system (<ref>)–(<ref>).
Let T>0 and L>0. Then, there exists some positive real number δ_0 such that for every (u_0,v_0)∈ [L^2(0,L)]^2, (h_1, h_2) ∈ [L^2((0,T)×ω)]^2 and (ξ_1, ξ_2) ∈ [L^2(Q_T)]^2, satisfying
‖(u_0,v_0)‖_[L^2(0,L)]^2 + ‖(h_1,h_2)‖_[L^2((0,T)×ω)]^2 + ‖(ξ_1,ξ_2)‖_[L^2(Q_T)]^2≤δ_0 ,
the system (<ref>)–(<ref>) possesses a unique solution
(u,v,p,q) ∈[ Y_1/4]^4 ,
where Y_1/4 is defined by (<ref>).
Before proving the above proposition, we establish the following lemma.
Let y_1 , y_2∈ L^2(0,T; H^1(0,L)). Then, y_1 y_2,x∈ L^1(0,T; L^2(0,L)) and the map
(y_1, y_2) ∈ [L^2(0,T; H^1(0,L))]^2 ↦ y_1 y_2,x∈ L^1(0,T;L^2(0,L) )
is continuous.
Consider any (y_1, y_2) and (ȳ_1, ȳ_2) from the space [L^2(0,T; H^1(0,L))]^2. Then, we have
‖ y_1y_2,x‖_L^1(0,T; L^2(0,L)) = ∫_0^T ‖ y_1 y_2,x‖_L^2(0,L)
≤∫_0^T ‖ y_1‖_L^∞(0,L)‖ y_2,x‖_L^2(0,L)
≤ C_0 ∫_0^T ‖ y_1‖_H^1(0,L)‖ y_2‖_H^1(0,L)
≤ C_0 ‖ y_1‖_L^2(0,T; H^1(0,L))‖ y_2‖_L^2(0,T; H^1(0,L)),
for some constant C_0>0,
which yields the first result of the lemma.
Next, we compute that
‖ y_1y_2,x - ȳ_1 ȳ_2,x‖_L^1(0,T;L^2(0,L))
≤∫_0^T ‖ y_1 (y_2,x - ȳ_2,x )‖_L^2(0,L)
+ ∫_0^T ‖ (y_1 - ȳ_1 ) ȳ_2,x‖_L^2(0,L)
≤∫_0^T ‖ y_1‖_L^∞(0,L)‖ y_2,x - ȳ_2,x‖_L^2(0,L) + ∫_0^T ‖ y_1 - ȳ_1‖_L^∞(0,L)‖ȳ_2,x‖_L^2(0,L)
≤∫_0^T ‖ y_1‖_H^1(0,L)‖ y_2 - ȳ_2‖_H^1(0,L) + ∫_0^T ‖ y_1 - ȳ_1‖_H^1(0,L)‖ȳ_2‖_H^1(0,L)
≤ C_1 (‖ y_1‖_L^2(H^1) + ‖ȳ_2‖_L^2(H^1)) ( ‖ y_1 - ȳ_1‖_L^2(H^1) + ‖ y_2 - ȳ_2‖_L^2(H^1)) ,
for some constant C_1>0.
This gives the continuity of the map (<ref>).
Therefore, the proof of <Ref> is complete.
We now prove the well-posedness of our extended system (<ref>)–(<ref>).
Let us define the map
Λ : [ Y_1/4]^4 →[ Y_1/4]^4 , Λ(ū, v̄, p̄, q̄) = (u,v,p,q),
where (u,v,p,q) is the unique solution to (<ref>)–(<ref>) with (u_0, v_0)∈ [L^2(0,L)]^2, (h_1,h_2)∈ [L^2((0,T)×ω)]^2 and
f_1=ξ_1 + 3ūū_x - 6 v̄v̄_x,
f_2=ξ_2 - 3ūv̄_x,
f_3= 3p̄ū_x - 3 q̄v̄_x , f_4 = -6p̄v̄_x .
Then, by means of <Ref> and the bound (<ref>), there exists some constant C_2>0 such that we have
‖(u,v,p,q)‖_[Y_1/4]^4≤ C_2 ( ‖(u_0, v_0)‖_[L^2(0,L)]^2 + ‖(h_1,h_2)‖_[L^2((0,T)×ω)]^2 + ‖(ξ_1, ξ_2)‖_[L^2(Q_T)]^2
+ ‖ū‖^2_L^2(H^1) + ‖v̄‖^2_L^2(H^1) + ‖ū‖_L^2(H^1)‖v̄‖_L^2(H^1) +
‖p̄‖_L^2(H^1)‖ū‖_L^2(H^1)
+ ‖q̄‖_L^2(H^1)‖v̄‖_L^2(H^1) + ‖p̄‖_L^2(H^1)‖v̄‖_L^2(H^1)) .
Now, denote the set
ℬ_R:= { (u,v,p,q)∈[Y_1/4]^4 : ‖ u‖_Y_1/4 + ‖ v‖_Y_1/4 + ‖ p‖_Y_1/4 + ‖ q‖_Y_1/4≤ R } .
– Then, starting with (ū, v̄, p̄, q̄) ∈ℬ_R, the estimate (<ref>) becomes
‖(u,v,p,q)‖_[Y_1/4]^4≤ C_2 ( ‖(u_0, v_0)‖_[L^2(0,L)]^2 + ‖(h_1,h_2)‖_[L^2((0,T)×ω)]^2
+ ‖(ξ_1, ξ_2)‖_[L^2(Q_T)]^2 + 6R^2).
In what follows, if R<1/(6C_2) and
‖(u_0, v_0)‖_[L^2(0,L)]^2 + ‖(h_1,h_2)‖_[L^2((0,T)×ω)]^2
+ ‖(ξ_1, ξ_2)‖_[L^2(Q_T)]^2 < (R-6C_2 R^2)/C_2,
we have Λ(ℬ_R)⊂ℬ_R, which gives the stability of the map Λ defined by (<ref>) on the set ℬ_R. The quantity δ_0 in (<ref>) can now be chosen as follows:
δ_0= (R-6C_2 R^2)/C_2.
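For instance, maximizing (R-6C_2 R^2)/C_2 over 0<R<1/(6C_2) shows that one may take R = 1/(12C_2), which yields the explicit (not necessarily optimal) value δ_0 = 1/(24 C_2^2); any smaller admissible R, as will be required for the contraction argument below, works equally well with the corresponding δ_0.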
– Let us prove that Λ is a contraction map. Consider two elements (ū, v̄, p̄, q̄) and (û, v̂, p̂, q̂) from the set ℬ_R. We denote the associated solutions to the linearized model (<ref>)–(<ref>) by (u_1,v_1,p_1,q_1) and (u_2,v_2,p_2,q_2), respectively, with
f_1=ξ_1 + 3ūū_x - 6 v̄v̄_x,
f_2=ξ_2 - 3ūv̄_x,
f_3= 3p̄ū_x - 3 q̄v̄_x , f_4 = -6p̄v̄_x ,
and
f̂_1=ξ_1 + 3ûû_x - 6 v̂v̂_x,
f̂_2=ξ_2 - 3ûv̂_x,
f̂_3= 3p̂û_x - 3 q̂v̂_x , f̂_4 = -6p̂v̂_x .
We further denote (ũ,ṽ,p̃,q̃)= (u_1-u_2, v_1-v_2, p_1-p_2, q_1-q_2), which satisfies the following set of equations
ũ_t -1/2ũ_xxx = 3(ūū_x - ûû_x) - 6(v̄v̄_x - v̂v̂_x)
in Q_T,
ṽ_t + ṽ_xxx = - 3(ūv̄_x - ûv̂_x)
in Q_T,
ũ(t,0) = ũ(t,L) = ũ_x(t,0) = 0 for t∈ (0,T),
ṽ(t,0) = ṽ(t,L) = ṽ_x(t,L) =0 for t∈ (0,T) ,
ũ(0) = 0, ṽ(0) = 0 in (0,L) ,
-p̃_t + 1/2p̃_xxx = ũ 1_𝒪 + 3 (p̄ū_x - p̂û_x) - 3 (q̄v̄_x - q̂v̂_x)
in Q_T,
-q̃_t - q̃_xxx = ṽ 1_𝒪 - 6(p̄v̄_x - p̂v̂_x) in Q_T,
p̃(t,0) = p̃(t,L) = p̃_x(t,L) = 0 for t∈ (0,T),
q̃(t,0) = q̃(t,L) = q̃_x(t,0) =0 for t∈ (0,T) ,
p̃(T)=0, q̃(T)=0 in (0,L).
Thanks to the estimate (<ref>), we have
‖ūū_x - ûû_x‖_L^1(L^2)≤ 2C_1 (‖ū‖_L^2(H^1) + ‖û‖_L^2(H^1)) ‖ū - û‖_L^2(H^1) ,
‖v̄v̄_x - v̂v̂_x‖_L^1(L^2)≤ 2C_1 (‖v̄‖_L^2(H^1) + ‖v̂‖_L^2(H^1)) ‖v̄ - v̂‖_L^2(H^1) ,
‖ūv̄_x - ûv̂_x‖_L^1(L^2)≤ C_1 (‖ū‖_L^2(H^1) + ‖v̂‖_L^2(H^1)) (‖ū - û‖_L^2(H^1) + ‖v̄ - v̂‖_L^2(H^1)) ,
‖p̄ū_x - p̂û_x‖_L^1(L^2)≤ C_1 (‖p̄‖_L^2(H^1) + ‖û‖_L^2(H^1)) (‖p̄ - p̂‖_L^2(H^1) + ‖ū - û‖_L^2(H^1)),
‖q̄v̄_x - q̂v̂_x‖_L^1(L^2)≤ C_1 (‖q̄‖_L^2(H^1) + ‖v̂‖_L^2(H^1)) (‖q̄ - q̂‖_L^2(H^1) + ‖v̄ - v̂‖_L^2(H^1)) ,
‖p̄v̄_x - p̂v̂_x‖_L^1(L^2)≤ C_1 (‖p̄‖_L^2(H^1) + ‖v̂‖_L^2(H^1)) (‖p̄ - p̂‖_L^2(H^1) + ‖v̄ - v̂‖_L^2(H^1)) ,
where the constant C_1>0 is the same as the one appearing in (<ref>).
Using the above information and by <Ref>, we can say that there exists some constant C_3>0 such that the solution to (<ref>)–(<ref>) satisfies
‖(ũ,ṽ,p̃,q̃)‖_[Y_1/4]^4
≤ C_3 ( ‖(ū, v̄, p̄, q̄)‖_[L^2(H^1)]^4 + ‖(û, v̂, p̂, q̂)‖_[L^2(H^1)]^4) ‖(ū, v̄, p̄, q̄) - (û, v̂, p̂, q̂)‖_[L^2(H^1)]^4
≤ 2C_3R ‖(ū, v̄, p̄, q̄) - (û, v̂, p̂, q̂)‖_[L^2(H^1)]^4 .
Now, choose R>0 in such a way that 2C_3 R<1, so that the map Λ is a contraction. Therefore, by the Banach fixed point theorem, there exists a unique fixed point of Λ in ℬ_R, which is precisely the unique solution (u,v,p,q) to (<ref>)–(<ref>).
The proof is complete.
§ CARLEMAN ESTIMATES
This section is devoted to obtain a suitable Carleman estimate satisfied by the solution to our adjoint system (<ref>)–(<ref>).
§.§ Choice of Carleman weights
Recall that 𝒪∩ω≠∅ and ω_0⊂⊂𝒪∩ω. Assume that ω_0=(l_0,l_1) with 0<l_0<l_1<L, and set l_1/2=(l_0+l_1)/2. We now consider the weight functions as introduced in <cit.> (see also <cit.>): for K_1, K_2>0 (to be specified later), define the smooth functions
β(x) = 1+K_1 (1-e^ -K_2(x- l_1/2 )^2) , ξ(t) = 1/(t(T-t)) , ∀ x∈ [0,L], ∀ t∈ (0,T),
and
φ(t,x) = ξ(t)β(x) , ∀ (t,x) ∈ (0,T) × [0,L].
For any K_1,K_2>0, we note that β >0 in [0,L] and consequently, φ>0 in (0,T) × [0,L]. We further observe that
there exists some c>0 such that |β_x| ≥ c > 0 in [0,L] ∖ω_0 ,
and β_x(0) <0 , β_x(L) >0 .
Also, one can choose K_1 and K_2 in such a way that
β_xx <0 in [0,L] ∖ω_0 .
Indeed, the property (<ref>) holds true if we set
K_2 =4/(l_1-l_0)^2.
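To make this explicit, a direct computation from the definition of β in (<ref>) gives
β_x(x) = 2K_1K_2 (x-l_1/2) e^-K_2(x-l_1/2)^2 , β_xx(x) = 2K_1K_2 ( 1 - 2K_2(x-l_1/2)^2 ) e^-K_2(x-l_1/2)^2 ,
so that β_xx(x)<0 exactly when |x-l_1/2| > (2K_2)^-1/2. With K_2=4/(l_1-l_0)^2 one has (2K_2)^-1/2 = (l_1-l_0)/(2√(2)) < (l_1-l_0)/2, while every x∈ [0,L]∖ω_0 satisfies |x-l_1/2| ≥ (l_1-l_0)/2; hence (<ref>) holds (the first formula also confirms (<ref>), since β_x vanishes only at x=l_1/2∈ω_0).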
We further consider
φ^*(t) : = min_[0,L]φ(t,x) = ξ(t)β(l_1/2) = ξ(t), ∀ t ∈ (0,T),
φ̂(t) := max_[0,L]φ(t,x) = ξ(t) (max{β(0), β(L) }), ∀ t ∈ (0,T) .
Now, denote
M(K_2, l_1/2) : = max{ 1- e^-K_2 l^2_1/2, 1- e^-K_2 (L-l_1/2)^2 } .
Then, there exists some constant c_0>0 such that the weight functions in (<ref>) verify the following criterion:
36 s φ^*(t) - 35 sφ̂(t) ≥ c_0 s ξ(t) , ∀ t ∈ (0,T),
provided we choose 0 < K_1 < 1/(35 M(K_2, l_1/2)); in particular, we set
K_1 = 1/(70 M(K_2, l_1/2)) .
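For the reader's convenience, here is a short check using only (<ref>) and (<ref>). Since β(l_1/2)=1 and max{β(0), β(L)} = 1+K_1 M(K_2, l_1/2), we have
36 s φ^*(t) - 35 sφ̂(t) = sξ(t) ( 36 - 35 (1+K_1 M(K_2, l_1/2)) ) = sξ(t) ( 1 - 35 K_1 M(K_2, l_1/2) ),
and the choice K_1 = 1/(70 M(K_2, l_1/2)) makes the last factor equal to 1/2, so that (<ref>) holds with c_0=1/2.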
§.§ Carleman estimates for the single KdV equation
Let us prescribe a Carleman estimate for the following system
z_t ± z_xxx = g in Q_T,
z(t,0) = z(t,L) = z_x(t,0) = 0 for t∈ (0,T) ,
z(T) = z_T in (0,L) .
with given right hand side g and final data z_T. One can also consider the boundary conditions:
z(t,0) = z(t,L) = z_x(t,L) = 0 for t∈ (0,T) ,
as a replacement for the set of boundary conditions in (<ref>), and this will not affect the underlying Carleman estimate.
We hereby recall a Carleman estimate for the linear KdV equation which has been obtained for instance in <cit.>; see also <cit.>.
Let T>0 be given and ω_0⊂ (0,L) be a non-empty open set as introduced in Section <ref>. Then, there exist constants C>0 and s_0>0 such that for any g∈ L^2(Q_T) and z_T ∈ L^2(0,L), the solution z to (<ref>)
satisfies
s^5 ∬_Q_T e^-2sξ^5 |z|^2 + s^3 ∬_Q_T e^-2sξ^3 |z_x|^2 + s ∬_Q_T e^-2sξ |z_xx|^2
≤ C (∬_Q_T e^-2s |g|^2 + ∫_0^T ∫_ω_0 e^-2s[ s^5ξ^5 |z|^2 + sξ |z_xx|^2 ] ) ,
for all s≥ s_0.
Now, by using the above proposition, we can obtain a modified Carleman inequality for (<ref>) with a more regular right-hand side g; see (<ref>) below. Although a similar result has already been addressed, for instance in <cit.>, we give a sketch of the proof for the sake of completeness. More precisely, we prove the following proposition.
Let T>0 be given and ω_0⊂ (0,L) be a non-empty open set as introduced in Section <ref>. Also, assume that ν∈ (0,1]. Then, there exist constants C>0 and s_0>0 such that for any g∈ L^2(0,T; H^ν(0,L)) and z_T ∈ L^2(0,L), the solution z to (<ref>) satisfies
s^5 ∬_Q_T e^-2sξ^5 |z|^2 + s^3 ∬_Q_T e^-2sξ^3 |z_x|^2
+ s ∬_Q_T e^-2sξ |z_xx|^2
+ s∫_0^T e^-2sξ^-3z^2_H^2+ν(0,L)
≤
C s^3 ∬_Q_T e^-2sξ |g|^2 + C s∫_0^T e^-2sξ^-3g^2_H^ν(0,L)
+ C s^5 ∫_0^T ∫_ω_0 e^-2sξ^5 |z|^2 + C s ∫_0^T ∫_ω_0 e^-2s (1+ 2/ν) ^* + 4s/ν ξ^1+ 8/ν |z|^2 ,
for all s≥ s_0.
Let us recall the definition of ^* from (<ref>), so that we can rewrite the Carleman estimate (<ref>) as follows:
s^5 ∬_Q_T e^-2sξ^5 |z|^2 + s^3 ∬_Q_T e^-2sξ^3 |z_x|^2 + s ∬_Q_T e^-2sξ |z_xx|^2
≤ C (∬_Q_T e^-2s |g|^2 + s^5 ∫_0^T ∫_ω_0 e^-2sξ^5 |z|^2 + s ∫_0^T ∫_ω_0 e^-2s^*ξ |z_xx|^2 ) ,
for all s≥ s_0.
Our aim is to eliminate the observation integral of z_xx by applying the so-called bootstrap argument. Denote
J := s ∫_0^T ∫_ω_0 e^-2s ^*ξ |z_xx|^2 .
Thanks to the fact that ξ and ^* do not depend on space variable, we have
J ≤ s ∫_0^T e^-2s ^*ξ z_H^2(ω_0)^2 .
Let ν∈ (0,1]. Writing H^2(ω_0) as an interpolation between the spaces H^2+ν(ω_0) and L^2(ω_0), and then applying the Young's inequality, we eventually get
J ≤ s∫_0^T e^-2s ^*ξ z ^4/(2+ν)_H^2+ν(ω_0) z ^2ν/(2+ν)_L^2(ω_0)
≤ϵ s ∫_0^T e^-2sξ^-3z^2_H^2+ν(ω_0) + C_ϵ s ∫_0^T
e^-2s (1+ 2/ν) ^* + 4s/ν ξ^1+ 8/νz^2_L^2(ω_0) ,
for any chosen ϵ>0.
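For the reader's convenience, the exponents in the second term can be traced back as follows (with φ^* and φ̂ as above). Comparing the powers of s and ξ and the exponents of the weights, one checks the exact factorization
s e^-2sφ^*ξ‖ z‖^4/(2+ν)_H^2+ν(ω_0)‖ z‖^2ν/(2+ν)_L^2(ω_0) = ( s e^-2sφ̂ξ^-3‖ z‖^2_H^2+ν(ω_0))^2/(2+ν)( s e^-2s(1+2/ν)φ^* + (4s/ν)φ̂ξ^1+8/ν‖ z‖^2_L^2(ω_0))^ν/(2+ν) ,
so that Young's inequality a^θ b^1-θ≤ϵ a + C_ϵ b with θ = 2/(2+ν) yields precisely the two terms above.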
Now we need to determine a proper estimate for the first term in the r.h.s. of (<ref>). Following the same argument as developed in <cit.> (see also <cit.>), let us employ the bootstrap technique. We define z(t,x):= ρ_1(t) z(t,x) with ρ_1(t)= s^1/2ξ^1/2 e^-s so that z satisfies
z_t ±z_xxx = g_1:= ρ_1 g + ρ_1,t z in Q_T,
z(t,0) = z(t,L) = z_x(t,0) = 0 for t∈ (0,T),
z(T) = 0 .
Note that
|ρ_1,t| ≤ C s^3/2ξ^5/2 e^-s.
Using this, we deduce
g_1 ^2_L^2(Q_T)≤ C s∬_Q_T e^-2sξ |g|^2 + C s^3 ∫_Q_T e^-2sξ^5 |z|^2 ,
and therefore, by virtue of <Ref>, we have z∈ Y_1/2, where the space Y_1/2 is introduced in (<ref>).
In particular, one has
z^2_L^2(0,T; H^2(0,L))≤
C g_1 ^2_L^2(Q_T) .
Next, we define z(t,x)= ρ_2(t) z(t,x) with ρ_2(t)= s^1/2ξ^-3/2e^-s. Then z satisfies
z_t ±z_xxx = g_2: = ρ_2 g + ρ_2,tρ^-1_1z in Q_T,
z(t,0) = z(t,L) = z_x(t,0) = 0 for t∈ (0,T),
z(T) = 0 .
Let us compute that
|ρ_2,tρ^-1_1| ≤ Cs.
Then, thanks to the fact that z∈ Y_1/2 and g∈ L^2(0,T; H^ν(0,L)), one has g_2 ∈ L^2(0,T; H^ν(0,L)) (=X_1/2 +ν/4 as defined by (<ref>)).
As a consequence, by <Ref> we have
z∈ Y_1/2 +ν/4 = L^2(0,T; H^2+ν(0,L)) ∩^0([0,T]; H^1+ν(0,L)),
with its estimate
z^2_L^2(H^2+ν) ∩ L^∞(H^1+ν) ≤ C g_2 ^2_L^2(H^ν)
≤ C s ∫_0^T e^-2sξ^-3g^2_H^ν(0,L) + C s^2 z^2_L^2(0,T; H^2(0,L)) .
Thereafter, from (<ref>), (<ref>) and (<ref>) we obtain
s∫_0^T e^-2sξ^-3z^2_H^2+ν(0,L)
≤ C s ∫_0^T e^-2sξ^-3g^2_H^ν(0,L) + C s^3 ∬_Q_Tξ e^-2s |g|^2 + C s^5 ∬_Q_T e^-2sξ^5 |z|^2 .
Utilizing the above estimate in (<ref>) and combining with (<ref>), (<ref>) and (<ref>) we get the desired Carleman estimate (<ref>) by choosing ϵ>0 small enough.
The proof is finished.
§.§ Carleman estimate for the 4× 4 adjoint system
We are now in position to prove a Carleman estimate for the 4× 4 adjoint system (<ref>)–(<ref>). Consider ν=1 in <Ref> and denote
I(z, s)
: = s^5 ∬_Q_T e^-2sξ^5 |z|^2 + s^3 ∬_Q_T e^-2sξ^3 |z_x|^2 + s ∬_Q_T e^-2sξ |z_xx|^2
+ s∫_0^T e^-2sξ^-3z^2_H^3(0,L) ,
so that the inequality (<ref>) becomes
I(z,s)
≤ C s^3 ∬_Q_T e^-2sξ |g|^2 + C s∫_0^T e^-2sξ^-3g^2_H^1(0,L)
+ C s^5 ∫_0^T ∫_ω_0 e^-2sξ^5 |z|^2 + C s ∫_0^T ∫_ω_0 e^-6s ^* + 4s ξ^9 |z|^2
≤ C s^3 ∬_Q_T e^-2sξ |g|^2 + C s∫_0^T e^-2sξ^-3g^2_H^1(0,L)
+C s^5 ∫_0^T ∫_ω_0 e^-6s ^* + 4s ξ^9 |z|^2 ,
for all s≥ s_0.
Let us state and prove the main result of the current section. From now onwards, we shall consider the source terms g_i ∈ L^2(0,T; H^1_0(0,L)) for each i∈{1,2,3,4} in the adjoint system (<ref>)–(<ref>).
Let T>0 be given and ω_0⊂ (0,L) be a non-empty open set as introduced in Section <ref>. Then there exist constants C>0 and s_0:=s_0(T)>0 such that for any given source terms g_i∈ L^2(0,T; H^1_0(0,L)) for i=1,2,3,4, and (ζ_0,θ_0) ∈ [L^2(0,L)]^2, the solution to (<ref>)–(<ref>) satisfies
I(η,s) + I(ψ, s) + I(ζ, s) + I(θ, s)
≤
C s^5 ∬_Q_T e^-12s^*+ 10sξ^13(|g_1,x|^2 + |g_2,x|^2 + |g_3,x|^2 + |g_4,x|^2 )
+ C s^25∫_0^T ∫_ω_0 e^-36s^* + 34sξ^57 |η|^2 + C s^25∫_0^T ∫_ω_0 e^-36s^* + 34sξ^57 |ψ|^2 ,
for all s≥s_0, where I(·, s) has been defined in (<ref>).
We apply <Ref>, more specifically the estimate (<ref>) to each of the states η, ψ, ζ and θ of the adjoint system (<ref>)–(<ref>).
In what follows, η satisfies
I(η, s) ≤
C s^3 ∬_Q_T e^-2sξ(|ζ|^2 + |g_1|^2)
+ C s∫_0^T e^-2sξ^-3( ζ^2_H^1(0,L) + g_1^2_H^1(0,L))
+C s^5 ∫_0^T ∫_ω_0 e^-6s ^* + 4s ξ^9 |η|^2
≤
C s^3 ∬_Q_T e^-2sξ(|ζ|^2 + |g_1|^2)
+ C s∬_Q_T e^-2sξ^-3( |ζ_x|^2 + |g_1,x|^2 )
+ C s^5 ∫_0^T ∫_ω_0 e^-6s ^* + 4s ξ^9 |η|^2 ,
for all s≥ s_0, where we have used the Poincaré inequality since we have ζ∈ L^2(0,T;H^1_0(0,L)) and the source term g_1 belongs to L^2(0,T; H^1_0(0,L)).
Similarly, ψ, ζ and θ satisfy
I(ψ, s)
≤ C s^3 ∬_Q_T e^-2sξ(|θ|^2 + |g_2|^2)
+ C s∬_Q_T e^-2sξ^-3( |θ_x|^2 + |g_2,x|^2 )
+ C s^5 ∫_0^T ∫_ω_0 e^-6s ^* + 4s ξ^9 |ψ|^2 ,
I(ζ, s)
≤ C s^3 ∬_Q_T e^-2sξ |g_3|^2
+ C s∬_Q_T e^-2sξ^-3
|g_3,x|^2
+ C s^5 ∫_0^T ∫_ω_0 e^-6s ^* + 4s ξ^9 |ζ|^2 ,
I(θ, s)
≤ C s^3 ∬_Q_T e^-2sξ |g_4|^2
+ C s∬_Q_T e^-2sξ^-3
|g_4,x|^2
+ C s^5 ∫_0^T ∫_ω_0 e^-6s ^* + 4s ξ^9 |θ|^2 ,
for all s≥ s_0.
I. Absorbing the lower order terms.
Thanks to the fact 1≤ 2 T^2 ξ and by adding (<ref>), (<ref>), (<ref>) and (<ref>), we obtain
I(η,s) + I(ψ, s) + I(ζ, s) + I(θ, s)
≤ C s^3 ∬_Q_T e^-2sξ(|g_1|^2 + |g_2|^2 + |g_3|^2 + |g_4|^2 )
+ C s ∬_Q_T e^-2sξ^-3(|g_1,x|^2 + |g_2,x|^2 + |g_3,x|^2 + |g_4,x|^2 ) + C s^3 T^8 ∬_Q_T e^-2sξ^5 (|ζ|^2 + |θ|^2 )
+ C s T^12∬_Q_T e^-2sξ^3(|ζ_x|^2 + |θ_x|^2 )
+ C s^5 ∫_0^T ∫_ω_0 e^-6s ^* + 4s ξ^9(|η|^2+ |ψ|^2 + |ζ|^2 + |θ|^2 ) .
Then there is some c_0>0 such that by taking s≥s_0:= c_0(T^4+T^6), we can absorb the integrals C s^3 T^8 ∬_Q_T e^-2sξ^5 (|ζ|^2 + |θ|^2 ) and C s T^12∬_Q_T e^-2sξ^3 (|ζ_x|^2 + |θ_x|^2 ) by the corresponding leading terms in the left hand side of (<ref>).
Recall that g_i∈ L^2(0,T; H^1_0(0,L)) for i∈{1,2,3,4}, and thus by employing the Poincaré inequality, we get
∬_Q_T e^-2sξ |g_i|^2 ≤∬_Q_T e^-2s^*ξ |g_i|^2 ≤ C ∬_Q_T e^-2s^*ξ |g_i,x|^2 .
As a consequence of the above information, the inequality (<ref>) now reduces to
I(η,s) + I(ψ, s) + I(ζ, s) + I(θ, s)
≤ C s^3 ∬_Q_T e^-2s^*ξ(|g_1,x|^2 + |g_2,x|^2 + |g_3,x|^2 + |g_4,x|^2 )
+ C s^5 ∫_0^T ∫_ω_0 e^-6s ^* + 4sξ^9(|η|^2+ |ψ|^2 + |ζ|^2 + |θ|^2 ) ,
for all s≥s_0.
II. Absorbing the unusual observation terms. In this part, we eliminate the observation integrals associated to ζ and θ.
∙ Consider a nonempty open set ω̃⊂⊂ω_0 and a function
ϕ∈^∞_c(ω_0) such that 0≤ϕ≤ 1 in ω_0 and ϕ=1 in ω̃. In fact, one can obtain the estimate (<ref>) with the observation domain ω̃. From the equation of η, that is (<ref>)_1, we have
ζ = -η_t + 1/2η_xxx - g_1 in 𝒪 (consequently in ω_0) ,
which yields
s^5 ∫_0^T ∫_ω̃ e^-6s ^* + 4sξ^9 |ζ|^2
≤ s^5 ∫_0^T ∫_ω_0ϕ e^-6s ^* + 4s ξ^9ζ(-η_t + 1/2η_xxx - g_1 )
:= J_1 + J_2 + J_3 .
We now estimate the terms J_1, J_2 and J_3.
(i) Estimate of J_1.
Integrating by parts with respect to time t, the term J_1 becomes
J_1 = s^5 ∫_0^T ∫_ω_0ϕ(e^-6s ^* + 4s ξ^9)_t ζη + s^5 ∫_0^T ∫_ω_0ϕ e^-6s ^* + 4sξ^9ζ_t η
:= J_1,1 + J_1,2.
Using the fact
| (e^-6s ^* + 4sξ^9)_t | ≤ C T s e^-6s ^* + 4sξ^11 ,
and the Young's inequality, we first deduce that
|J_1,1| ≤ C T s^6 ∫_0^T ∫_ω_0 e^-6s ^* + 4sξ^11 |ζη |
≤ϵ s^5 ∬_Q_T e^-2sξ^5 |ζ|^2 + C_ϵ s^7 ∫_0^T ∫_ω_0 e^-12s^* + 10 sξ^17 |η|^2
for given ϵ>0.
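Let us also record why the bound on the time derivative used above holds. Since ξ_t = -(T-2t)ξ^2, one has |ξ_t| ≤ Tξ^2 and hence |φ^*_t| + |φ̂_t| ≤ C Tξ^2; differentiating the product therefore gives
| (e^-6s φ^* + 4sφ̂ξ^9)_t | ≤ C e^-6s φ^* + 4sφ̂( s Tξ^2·ξ^9 + Tξ^2·ξ^8 ) ≤ C T s e^-6s φ^* + 4sφ̂ξ^11 ,
for s≥ 1, recalling that the generic constant C is allowed to depend on T.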
We now recall the equation of ζ from (<ref>)_1; by performing an integrating by parts in space, we have
J_1,2 =
s^5 ∫_0^T ∫_ω_0ϕ e^-6s ^* + 4sξ^9η(1/2ζ_xxx + g_3 )
= - 1/2s^5 ∫_0^T ∫_ω_0ϕ e^-6s ^* + 4sξ^9η_x ζ_xx
- 1/2s^5 ∫_0^T ∫_ω_0ϕ_x e^-6s ^* + 4sξ^9ηζ_xx
+ s^5 ∫_0^T ∫_ω_0ϕ e^-6s ^* + 4sξ^9η g_3
: = J^1_1,2 + J^2_1,2 + J^3_1,2 .
Using the Young's inequality, we compute
|J^1_1,2| ≤ϵ s ∬_Q_T e^-2sξ |ζ_xx|^2 + C_ϵ s^9 ∫_0^T ∫_ω_0 e^-12s^* + 10sξ^17 |η_x|^2
| J^2_1,2 |
≤ϵ s ∬_Q_T e^-2sξ |ζ_xx|^2 + C_ϵ s^9 ∫_0^T ∫_ω_0 e^-12 s ^* + 10 sξ^17 |η|^2 ,
for any given ϵ>0.
We also have
| J^3_1,2 |
≤ϵ s^5 ∬_Q_T e^-2sξ^5 |η|^2 + C_ϵ s^5 ∬_Q_T e^-12 s ^* + 10 sξ^13 |g_3|^2 ,
for any ϵ>0.
Combining (<ref>), (<ref>), (<ref>), (<ref>) in (<ref>), we obtain
|J_1| ≤ϵ s^5 ∬_Q_T e^-2sξ^5 |ζ|^2 + 2ϵ s ∬_Q_T e^-2sξ |ζ_xx|^2 + ϵ s^5 ∬_Q_T e^-2sξ^5 |η|^2
+ C_ϵ s^9 ∫_0^T ∫_ω_0 e^-12s^* + 10 sξ^17( |η|^2 +|η_x|^2 ) + C s^5 ∬_Q_T e^-12 s ^* + 10 sξ^13 |g_3|^2 .
(ii) Estimate of J_2. We recall the term J_2 from (<ref>). Integration by parts in space, we get
J_2 = - 1/2s^5 ∫_0^T ∫_ω_0ϕ e^-6s^* + 4sξ^9 ζ_x η_xx - 1/2s^5 ∫_0^T ∫_ω_0ϕ_x e^-6s^* + 4sξ^9 ζη_xx ,
and then by applying the Young's inequality we have
|J_2| ≤ϵ s^3 ∬_Q_T e^-2sξ^3 |ζ_x|^2 + ϵ s^5 ∬_Q_T e^-2sξ^5 |ζ|^2 +
C_ϵ s^7 ∫_0^T ∫_ω_0 e^-12s^* + 10sξ^15 |η_xx|^2 ,
for given ϵ>0.
(iii) Estimate of J_3. Finally, the term J_3 can be estimated as follows:
|J_3| ≤ϵ s^5 ∬_Q_T e^-2sξ^5 |ζ|^2 + C_ϵ s^5 ∬_Q_T e^-12s^* + 10sξ^13 |g_1|^2 .
Combining the estimates of J_1, J_2 and J_3 obtained in (<ref>), (<ref>) and (<ref>), respectively, we obtain that the observation integral (<ref>) of ζ satisfies:
s^5 ∫_0^T ∫_ω̃ e^-6s ^* + 4sξ^9 |ζ|^2
≤ϵ s^5 ∬_Q_T e^-2sξ^5 (3 |ζ|^2 + |η|^2 ) + ϵ s^3 ∬_Q_T e^-2sξ^3 |ζ_x|^2 + 2ϵ s∬_Q_T e^-2sξ |ζ_xx|^2
+ C_ϵ s^5 ∬_Q_T e^-12s^* + 10sξ^13(|g_1|^2 + |g_3|^2 ) +
C_ϵ s^9 ∫_0^T ∫_ω_0 e^-12s^* + 10 sξ^17 |η|^2
+ C_ϵ s^9 ∫_0^T ∫_ω_0 e^-12s^* + 10 sξ^17( |η_x|^2 + |η_xx|^2 ),
for given ϵ>0.
∙ We apply a similar procedure to handle the observation integral of θ. By expressing
θ = -ψ_t - ψ_xxx - g_2 in 𝒪 (consequently in ω_0)
from the equation (<ref>)_2, and following the same steps as previous, one can achieve
s^5 ∫_0^T ∫_ω̃ e^-6s ^* + 4sξ^9 |θ|^2
≤ϵ s^5 ∬_Q_T e^-2sξ^5 (3 |θ|^2 + |ψ|^2 ) + ϵ s^3 ∬_Q_T e^-2sξ^3 |θ_x|^2 + 2ϵ s∬_Q_T e^-2sξ |θ_xx|^2
+ C_ϵ s^5 ∬_Q_T e^-12s^* + 10sξ^13(|g_2|^2 + |g_4|^2 ) +
C_ϵ s^9 ∫_0^T ∫_ω_0 e^-12s^* + 10 sξ^17 |ψ|^2
+ C_ϵ s^9 ∫_0^T ∫_ω_0 e^-12s^* + 10 sξ^17( |ψ_x|^2 + |ψ_xx|^2 ),
for any ϵ>0.
Fix ϵ>0 small enough in (<ref>) and (<ref>), so that the integrals of ζ, η, ζ_x, ζ_xx, θ, ψ, θ_x and θ_xx in Q_T can be absorbed by the associated leading integrals in the left hand side of (<ref>).
∙ Now, it remains to find proper estimates for the observation integrals concerning η_x, η_xx in (<ref>), and ψ_x, ψ_xx in (<ref>).
Indeed, writing the space H^2(ω_0) as an interpolation between H^3(ω_0) and L^2(ω_0), we find that
s^9 ∫_0^T ∫_ω_0 e^-12s^* + 10 sξ^17( |η_x|^2 + |η_xx|^2 )
≤ s^9 ∫_0^T e^-12s^* + 10 sξ^17η^2_H^2(ω_0)
≤ s^9 ∫_0^T e^-12s^* + 10 sξ^17η^4/3_H^3(ω_0)η^2/3_L^2(ω_0)
≤ε s ∫_0^T e^-2sξ^-3η^2_H^3(0,L) + C_ε s^25∫_0^T ∫_ω_0 e^-36s^* + 34sξ^57 |η|^2 ,
for any given ε>0.
Similar technique will provide
s^9 ∫_0^T ∫_ω_0 e^-12s^* + 10 sξ^17( |ψ_x|^2 + |ψ_xx|^2 )
≤ε s ∫_0^T e^-2sξ^-3ψ^2_H^3(0,L) + C_ε s^25∫_0^T ∫_ω_0 e^-36s^* + 34sξ^57 |ψ|^2 .
Then, a proper choice of ε>0 helps to get rid of the first integrals in the right hand sides of (<ref>) and (<ref>) by means of the leading terms s ∫_0^T e^-2sξ^-3η^2_H^3(0,L) and s ∫_0^T e^-2sξ^-3ψ^2_H^3(0,L) in the left hand side of (<ref>).
As a consequence of the above analysis, the estimate (<ref>) boils down to the following:
I(η,s) + I(ψ, s) + I(ζ, s) + I(θ, s)
≤ C s^3 ∬_Q_T e^-2s^*ξ(|g_1,x|^2 + |g_2,x|^2 + |g_3,x|^2 + |g_4,x|^2 )
+ C s^5 ∫_Q_T e^-12 s^* + 10 sξ^13(|g_1|^2 + |g_2|^2 + |g_3|^2 + |g_4|^2 )
+ C s^25∫_0^T ∫_ω_0 e^-36s^* + 34sξ^57 |η|^2 + C s^25∫_0^T ∫_ω_0 e^-36s^* + 34sξ^57 |ψ|^2 ,
for all s≥s_0. To this end, due to the choices of g_i ∈ L^2(0,T; H^1_0(0,L)) for i∈{1,2,3,4}, and the definitions of ^*, given by (<ref>), we can easily conclude the required Carleman estimate (<ref>) for the adjoint system (<ref>)–(<ref>).
§ NULL-CONTROLLABILITY OF THE EXTENDED LINEARIZED SYSTEM
In this section, we establish the global null-controllability of the extended linearized system (<ref>)–(<ref>). As we have mentioned earlier, the main ingredient to prove the result is to first obtain a suitable observability inequality for the adjoint system (<ref>)–(<ref>), and we do this in the following subsection.
§.§ Observability inequality
Let us first construct some modified Carleman weights (from the existing ones (<ref>)–(<ref>)) that do not vanish at t=T. We consider
ℨ(t)=
1/t(T-t), 0 < t≤ T/2,
4/T^2, T/2 < t≤ T,
and
the weight function
𝔖(t,x) = ℨ(t) β(x) , ∀ (t,x) ∈ (0,T] × [0,L] ,
where the function β is introduced by (<ref>).
We further define
𝔖^*(t) = min_[0,L]𝔖(t,x) = ℨ(t)β(l_1/2) , ∀ t∈ (0,T)
𝔖(t) = max_[0,L]𝔖(t,x) = ℨ(t) ( max{β(0), β(L)}) , ∀ t∈ (0,T) .
Recall the former weight functions ξ, φ, φ̂ and φ^* from (<ref>), (<ref>) and (<ref>), respectively. Then, by construction we observe that
ξ(t) = ℨ(t) , in (0,T/2] , 𝔖(t,x) = φ(t,x), in (0,T/2] × [0,L] ,
and further,
𝔖^*(t) = φ^*(t) , 𝔖(t) = φ̂(t), in (0,T/2] .
With all these, we derive the following observability inequality associated to the adjoint system (<ref>)–(<ref>).
Let s be fixed in accordance with <Ref>. Then, there exists some constant C>0 that at most depends on s, T, ω and such that for any given source terms g_i ∈ L^2(0,T; H^1_0(0,L)) for i=1,2,3,4 and (ζ_0, θ_0)∈ [L^2(0,L)]^2, the solution to (<ref>)–(<ref>) satisfies the following estimate
‖ζ(T)‖^2_L^2(0,L) + ‖θ(T)‖^2_L^2(0,L)
+ ‖ e^-s𝔖ℨ^1/2 (η, ψ, ζ, θ)‖^2_[L^∞(0,T; L^2(0,L))]^4
+ ∬_Q_T e^-2 s𝔖ℨ^3 ( |η_x|^2 + |ψ_x|^2 + |ζ_x|^2 + |θ_x|^2
)
≤ C ∬_Q_T e^-12s𝔖^*+ 10s𝔖ℨ^13(|g_1,x|^2 + |g_2,x|^2 + |g_3,x|^2 + |g_4,x|^2 )
+ C ∫_0^T ∫_ω_0 e^-36s 𝔖^* + 34s 𝔖ℨ^57(|η|^2 + |ψ|^2 ).
The proof is made of two steps.
∙ Step 1.
Let us recall the definitions of weight functions , from (<ref>), (<ref>). Then, it is easy to observe from the Carleman estimate (<ref>) that the following inequality holds true:
∬_Q_T e^-2sξ^5 ( |η|^2 + |ψ|^2 + |ζ|^2 + |θ|^2
)
+ ∬_Q_T e^-2sξ^3 ( |η_x|^2 + |ψ_x|^2 + |ζ_x|^2 + |θ_x|^2
)
≤ C ∬_Q_T e^-12s^*+ 10sξ^13(|g_1,x|^2 + |g_2,x|^2 + |g_3,x|^2 + |g_4,x|^2 )
+ C ∫_0^T ∫_ω_0 e^-36s^* + 34sξ^57 |η|^2 + C ∫_0^T ∫_ω_0 e^-36s^* + 34sξ^57 |ψ|^2 .
Now, choose a function γ∈^1([0,T]) such that
γ = 0 in [0, T/4], γ = 1 in [T/2, T] .
It is clear that Supp (γ^') ⊂ (T/4, T/2).
Consider the equations satisfied by (γη, γψ, γζ , γθ), namely
-(γη)_t +1/2(γη)_xxx = γζ1_𝒪 + γ g_1 - γ^'η in Q_T,
-(γψ)_t - (γψ)_xxx = γθ1_𝒪 + γ g_2 - γ^'ψ in Q_T,
(γη)(t,0) = (γη)(t,L) = (γη_x)(t,L) = 0 for t∈ (0,T),
(γψ)(t,0) = (γψ)(t,L) = (γψ_x)(t,0) =0 for t∈ (0,T) ,
(γη)(T) = 0, (γψ)(T) = 0 in (0,L)
(γζ)_t - 1/2(γζ)_xxx = γ g_3 + γ^'ζ in Q_T,
(γθ)_t + (γθ)_xxx = γ g_4 + γ^'θ in Q_T,
(γζ)(t,0) = (γζ)(t,L) = (γζ_x)(t,0) = 0 for t∈ (0,T),
(γθ)(t,0) = (γθ)(t,L) = (γθ_x)(t,L) =0 for t∈ (0,T) ,
(γζ)(0)=0, (γθ)(0)= 0 in (0,L) ,
where we have used the adjoint equations (<ref>)–(<ref>).
Applying <Ref> to (<ref>)–(<ref>)
we get
(γη, γψ, γζ , γθ) _[L^2(0,T; H^1_0(0,L))]^4 + (γη, γψ, γζ , γθ) _[L^∞(0,T; L^2(0,L))]^4
≤ C ((γ g_1, γ g_2, γ g_3, γ g_4)_[L^2(0,T;L^2(0,L) )]^4 + (γ^'η, γ^'ψ, γ^'ζ, γ^'θ)_[L^2(0,T; L^2(0,L))]^4) .
Thanks to the properties of γ introduced in (<ref>), we have from (<ref>) that
(η, ψ, ζ , θ) _[L^2(T/2,T; H^1_0(0,L))]^4 + ζ(T)_L^2(0,L) + θ(T)_L^2(0,L)
≤ C ((g_1, g_2, g_3, g_4)_[L^2(T/4,T;L^2(0,L) )]^4 + (η, ψ, ζ, θ)_[L^2(T/4,T/2; L^2(0,L))]^4) .
Note that the function e^-2 s𝔖ℨ^5 is bounded from below in [T/4,T/2] and therefore, in the right hand side of (<ref>) we find that
(η, ψ, ζ, θ)^2_[L^2(T/4,T/2; L^2(0,L))]^4
≤ C ∫_T/4^T/2∫_0^L e^-2s𝔖ℨ^5 ( |η|^2 + |ψ|^2 + |ζ|^2 + |θ|^2 )
≤ C ∬_Q_T e^-12s𝔖^*+ 10s𝔖ℨ^13(|g_1,x|^2 + |g_2,x|^2 + |g_3,x|^2 + |g_4,x|^2 )
+ C ∫_0^T ∫_ω_0 e^-36s 𝔖^* + 34s 𝔖ℨ^57(|η|^2 + |ψ|^2 ),
where we have used the fact that ℨ=ξ and 𝔖 = in [T/4,T/2] (see <Ref>) as well as the Carleman estimate (<ref>).
Also, we can incorporate the function e^-2 s𝔖ℨ^n for any n ∈ℕ^* (which is bounded from above in [T/2,T]) in the estimate of ‖(η, ψ, ζ, θ)‖_[L^2(T/2,T; H^1_0(0,L))]^4 in the left hand side of (<ref>). Combining this with (<ref>), the estimate (<ref>) follows:
ζ(T)^2_L^2(0,L) + θ(T)^2_L^2(0,L) + ∫_T/2^T ∫_0^L e^-2 s𝔖ℨ^5 ( |η|^2 + |ψ|^2 + |ζ|^2 + |θ|^2 )
+
∫_T/2^T ∫_0^L e^-2 s𝔖ℨ^3 ( |η_x|^2 + |ψ_x|^2 + |ζ_x|^2 + |θ_x|^2
)
≤ C ∬_Q_T e^-12s𝔖^*+ 10s𝔖ℨ^13(|g_1,x|^2 + |g_2,x|^2 + |g_3,x|^2 + |g_4,x|^2 )
+ C ∫_0^T ∫_ω_0 e^-36s 𝔖^* + 34s 𝔖ℨ^57(|η|^2 + |ψ|^2 ),
since g_i ∈ L^2(0,T; H^1_0(0,L)) for i=1,2,3,4.
On the other hand, since ℨ=ξ and 𝔖 = in (0,T/2] (<Ref>), we deduce that the
integrals
∫_0^T/2∫_0^L e^-2 s𝔖ℨ^5 ( |η|^2 + |ψ|^2 + |ζ|^2 + |θ|^2
) ,
and ∫_0^T/2∫_0^L e^-2 s𝔖ℨ^3 ( |η_x|^2 + |ψ_x|^2 + |ζ_x|^2 + |θ_x|^2
)
can be estimated by the same quantities appearing in the right hand side of (<ref>) (thanks to the Carleman estimate (<ref>)). As a consequence, we have
ζ(T)^2_L^2(0,L) + θ(T)^2_L^2(0,L)
+
∬_Q_T e^-2 s𝔖ℨ^5 ( |η|^2 + |ψ|^2 + |ζ|^2 + |θ|^2
)
+
∬_Q_T e^-2 s𝔖ℨ^3 ( |η_x|^2 + |ψ_x|^2 + |ζ_x|^2 + |θ_x|^2
)
≤ C ∬_Q_T e^-12s𝔖^*+ 10s𝔖ℨ^13(|g_1,x|^2 + |g_2,x|^2 + |g_3,x|^2 + |g_4,x|^2 )
+ C ∫_0^T ∫_ω_0 e^-36s 𝔖^* + 34s 𝔖ℨ^57(|η|^2 + |ψ|^2 ).
∙ Step 2. Let us define ρ(t)= e^-s𝔖ℨ^1/2 so that ρ(0)=0. Again, by applying <Ref> to the equations satisfied by
(ρη, ρψ, ρζ, ρθ),
we get
(ρη, ρψ, ρζ , ρθ) _[L^∞(0,T; L^2(0,L))]^4
≤ C ((ρg_1, ρg_2, ρg_3, ρg_4)_[L^2(0,T;L^2(0,L))]^4 + (ρ_t η, ρ_t ψ, ρ_t ζ, ρ_t θ)_[L^2(0,T; L^2(0,L))]^4) .
Note that
|ρ_t| ≤ C s e^-s𝔖ℨ^5/2 since |𝔖_t| ≤ C ℨ^2 ,
for some constant C>0, and therefore,
(ρ_t η, ρ_t ψ, ρ_t ζ, ρ_t θ)^2_[L^2(0,T; L^2(0,L))]^4
≤ C ∬_Q_T e^-2s𝔖ℨ^5(|η|^2 + |ψ|^2 + |ζ|^2 + |θ|^2 )
≤ C ∬_Q_T e^-12s𝔖^*+ 10s𝔖ℨ^13(|g_1,x|^2 + |g_2,x|^2 + |g_3,x|^2 + |g_4,x|^2 )
+ C ∫_0^T ∫_ω_0 e^-36s 𝔖^* + 34s 𝔖ℨ^57(|η|^2 + |ψ|^2 ).
Using the above estimate in (<ref>), and combining with (<ref>), we get the desired observability inequality (<ref>).
The proof is complete.
§.§ Null-controllability
This subsection is devoted to proving the global null-controllability of the extended linearized system (<ref>)–(<ref>) with initial data (u_0,v_0)=(0,0) and source terms f_i∈ F with either F= L^1(0,T; L^2(0,L)) or F= L^2(0,T; H^-1(0,L)), for i=1,2,3,4. We mainly address the proof for (f_1, f_2, f_3, f_4)∈ [L^1(0,T; L^2(0,L))]^4; for the case when (f_1, f_2, f_3, f_4)∈ [L^2(0,T; H^-1(0,L))]^4, we point out the main changes in the proof.
Denote the Banach space
ℰ := { (u,v , p, q, h_1, h_2) | e^6s𝔖^* - 5s 𝔖ℨ^-13/2 (u, v, p, q) ∈ [L^2(0,T ; H^-1(0,L))]^4 ,
e^18s𝔖^* - 17s 𝔖ℨ^-57/2(h_1 1_ω , h_2 1_ω) ∈ [L^2((0,T)×ω)]^2 ,
e^18s 𝔖^* - 17 s 𝔖ℨ^-61/2 (u, v, p, q) ∈ [^0([0,T]; L^2(0,L))]^4
∩ [L^2(0,T; H^1_0(0,L))]^4 ,
e^s𝔖ℨ^-1/2(u_t - 1/2 u_xxx - h_1 1_ω) ∈ L^1(0,T; L^2(0,L)),
e^s𝔖ℨ^-1/2(v_t + v_xxx - h_2 1_ω) ∈ L^1(0,T; L^2(0,L)),
e^s𝔖ℨ^-1/2(-p_t + 1/2 p_xxx - u 1_) ∈ L^1(0,T; L^2(0,L)) ,
e^s𝔖ℨ^-1/2(-q_t - q_xxx - v 1_) ∈ L^1(0,T; L^2(0,L)),
p(T, ·)=q(T, ·)=0 in (0,L) },
and we prove the following null-controllability result.
Let s be a fixed parameter chosen according to <Ref> and let f_1, f_2, f_3, f_4 be functions satisfying
e^s𝔖ℨ^-1/2(f_1, f_2, f_3,f_4) ∈ [L^1(0,T; L^2(0,L))]^4 .
Then, there exist controls (h_1, h_2) and a solution (u, v, p, q) to (<ref>)–(<ref>) in the space ℰ such that we have
p(0)=q(0)=0 in (0,L).
We consider the space
𝒬_0 := { (η, ψ, ζ, θ) ∈ [^4(Q_T)]^4 | η(t,0) = η(t,L) = η_x(t,L) = 0 ,
(1/2η_xxx - ζ1_)|_{x=0} = (1/2η_xxx - ζ1_)|_{x=L} =0 ,
ψ(t,0) = ψ(t,L) = ψ_x(t,0)= 0 ,
(ψ_xxx + θ1_)|_{x=0} = (ψ_xxx + θ1_)|_{x=L} =0 ,
ζ(t,0)=ζ(t,L) = ζ_x(t,0)= ζ_xxx(t,0) = ζ_xxx(t,L) =0 ,
θ(t,0)=θ(t,L) = θ_x(t,L)= θ_xxx(t,0) = θ_xxx(t,L) =0 } ,
and define the bi-linear form
ℒ : 𝒬_0 ×𝒬_0 →ℝ ,
given by
ℒ( (η, ψ, ζ, θ), (η, ψ, ζ, θ ) )
= ∬_Q_T e^-12s𝔖^*+ 10s𝔖ℨ^13[ (-η_t + 1/2η_xxx - ζ1_)_x (-η_t + 1/2η_xxx - ζ1_)_x
+ (-ψ_t - ψ_xxx - θ1_)_x (-ψ_t - ψ_xxx - θ1_)_x
+ (ζ_t - 1/2ζ_xxx)_x (ζ_t - 1/2ζ_xxx)_x
+ (θ_t + θ_xxx)_x (θ_t + θ_xxx)_x ]
+ ∫_0^T ∫_ω_0 e^-36s𝔖^*+ 34s𝔖ℨ^57( ηη+ ψψ) .
We further define the linear operator ℓ : 𝒬_0 →ℝ which is given by
ℓ( (η, ψ, ζ, θ) )
= ⟨ f_1, η⟩_L^1(L^2), L^∞(L^2) + ⟨ f_2, ψ⟩_L^1(L^2), L^∞(L^2) +
⟨ f_3, ζ⟩_L^1(L^2), L^∞(L^2) + ⟨ f_4, θ⟩_L^1(L^2), L^∞(L^2) .
It is clear that (<ref>) defines an inner product since the observability inequality (<ref>) holds.
We denote by 𝒬 the closure of 𝒬_0 with respect to the norm ℒ(·,·)^1/2; it is a Hilbert space endowed with the inner product (<ref>). The linear functional ℓ is also bounded;
in fact, we see
| ⟨ f_1, η⟩_L^1(L^2), L^∞(L^2) + ⟨ f_2, ψ⟩_L^1(L^2), L^∞(L^2) +
⟨ f_3, ζ⟩_L^1(L^2), L^∞(L^2) + ⟨ f_4, θ⟩_L^1(L^2), L^∞(L^2)|
≤e^s𝔖ℨ^-1/2 (f_1, f_2, f_3,f_4) _[L^1(0,T; L^2(0,L))]^4×e^-s𝔖ℨ^1/2 (η, ψ, ζ,θ) _[L^∞(0,T; L^2(0,L))]^4
< +∞ ,
which is possible due to the choice (<ref>) and the observation estimate (<ref>).
Therefore, by the Lax–Milgram theorem applied to the bilinear form ℒ on 𝒬×𝒬, there exists a unique (η, ψ, ζ, θ)∈𝒬 which satisfies
ℒ( (η, ψ, ζ, θ), (η, ψ, ζ, θ) ) = ℓ( (η, ψ, ζ, θ) ) , ∀ (η, ψ, ζ, θ) ∈𝒬.
Now, we set
u = e^-12s𝔖^*+10s𝔖ℨ^13(-η_t + 1/2η_xxx - ζ1_)_xx,
v = e^-12s𝔖^*+10s𝔖ℨ^13(-ψ_t -ψ_xxx - θ1_)_xx,
p = e^-12s𝔖^*+10s𝔖ℨ^13(ζ_t - 1/2ζ_xxx)_xx,
q = e^-12s𝔖^*+10s𝔖ℨ^13(θ_t + θ_xxx)_xx,
and
h_1 = e^-36 s 𝔖^* + 34 s𝔖ℨ^57η1_ω , h_2 =e^-36 s 𝔖^* + 34 s𝔖ℨ^57ψ1_ω .
Let us find the following bound for u; we have
∫_0^T e^12s𝔖^* - 10 s 𝔖ℨ^-13u^2_H^-1(0,L)
= ∫_0^T e^12s𝔖^* - 10 s 𝔖ℨ^-13sup_ϑ_H^1_0 =1 |⟨u, ϑ⟩|^2_H^-1, H^1_0
= ∫_0^T e^12s𝔖^* - 10 s 𝔖ℨ^-13sup_ϑ_H^1_0 =1 | ⟨ e^-12s𝔖^* + 10 s 𝔖ℨ^13( -η_t + 1/2η_xxx - ζ1_)_xx
, ϑ⟩|^2_H^-1, H^1_0
≤ ∬_Q_T e^-12s𝔖^* + 10 s 𝔖ℨ^13|( -η_t + 1/2η_xxx - ζ1_)_x|^2
≤ ℒ( (η, ψ, ζ, θ) , (η, ψ, ζ, θ) ) < +∞ .
In a similar way, we can find the required bounds for v, p and q. Also, in a straightforward way we can show the boundedness of the control functions (h_1, h_2).
Eventually, we have the following bound:
e^6s𝔖^* - 5 s 𝔖ℨ^-13/2 (u , v, p, q) _[L^2(0,T; H^-1(0,L))]^4
+ e^18s𝔖^* - 17 s 𝔖ℨ^-57/2 (h_1, h_2) _[L^2((0,T)×ω)]^2 < +∞ ,
and this (u, v, p, q) is the unique solution to the linearized system (<ref>)–(<ref>) in the sense of transposition with the control functions h_1 and h_2. Moreover, by construction of solutions (<ref>) and (<ref>), it is clear that
p(0)=0, q(0)=0 in (0,L),
which is the required null-controllability result for our system (<ref>)–(<ref>).
Let us now define
(u^*, v^*, p^*, q^*): = e^18s 𝔖^* - 17 s 𝔖ℨ^-61/2 ( u, v, p, q ) ,
so that it satisfies (u^*(0), v^*(0), p^*(0), q^*(0) )= (0,0,0,0). Indeed, we observe that (using the expression of p from (<ref>))
p^* = e^18s 𝔖^* - 17 s 𝔖ℨ^-61/2 e^-12s𝔖^*+10s𝔖ℨ^13(ζ_t - 1/2ζ_xxx)_xx
= e^ 6 s 𝔖^* - 7 s 𝔖ℨ^-35/2(ζ_t - 1/2ζ_xxx)_xx,
so that by definitions of weight functions 𝔖^*, 𝔖 given by (<ref>), it is clear that p^*(0)=0. In a similar manner, we can show that this phenomenon holds for the functions u^*, v^* and q^* in (<ref>).
Let us now write the equations satisfied by (u^*, v^*, p^*, q^*), which are
u^*_t -1/2 u^*_xxx = h^*_1 1_ω + f^*_1 + ( e^18s 𝔖^* - 17 s 𝔖ℨ^-61/2)_t u in Q_T,
v^*_t + v^*_xxx = h^*_2 1_ω + f^*_2 + (e^18s 𝔖^* - 17 s 𝔖ℨ^-61/2)_t v in Q_T,
u^*(t,0) = u^*(t,L) = u^*_x(t,0) = 0 for t∈ (0,T),
v^*(t,0) = v^*(t,L) = v^*_x(t,L) =0 for t∈ (0,T) ,
u^*(0) = 0, v^*(0) = 0 in (0,L) ,
-p^*_t + 1/2p^*_xxx = u^* 1_𝒪 + f^*_3 - ( e^18s 𝔖^* - 17 s 𝔖ℨ^-61/2)_t p in Q_T,
-q^*_t - q^*_xxx = v^* 1_𝒪 + f^*_4 - ( e^18s 𝔖^* - 17 s 𝔖ℨ^- 61/2)_t q in Q_T,
p^*(t,0) = p^*(t,L) = p^*_x(t,L) = 0 for t∈ (0,T),
q^*(t,0) = q^*(t,L) = q^*_x(t,0) =0 for t∈ (0,T) ,
p^*(T)=0, q^*(T)=0 in (0,L),
where (h^*_1, h^*_2) := e^18s 𝔖^* - 17 s 𝔖ℨ^-61/2 (h_1 , h_2),
and they belong to [L^2((0,T)×ω)]^2, thanks to (<ref>).
Also, we have
(f^*_1, f^*_2, f_3^*, f_4^*): = e^18s 𝔖^* - 17 s 𝔖ℨ^-61/2 (f_1 ,f_2, f_3, f_4) ∈ [L^1(0,T; L^2(0,L))]^4 ,
since e^18s 𝔖^* - 17 s 𝔖ℨ^-61/2≤ C e^s𝔖ℨ^-1/2.
Beside the above, one can compute that
| ( e^18s 𝔖^* - 17 s 𝔖ℨ^-61/2)_t | ≤ C e^18s 𝔖^* - 17 s 𝔖ℨ^-57/2 .
Consequently,
| ( e^18s 𝔖^* - 17 s 𝔖ℨ^-61/2)_t u|
≤ C | e^18s 𝔖^* - 17 s 𝔖ℨ^-57/2u|
≤ C e^12s(𝔖^* - 𝔖) ℨ^-44/2| e^6s 𝔖^* - 5 s 𝔖ℨ^-13/2u| ,
and then by using the fact 𝔖^* ≤𝔖 and (<ref>), we deduce ( e^18s 𝔖^* - 17 s 𝔖ℨ^-61/2)_t u∈ L^2(0,T; H^-1(0,L)). In a similar way, we can show that the same phenomenon holds true for the related terms in the right hand side of the equations satisfied by v^*, p^*, q^*.
Altogether, we have shown that each source term in the set of equations (<ref>)–(<ref>) belongs to the space L^2(0,T; H^-1(0,L)). As a result, by applying <Ref>, we have
(u^*, v^*, p^*, q^*) ∈ [^0([0,T]; L^2(0,L))]^4 ∩ [L^2(0,T; H^1_0(0,L) )]^4 ,
and moreover,
(u^*, v^*, p^*, q^*)_ [^0([0,T]; L^2(0,L))]^4 ∩ [L^2(0,T; H^1_0(0,L) )]^4 < +∞ .
Therefore, the functions (u, v, p, q, h_1, h_2) belong to the space ℰ defined by (<ref>).
The proof is complete.
Analogous null-controllability result
Recall that, we have proved the controllability result in <Ref> with the choices f_i ∈ L^1(0,T; L^2(0,L)) for i=1,2,3,4 verifying (<ref>). We can derive a similar control result when f_i ∈ L^2(0,T; H^-1(0,L)) with few changes in the proof.
More specifically, we now consider a Banach space
“ℰ”
that contains the elements (u, v, p, q, h_1, h_2) verifying the first three conditions and the last condition of (<ref>), and in addition satisfying
e^s𝔖ℨ^-3/2(u_t - 1/2 u_xxx - h_1 1_ω) ∈ L^2(0,T; H^-1(0,L)),
e^s𝔖ℨ^-3/2(v_t + v_xxx - h_2 1_ω) ∈ L^2(0,T; H^-1(0,L)),
e^s𝔖ℨ^-3/2(-p_t + 1/2 p_xxx - u 1_) ∈ L^2(0,T; H^-1(0,L)) ,
e^s𝔖ℨ^-3/2(-q_t - q_xxx - v 1_) ∈ L^2(0,T; H^-1(0,L)) .
In this case, we have the following result.
Let s be a fixed parameter chosen according to <Ref> and let f_1, f_2, f_3, f_4 be functions satisfying
e^s𝔖ℨ^-3/2(f_1, f_2, f_3,f_4) ∈ [L^2(0,T; H^-1(0,L))]^4 .
Then, there exist controls (h_1, h_2) and a solution (u, v, p, q) to (<ref>)–(<ref>) in the space ℰ such that
p(0)=q(0)=0 in (0,L).
The proof follows almost the same lines as that of <Ref>, except that we now consider
the linear functional ℓ : 𝒬_0 →ℝ as
ℓ( (η, ψ, ζ, θ) )
= ⟨ f_1, η⟩_L^2(H^-1), L^2(H^1_0) + ⟨ f_2, ψ⟩_L^2(H^-1), L^2(H^1_0) +
⟨ f_3, ζ⟩_L^2(H^-1), L^2(H^1_0) + ⟨ f_4, θ⟩_L^2(H^-1), L^2(H^1_0) .
This verifies
| ⟨ f_1, η⟩_L^2(H^-1), L^2(H^1_0) + ⟨ f_2, ψ⟩_L^2(H^-1), L^2(H^1_0) +
⟨ f_3, ζ⟩_L^2(H^-1), L^2(H^1_0) + ⟨ f_4, θ⟩_L^2(H^-1), L^2(H^1_0)|
≤e^s𝔖ℨ^-3/2 (f_1, f_2, f_3,f_4) _[L^2(0,T; H^-1(0,L))]^4×e^-s𝔖ℨ^3/2 (η, ψ, ζ,θ) _[L^2(0,T; H^1_0(0,L))]^4
≤e^s𝔖ℨ^-3/2 (f_1, f_2, f_3,f_4) _[L^2(0,T; H^-1(0,L))]^4×e^-s𝔖ℨ^3/2 (η_x, ψ_x, ζ_x,θ_x) _[L^2(0,T; L^2(0,L))]^4
< +∞ ,
thanks to the choices of f_i (i=1,2,3,4) in (<ref>) and the observation inequality (<ref>).
We skip the other details of the proof since they are similar to those in the proof of <Ref>.
§ LOCAL NULL-CONTROLLABILITY OF THE EXTENDED NONLINEAR SYSTEM
In this section, we prove the main theorem of our paper,
that is, <Ref>; as explained in Section <ref>, this is equivalent to proving the local null-controllability of the extended system (<ref>)–(<ref>), which is precisely <Ref>.
To prove it, we use the following well-known result.
Let 𝒢_1, 𝒢_2 be two Banach spaces and 𝒴 : 𝒢_1 →𝒢_2 be a map satisfying 𝒴∈^1(𝒢_1; 𝒢_2). Assume that b_1∈𝒢_1, 𝒴(b_1)=b_2∈𝒢_2 and 𝒴^'(b_1):𝒢_1→𝒢_2 is surjective. Then, there exists δ>0 such that for every b∈𝒢_2 satisfying ‖ b - b_2‖_𝒢_2<δ, there exists a solution b̃∈𝒢_1 of the equation
𝒴(b̃)=b.
We apply <Ref> to prove the required local null-controllability result for the system (<ref>)–(<ref>). We take
𝒢_1 = ℰ , 𝒢_2 = [ℱ]^4 ,
where ℰ is defined by (<ref>) and
ℱ : = { f | e^s 𝔖ℨ^-1/2 f ∈ L^1(0,T; L^2(0,L)) } .
Now, define the map 𝒴 : 𝒢_1 →𝒢_2 by
𝒴(u,v,p,q,h_1,h_2)
= ( u_t - 1/2 u_xxx - 3 uu_x + 6vv_x - h_1 1_ω , v_t + v_xxx + 3uv_x - h_21_ω,
-p_t + 1/2 p_xxx - 3pu_x + 3qv_x - u1_ , -q_t -q_xxx + 6pv_x - v1_) .
* Let us first check that 𝒴∈^1(𝒢_1; 𝒢_2). In this regard, we denote
the space
𝒫 : = { y | e^18s 𝔖^* - 17 s𝔖ℨ^-61/2 y ∈ L^2(0,T; H^1_0(0,L)) } .
Then observe that, to prove 𝒴∈^1(𝒢_1; 𝒢_2),
it is enough to show that the map
(y,z) ∈ [ 𝒫 ]^2 ↦ yz_x ∈ℱ
is continuous.
Recall the construction of weight functions 𝔖^*, 𝔖 in (<ref>). Then, as we have obtained (<ref>), one has
36 s 𝔖^* (t) - 35 s 𝔖 (t) ≥ c_0 s ℨ(t) ,
for all t∈ (0,T] and for some c_0>0.
Consequently,
e^ s 𝔖ℨ^-1/2≤ e^36 s 𝔖^*- 34 s𝔖ℨ^-61× e^- c_0 s ℨ(t)ℨ^ 61-1/2≤ C e^36 s 𝔖^*- 34 s𝔖ℨ^-61
for some constant C>0.
Using (<ref>), we now compute the following: for any two functions y, z ∈𝒫,
‖ y z_x‖_ℱ = ∫_0^T e^s𝔖ℨ^-1/2‖ yz_x‖_L^2(0,L)
≤ C ∫_0^T e^18 s 𝔖^*- 17 s𝔖ℨ^-61/2‖ y‖_L^∞(0,L) e^18 s 𝔖^*- 17 s𝔖ℨ^-61/2‖ z_x‖_L^2(0,L)
≤
C ∫_0^T e^18 s 𝔖^*- 17 s𝔖ℨ^-61/2‖ y‖_H^1_0(0,L) e^18 s 𝔖^*- 17 s𝔖ℨ^-61/2‖ z‖_H^1_0(0,L)
≤ C ‖ e^18 s 𝔖^*- 17 s𝔖ℨ^-61/2 y‖_L^2(0,T; H^1_0(0,L))‖ e^18 s 𝔖^*- 17 s𝔖ℨ^-61/2 z‖_L^2(0,T; H^1_0(0,L)),
and thus the continuity of the map (<ref>) follows.
Once we have (<ref>), it is not difficult to conclude that 𝒴∈^1(𝒢_1; 𝒢_2).
* Next, we check that 𝒴^'(0,0,0,0,0,0) is surjective. In fact, we have
𝒴(0,0,0,0,0,0) = (0, 0, 0, 0), and 𝒴^'(0,0,0,0,0,0) : 𝒢_1 →𝒢_2 is given by
𝒴^'(0,0,0,0,0,0)(u,v,p,q,h_1,h_2)
= ( u_t - 1/2 u_xxx - h_1 1_ω , v_t + v_xxx - h_21_ω, -p_t + 1/2 p_xxx - u1_ , -q_t -q_xxx - v1_) ,
which is surjective due to the controllability result given by <Ref>.
Set b_1=(0,0,0,0,0,0), b_2 =(0,0,0,0) and consider
b=(ξ_1, ξ_2,0,0)∈𝒢_2, where (ξ_1,ξ_2) is the given external source term in (<ref>)–(<ref>) or in (<ref>). Then, according to <Ref>,
there is a δ>0 such that for given (ξ_1, ξ_2) verifying
‖(ξ_1, ξ_2,0,0)‖_𝒢_2 < δ ,
there exists a solution-control pair (u,v,p,q,h_1,h_2)∈_1=ℰ to the system (<ref>)–(<ref>). In particular, p(0)=q(0)=0 in (0,L). This completes the proof of <Ref> which implies the proof for <Ref>.
The above local null-controllability result can also be derived by considering the space
ℱ = { f | e^s 𝔖ℨ^-3/2 f ∈ L^2(0,T; H^-1(0,L)) }
instead of (<ref>).
§ CONCLUDING REMARKS
In this work, we have proved the existence of insensitizing controls for a system of two KdV equations, known as the Hirota-Satsuma system.
The insensitizing problem is reformulated to an equivalent null-control problem for the 4× 4 system
(<ref>)–(<ref>) with the action of only two localized controls.
To prove this result, we first established a Carleman estimate (see <Ref>) for the adjoint system (<ref>)–(<ref>) of the linearized problem (<ref>)–(<ref>). This helped us to find a suitable observability inequality, namely (<ref>), which led to the global null-controllability of the linearized model (<ref>)–(<ref>). Finally, using the inverse mapping theorem, we concluded the local null-controllability of the system (<ref>)–(<ref>), which implies the main result of this paper, i.e., <Ref>.
Let us now present some concluding remarks concerning the problem addressed in this work.
1. Choice of initial data. As in other insensitizing problems, the assumption of zero initial data in <Ref> (i.e., u_0=v_0=0) is roughly related to the fact that the system (<ref>)–(<ref>) is composed of forward and backward equations.
In fact, it has already been indicated in the work <cit.> (see also <cit.>) that it is difficult to treat every initial data while studying the insensitizing control problems.
2. Condition on observation region. The assumption 𝒪∩ω≠∅ is essential to prove a suitable Carleman estimate and hence the observability inequality (see <Ref> and <Ref> in this paper), which are the main ingredients for the proof of <Ref>. However, in <cit.> the authors proved that in the simpler case of heat equation, this condition is not necessary if one deals with an ε-insensitizing problem (that is, |∂ J_τ(u,v)/∂τ|_τ=0|≤ε). This remains an open problem for our case.
3. Less observation term in sentinel functional. Recall that, we have considered the following sentinel functional in this paper,
J_τ (u,v) = 1/2∬_(0,T)× |u|^2 + 1/2∬_(0,T)× |v|^2 .
If we drop any one of the two observations, then the extended linearized system (<ref>)–(<ref>) will not be controllable anymore by the methodology used in this paper. For instance, by choosing the sentinel functional
J_τ (u,v) = 1/2∬_(0,T)× |u|^2
would lead to the following extended system:
u_t -1/2 u_xxx - 3uu_x + 6vv_x = h_1 1_ω + ξ_1 in Q_T,
v_t + v_xxx + 3uv_x = h_2 1_ω + ξ_2 in Q_T,
-p_t + 1/2 p_xxx - 3pu_x + 3qv_x = u1_ in Q_T,
-q_t - q_xxx + 6pv_x = 0 in Q_T,
with the same initial-boundary conditions as in (<ref>)–(<ref>). But the associated linearized model to the above system is not null-controllable with two controls (h_1, h_2) due to the lack of linear coupling in the equation of q. Thus, the choice of (<ref>) is somewhat necessary to study the insensitizing control property for our system (<ref>), at least by means of the strategy developed in this paper.
4. Reduction of control input. It would be challenging to reduce the number of controls in the system (<ref>). For instance, if we drop the action of h_2 in the second equation of (<ref>), it is quite clear that the extended linearized system (<ref>)–(<ref>) cannot be controllable, since no control force influences the equations of v and q.
Therefore, dealing with only one control instead of two in the main system (<ref>) is really delicate and needs further attention. Generally speaking, for the purpose of studying insensitizing control problems, we need to tackle the controllability of an extended system where the number of controls is already smaller than the number of equations (in our case, we have two controls (h_1,h_2) acting in the 4× 4 system (<ref>)–(<ref>)).
Thus, reducing the number of controls in the main system would create more obstacles to achieving the required controllability of the extended system.
A similar phenomenon has also been addressed in <cit.> in the context of an insensitizing problem for a coupled fourth- and second-order parabolic system, and in <cit.> in the framework of hierarchic control problems.
§ ACKNOWLEDGEMENT
This work is partially supported by the Czech-Korean project GAČR/22-08633J.
| http://arxiv.org/abs/2306.01892v1 | 20230602195312 | A fully ab initio approach to inelastic atom-surface scattering | ["Michelle M. Kelley", "Ravishankar Sundararaman", "Tomás A. Arias"] | cond-mat.mtrl-sci | ["cond-mat.mtrl-sci", "cond-mat.other", "quant-ph"] |
Department of Physics, Cornell University, Ithaca, New York 14853, USA
Department of Materials Science & Engineering, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
Department of Physics, Cornell University, Ithaca, New York 14853, USA
We introduce a universal and fully ab initio theory for inelastic scattering of any atom from any surface, and apply the theory to helium scattering from Nb(100). The key aspect making our approach universal is a direct first-principles evaluation of the scattering atom–electron vertex. By correcting misleading results from current state-of-the-art theories, this fully ab initio approach will be critical in guiding and interpreting experiments that adopt next-generation, non-destructive atomic beam scattering.
A fully ab initio approach to inelastic atom–surface scattering
Michelle M. Kelley, Ravishankar Sundararaman, and Tomás A. Arias
July 31, 2023
===============================================================
As opposed to scattering electrons or x-rays, atomic and molecular beams are non-destructive surface probes that allow for the study of increasingly sensitive and delicate samples, pushing the scientific limits of the types of surfaces that can be feasibly examined <cit.>. Such low-energy (<0.1 eV) beams of atoms—which do not react with, penetrate, or damage samples and characteristically scatter a few angstrom above a surface—open up opportunities to study wider classes of materials including fragile biological specimens, polymers, glass, topological materials and even meta-stable or reactive surfaces that would otherwise be inaccessible <cit.>. Modern advances in atomic scattering techniques include the recently invented helium-atom microscopy and helium spin-echo spectroscopy, where helium is a popular choice of scatterer for reasons such as its small mass and chemical inertness <cit.>. Despite the promise of these innovative methods, there remain challenges such as a much lower detection-efficiency (∼5–6 orders of magnitude less than electron energy loss scattering (EELS) <cit.>), which can be remedied by integrating over many beam pulses but requires re-cleaning and maintaining the surface throughout the integration process.
Theoretical predictions of atomic scattering signatures are critical. Such predictions are not only necessary to guide the experimental data-collecting process with its low detection-efficiency, but also to interpret the resulting measurements. As we will show below, existing semi-empirical theories are often misrepresentative—downplaying or completely missing distinctive features while overemphasizing others—which makes guiding experiments and identifying the fundamental underlying processes extremely challenging. Unfortunately, no fully ab initio method, which computes scattering directly from first principles, has yet been available to guide and interpret atom–surface scattering experiments.
Advances in the theory behind atom–surface scattering have mostly centered around developing different model potentials for the distorted-wave Born approximation (DWBA) <cit.>. One particularly important development came after the first observation of the anomalous phonon resonance <cit.>, which is now understood as a feature common to metallic surfaces <cit.>. The interpretation of this surface-phonon resonance established that helium atoms scatter off of the surface free-electron density as opposed to individual surface atoms, meaning inelastic atom–surface scattering contains information on how electron–phonon interaction manifest at surfaces <cit.>. A theory for inelastic helium-atom scattering (HAS) incorporating the underlying electron–phonon interactions was finally formulated in cutting-edge work from 2011 <cit.>, which proposed that inelastic HAS probabilities are approximately proportional to electron–phonon coupling (EPC) strengths λ_𝐪ν and led to an important sequence of papers <cit.>. Experimentally accessing these EPC strengths is important because these fundamental parameters quantify most properties of conventional superconductors <cit.>, including T_c <cit.>. However, the idea that inelastic HAS probabilities are proportional to λ_𝐪ν is an oversimplification. That proportionality would imply that the underlying helium–electron interactions can be trivially factored out when computing inelastic HAS probabilities, but this is not the case. Moreover, experiments show different scattering behaviors depending on the choice of probe particle <cit.>, a result which demands a universal theory capable of discerning subtle differences among distinct types of scattering species. A complete understanding of the physics encompassed in atom–surface scattering ultimately requires a fully ab initio framework to calculate explicit interactions between the probe atom and surface electrons. These interactions comprise a fundamental component in the scattering diagram (see Fig. <ref>) that have been mostly ignored until now and never before computed directly from first principles.
Here, we introduce an entirely ab initio framework for inelastic atom–surface scattering. This work provides a new approach to predict intensities produced in HAS experiments and reports the first ab initio evaluation of the helium atom–electron vertex from Fig. <ref>. We apply our method to Nb(100) and compare to previously published HAS measurements to demonstrate the validity of this new approach <cit.>. Additionally, we demonstrate the superiority of our approach over two lower levels of theory. The first level corresponds to the most commonly used simplification of the distorted-wave Born approximation. The second level gives the current state-of-the-art model relating HAS probabilities to EPC strengths <cit.>. To provide a more generous comparison, we slightly amend the framework of the latter method to avoid approximations to electron–phonon matrix elements and compute these interactions explicitly from first principles instead <cit.>. While this work focuses on helium, we emphasize that our approach can be easily applied to any species of scattering atom or molecule.
Theoretical framework.—From quantum scattering theory, the helium–electron vertex from Fig. <ref> corresponding to a helium atom at 𝐫' can be written as
h_m𝐤+𝐐,n𝐤(𝐫')=∫ d𝐫 ψ^†_n𝐤(𝐫)Δ V_He(𝐫,𝐫')ψ_m𝐤+𝐐(𝐫),
where Δ V_He(𝐫,𝐫') gives the perturbing potential from the addition of a helium atom at 𝐫', and 𝐫 denotes the electronic coordinate. Here, we adopt a common convention to specify lateral coordinates using capital letters, i.e. 𝐑≡ r_x x̂+r_yŷ and 𝐐≡ q_x x̂+q_yŷ.
Figure <ref> shows a contour plot of Δ V_He(𝐫,𝐫') for a helium atom at its estimated turning point from the Nb(100) surface. To gain insight on the extent of the helium–electron interaction, Fig. <ref> also shows the perturbing potential and density of Fermi-level electrons averaged over planes. Using Eq. (<ref>), the HAS matrix element corresponding to the scattering diagram in Fig. <ref> for a helium atom at 𝐫' becomes
M_𝐐ν^abs/em(𝐫')=∑_n,m∫ d𝐤/(2π)^3g_n𝐤,m𝐤+𝐐^𝐐νh_m𝐤+𝐐 ,n𝐤(𝐫')
× f_n𝐤-f_n𝐤+𝐐/ϵ_n𝐤-ϵ_n𝐤+𝐐± (ω_𝐐ν+iη),
where g_n𝐤,m𝐤+𝐐^𝐐ν gives the electron–phonon vertex, f_n𝐤 indicates a Fermi distribution for an electronic state with energy ϵ_n 𝐤, and the plus or minus sign in the denominator of the matrix element is for phonon absorption or emission, respectively. The expression in Eq. (<ref>) is analogous to the familiar expression for the phonon linewidth but with two distinctions: one electron–phonon vertex is replaced with a helium–electron vertex, and we now must consider the full complex expression rather than just the imaginary component of a self-energy diagram.
The total scattering matrix element integrates Eq. (<ref>) over the helium coordinate 𝐫', weighted by the helium atom's wavefunction
ℳ_𝐐ν,abs/em^𝐤_i,𝐤_f=∑_𝐆δ (𝐐+𝐆-Δ𝐊_fi)
× ∫_Ω d𝐫' Φ^*𝐤_f_He(𝐫') Φ^𝐤_i_He(𝐫') M_𝐐ν^abs/em(𝐫').
The total inelastic scattering probability for a helium atom (𝐤_i→𝐤_f) that absorbs or emits one phonon is ultimately calculated from
Absorption: |ℳ_𝐐ν,abs^𝐤_i,𝐤_f|^2× n(ω_𝐐ν)
Emission: |ℳ_𝐐ν,em^𝐤_i,𝐤_f|^2×(n(ω_𝐐ν)+1),
where n(ω_𝐐ν) provides the boson occupancy for the phonon mode involved in the collision.
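To make the role of the occupancy factors concrete, the following minimal sketch evaluates these one-phonon probabilities for a single mode; the squared matrix element, phonon energy, and temperature used here are placeholder values chosen only for illustration, not quantities computed in this work.

```python
import numpy as np

def bose_occupancy(omega_meV, T_K):
    """Bose-Einstein occupancy n(omega) for a phonon of energy omega (meV) at temperature T (K)."""
    kB_meV_per_K = 0.0861733
    return 1.0 / np.expm1(omega_meV / (kB_meV_per_K * T_K))

def one_phonon_probabilities(M_squared, omega_meV, T_K):
    """Relative one-phonon probabilities: |M|^2 n(omega) for absorption, |M|^2 (n(omega)+1) for emission."""
    n = bose_occupancy(omega_meV, T_K)
    return M_squared * n, M_squared * (n + 1.0)

# Placeholder inputs for illustration only.
p_abs, p_em = one_phonon_probabilities(M_squared=1.0, omega_meV=10.0, T_K=300.0)
print(f"relative absorption ~ {p_abs:.3f}, relative emission ~ {p_em:.3f}")
```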
Surfaces with low corrugation.—For surfaces with low corrugation like Nb(100), the lateral coordinate in Eq. (<ref>) factors out through M_𝐐ν(𝐫')=M_𝐐ν(z'), and the helium atom wavefunction can be approximated with Φ^𝐤_He(𝐫')≈ e^i 𝐊·𝐑'ϕ_𝐤(z').
Now, the integral over the lateral coordinate simplifies conveniently, reducing to sinc functions for systems of orthorhombic symmetry. As a result, the last remaining piece to evaluate is an integral of M_𝐐ν(z') over the z-coordinate of the helium atom.
The helium atom's z-coordinate is the central variable determining the atom's interaction with the Nb(100) surface. The black curve in the top-left panel of Fig. <ref> shows the interaction energy profile of a helium atom as a function of distance from the surface. We investigate the position-dependent HAS matrix element M_𝐐ν(z') by sampling a helium atom at five distances spaced uniformly from the surface (Fig. <ref>; violet vertical lines) and computing the corresponding HAS signal intensity |M_𝐐ν(z')|^2 for each case (Fig. <ref>; bottom panel). Because helium scatters from the surface electron-density, the HAS matrix element weakens as the helium atom recedes from the surface. Indeed, we find a well-defined exponential decay constant of 3.8 Å^-1 for the position-dependent HAS signal (Fig. <ref>; top-right panel). The interaction clearly strengthens as helium approaches the surface, but the total scattering matrix element requires the helium-atom wavefunction to complete the integration in Eq. (<ref>).
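The decay constant quoted above can be extracted from sampled intensities by a linear fit of ln|M_𝐐ν(z')|^2 versus z'. The sketch below illustrates that fitting procedure on synthetic values; the numbers are placeholders, not the computed Nb(100) matrix elements.

```python
import numpy as np

# Synthetic |M(z)|^2 samples at five helium-surface distances (angstrom), for illustration only.
z = np.array([2.6, 3.0, 3.4, 3.8, 4.2])
M_squared = 1.0e4 * np.exp(-3.8 * z)   # built with a decay constant of 3.8 1/angstrom

# Linear fit of ln|M|^2 = -kappa*z + c recovers the decay constant kappa.
slope, intercept = np.polyfit(z, np.log(M_squared), 1)
print(f"fitted decay constant: {-slope:.2f} 1/angstrom")
```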
The helium atom's z-wavefunction can be approximated by solving a one-dimensional Schrödinger equation with a potential imposed by the helium atom's interaction energy with the surface V(z')=Δ E_He(z'). Figure <ref> shows the interaction energy, example incoming and outgoing z-wavefunctions, and the integrand of HAS matrix element from Eq. (<ref>) as a function of distance from the surface. These results reveal that the HAS signal is dominated by the contribution of the helium atom at its estimated turning point (z_t≈3.4 Å for E_i≈ 18 meV). We find this to be the case for all trial wavefunctions that we have considered, indicating that relative HAS intensities can be well-estimated from
ℳ_𝐐ν^𝐤_i,𝐤_f≈ ∑_𝐆δ(𝐐+𝐆-Δ𝐊_fi)
× M_𝐐ν(z_t) sinc(Δ K_x R_x/2)sinc(Δ K_y R_y/2).
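Under these approximations the relative intensity reduces to the squared turning-point matrix element times two sinc factors. The sketch below evaluates that product; the matrix element, momentum transfer, and lateral cell dimensions are placeholder inputs chosen only for illustration.

```python
import numpy as np

def relative_has_intensity(M_zt, dKx, dKy, Rx, Ry):
    """|M(z_t)|^2 * sinc(dKx*Rx/2)^2 * sinc(dKy*Ry/2)^2, with sinc(x) = sin(x)/x."""
    sx = np.sinc(dKx * Rx / (2.0 * np.pi))   # np.sinc(u) = sin(pi*u)/(pi*u), hence the rescaling
    sy = np.sinc(dKy * Ry / (2.0 * np.pi))
    return abs(M_zt) ** 2 * sx ** 2 * sy ** 2

# Placeholder kinematics (lateral lattice constant ~3.30 angstrom, as quoted in the text).
print(relative_has_intensity(M_zt=0.05, dKx=0.3, dKy=0.0, Rx=3.30, Ry=3.30))
```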
Computational methods.—To study the Nb(100) surface, we perform density-functional theory (DFT) calculations within the pseudopotential framework using open-source planewave software JDFTx <cit.>. We apply norm-conserving pseudopotentials <cit.> and calculate the electronic states for the outer electrons of niobium (4p^65s^24d^3) and helium (1s^2) at an effective temperature of 20 mH using a Fermi function to determine electronic occupancies. The exchange-correlation functional is approximated with the Perdew–Burke–Ernzerhof functional, as revised for solids (PBEsol) <cit.>. All calculations employ planewave cutoff energies of 30 H and 200 H for the electronic wavefunctions and charge density, respectively. We calculate a 10-layer slab of niobium with (100) surface termination in a cell 80 a_0 long along the surface normal direction and truncate Coulomb potentials to increase the accuracy of calculated surface properties <cit.>. We calculate a lateral lattice constant for Nb(100) at 3.30 Å, in good agreement with the experimental measurement of 3.29 Å <cit.>. Interatomic force constant matrices and helium interactions are calculated in a 3×3×1 supercell with a 𝐤-space sampling density equivalent to the unit cell's sampling of 12×12×1 𝐤-points. Finally, we transform into a maximally-localized Wannier function basis to interpolate helium–electron and electron–phonon scattering processes at arbitrary 𝐤 and perform a dense Monte Carlo sampling over the Brillouin zone to accurately evaluate scattering integrals <cit.>.
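The dense Monte Carlo Brillouin-zone sampling mentioned above amounts to averaging an integrand over uniformly drawn k-points. The sketch below illustrates the idea on a toy two-dimensional integrand; it is a schematic of the sampling strategy only, not the production workflow used with the Wannier-interpolated matrix elements.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_integrand(kx, ky):
    """Placeholder for a scattering integrand evaluated at a 2D k-point."""
    return np.cos(kx) ** 2 * np.sin(ky) ** 2

# Uniform Monte Carlo sampling of a square Brillouin zone [-pi, pi) x [-pi, pi).
n_samples = 200_000
k = rng.uniform(-np.pi, np.pi, size=(n_samples, 2))
zone_area = (2.0 * np.pi) ** 2
estimate = zone_area * toy_integrand(k[:, 0], k[:, 1]).mean()
print(f"Monte Carlo estimate of the zone integral: {estimate:.4f}")
```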
Results and discussion.—Figure <ref> shows predictions for inelastic HAS intensities at three levels of theory and compares the predictions to inelastic HAS measurements for Nb(100) <cit.>. The bottom panel depicts the least refined estimate that merely looks at the top-layer phonon density-of-states, after inserting 60 bulk dynamical matrix layers into the 10-layer Nb(100) slab, for shear vertical (SV) and longitudinal (L) polarizations. These two polarizations are the ones most commonly measured in HAS experiments and included in the distorted-wave Born approximation <cit.>. The middle panel of Fig. <ref> illustrates the current state-of-the-art model estimating inelastic HAS probabilities to be proportional to surface EPC strengths λ_𝐐ν <cit.>, but we refine this model to improve predictions by calculating electron–phonon matrix elements ab initio <cit.>. The top panel of Fig. <ref> gives the highest level of theory, corresponding to the expression from Eq. (<ref>), which now considers the full scattering diagram from Fig. <ref> and incorporates both the electron–phonon and helium–electron vertices ab initio.
Before assessing the predictions given at each level of theory, it is necessary to first understand HAS measurements in order to interpret the data. The density of measured points reflects the detectability of phonon modes, influenced by the intrinsic availability of the modes and experimental conditions. Atoms are big and slow relative to electrons, and inelastic scattering signals will be “cut off” beyond certain values of phonon energy and wavevector because the atom is unable to excite those modes <cit.>. This cut-off factor is not absolute and depends on kinematic factors of the scattering atom that will affect the resulting signal-to-noise ratio <cit.>. Inelastic intensities are strongest near Γ, and data collection proceeds along the observed phonon branches until the signal becomes undetectable <cit.>. Hence, data points abruptly stopping along a branch indicate the locations where the signal became undetectable.
As expected, the top-layer phonon density-of-states shown in the bottom panel in Fig. <ref> provides the crudest estimate to the inelastic HAS signal. This prediction strikingly misses the lower measured mode, incorrectly predicts the strongest signals at the edges of the surface Brillouin zone (SBZ), and overall illustrates why examining merely the phonon density-of-states conveys an inadequate picture of inelastic HAS signals. Next, the prediction given by the surface EPC strengths successfully captures both of the measured surface-phonon modes, but the signal predicted for both modes continues after the measured data stop, there appears to be spurious signal flaking off the upper mode at low wavevectors, and the signal predicted for the upper mode notably increases after the measured data stop and is strongest near the edges of the SBZ where no data have been measured. Finally, all of these incorrect features from the above approaches are corrected in the top panel of Fig. <ref> that shows the fully ab initio HAS analysis. Upon properly including the helium–electron interaction, the predicted signal of the lower mode decays in remarkable agreement with the measured data, there is hardly any extra signal between the two measured modes, and even though the fully ab initio method still predicts some signal after the data stop in the upper branch, the predicted signal nonetheless decays where the data stop and the most intense regions align well with the measurements.
The analysis above demonstrates the critical importance of a first principles evaluation of the helium atom–electron vertex in predicting and understanding the inelastic helium-atom scattering process. This work provides an adaptable, universal framework for computing inelastic atom–surface scattering and produces results of high accuracy. This theoretical approach will provide the needed guidance for the performance and interpretation of next-generation experiments using atomic beam scattering as a non-destructive probe of sensitive surfaces.
We would like to thank Caleb Thompson, Michael Van Duinen, and Steven Sibener for useful discussions regarding helium scattering experiments. This work was supported by the US National Science Foundation under award PHY-1549132, the Center for Bright Beams.
|
http://arxiv.org/abs/2306.05771v1
|
20230609092044
|
General theory of the viscosity of liquids and solids from nonaffine particle motions
|
[
"Alessio Zaccone"
] |
cond-mat.soft
|
[
"cond-mat.soft",
"cond-mat.dis-nn",
"cond-mat.mtrl-sci",
"cond-mat.stat-mech",
"physics.chem-ph"
] |
|
http://arxiv.org/abs/2306.09552v1
|
20230615234635
|
Retrospective: EIE: Efficient Inference Engine on Sparse and Compressed Neural Network
|
[
"Song Han",
"Xingyu Liu",
"Huizi Mao",
"Jing Pu",
"Ardavan Pedram",
"Mark A. Horowitz",
"William J. Dally"
] |
cs.AR
|
[
"cs.AR"
] |
EIE proposed to accelerate pruned and compressed neural networks, exploiting weight sparsity, activation sparsity, and 4-bit weight-sharing in neural network accelerators. Since it was published at ISCA'16, it has opened a new design space to accelerate pruned and sparse neural networks and spawned many algorithm-hardware co-designs for model compression and acceleration, both in academia and in commercial AI chips. In retrospect, we review the background of this project, summarize the pros and cons, and discuss new opportunities where pruning, sparsity, and low precision can accelerate emerging deep learning workloads.
§ WHAT WE DID WELL
We started this project when deep learning accelerators were bottlenecked by memory footprint. Computation is cheap and memory is expensive. The existing algorithm and hardware stacks accelerated the inference of a neural network “as is.” We asked: can we compress the model first? We then developed the “Deep Compression” <cit.> technique, which can compress the weights of a neural network by an order of magnitude through pruning and quantization. Since pruned weights become zero, and zero multiplied by anything is still zero, we can potentially save both computation and memory. However, the resulting neural network is sparse and irregular, which conflicts with massively parallel computing, and runs inefficiently on general-purpose hardware.
EIE demonstrated that special-purpose hardware can make it cost-effective to do sparse operations with matrices that are up to 50% dense, whereas in software the density must be much less than 1% to overcome the overhead of a sparse package.
EIE exploits both weight sparsity and activation sparsity. It stores the weights in compressed sparse column (CSC) format, parallelizes the computation by interleaving matrix rows over the processing elements, and detects the leading non-zeros in the activations. It not only saves energy by skipping zero weights but also saves cycles by not computing them. EIE supports fine-grained sparsity, which allows pruning to achieve a higher pruning ratio.
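The sketch below mimics this computation in plain NumPy/SciPy: the weight matrix is stored in compressed sparse column format and a column is touched only when the corresponding input activation is non-zero. It is a functional illustration of the data flow, not a model of EIE's processing elements, quantization, or timing.

```python
import numpy as np
from scipy.sparse import csc_matrix

def sparse_matvec_skip_zeros(W_csc, x):
    """y = W @ x, iterating only over non-zero activations and stored (non-zero) weights."""
    y = np.zeros(W_csc.shape[0], dtype=x.dtype)
    for j in np.flatnonzero(x):                       # skip zero activations entirely
        start, end = W_csc.indptr[j], W_csc.indptr[j + 1]
        rows = W_csc.indices[start:end]               # rows of the non-zero weights in column j
        y[rows] += W_csc.data[start:end] * x[j]       # multiply-accumulate on non-zeros only
    return y

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 16)) * (rng.random((8, 16)) < 0.1)   # ~90% weight sparsity
x = rng.standard_normal(16) * (rng.random(16) < 0.3)             # ~70% activation sparsity
assert np.allclose(sparse_matvec_skip_zeros(csc_matrix(W), x), W @ x)
```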
EIE adopted aggressive weight quantization (4-bit) to save memory footprint. To maintain accuracy, EIE decodes the weights to 16 bits and uses 16-bit arithmetic. This W4A16 approach (4-bit weight, 16-bit activation) is different from the conventional W8A8 approach. Such a design has been reborn in large language models (LLMs). Single-batch text generation with these models is dominated by matrix-vector multiplication, the same as in EIE. It is memory-bound, and the weight memory is the bottleneck, not the activation; 4-bit weights and 16-bit activations become attractive to save memory and maintain accuracy at the same time, as adopted by many software LLM inference engines.[4-bit LLM projects such as: GPTQ (https://arxiv.org/pdf/2210.17323.pdf), AWQ (https://arxiv.org/pdf/2306.00978.pdf), llama.cpp (https://github.com/ggerganov/llama.cpp), MLC LLM (https://github.com/mlc-ai/mlc-llm)] However, these software solutions use linear integer weights rather than a k-means codebook, to make the weight decoding simpler and the arithmetic cheaper.
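A minimal sketch of the W4A16 idea with weight sharing: weights are replaced by 4-bit indices into a 16-entry codebook obtained by one-dimensional k-means and decoded to 16-bit values before the arithmetic. The clustering details and sizes are illustrative and do not reproduce EIE's exact scheme.

```python
import numpy as np

def kmeans_codebook(weights, n_codes=16, iters=20):
    """1-D k-means over weight values; returns a 16-entry codebook and 4-bit indices."""
    codes = np.linspace(weights.min(), weights.max(), n_codes)      # linear initialization
    for _ in range(iters):
        idx = np.abs(weights[:, None] - codes[None, :]).argmin(axis=1)
        for c in range(n_codes):
            if np.any(idx == c):
                codes[c] = weights[idx == c].mean()
    return codes.astype(np.float16), idx.astype(np.uint8)

rng = np.random.default_rng(0)
w = rng.standard_normal(1024).astype(np.float32)
codebook, idx = kmeans_codebook(w)              # store: 4-bit indices plus a tiny codebook
w_decoded = codebook[idx]                       # decode to 16-bit values before the MACs
print("mean absolute quantization error:", np.abs(w - w_decoded.astype(np.float32)).mean())
```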
EIE demonstrates the opportunity for accelerator and neural network co-design. There’s plenty of room at the top to compress the neural network before accelerating it (Figure <ref>). Deep Compression and EIE show the benefit of refactoring the design stack.
§ LATER WORK
EIE generated a new wave of AI accelerator design by opening a new dimension: sparsity.
Cambricon-X <cit.> proposes a prefix-sum-based indexing module and supports sparse CNNs.
SCNN <cit.> utilizes outer product and scatter-add to process sparse CNN while maximizing the input data reuse.
Pragmatic <cit.> skips bit-level zeros and eliminates ineffectual computations.
UCNN <cit.> generalizes the sparsity problem to the repetition of weights with any value instead of zero.
Eyeriss V2 <cit.> proposes a flexible interconnect and PE architecture to accelerate sparse CNN.
ExTensor <cit.> hierarchically eliminates the computation in sparse tensor computations using an efficient intersection architecture.
SIGMA <cit.> proposes flexible interconnect to perform the distribution/reduction of sparse data for DNN training.
The Sparse Abstract Machine<cit.> targets sparse tensor algebra to reconfigurable and fixed-function spatial dataflow accelerators.
EIE had substantial impacts on commercial AI chip design, leveraging pruning and sparsity for higher efficiency. NVDLA <cit.> gates the pruned weights to save energy. NVIDIA Sparse Tensor Core <cit.> adopt structured 2:4 sparsity to speed up pruned models. Samsung NPU <cit.> uses a priority-based search algorithm to
skip zeros in activations. Ambarella CV22 <cit.> supports both structured and unstructured weight sparsity.
§ LESSONS
Although EIE started sparse acceleration, its technique is not easily applied to arrays of vector processors. Several improved designs solved this issue, including the Sparse Tensor Core <cit.>, which adopted structured (N:M) sparsity so that one PE acts as multiple effective PEs in a regular manner. Another improvement is load-balance-aware pruning <cit.> to avoid PE starvation.
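For reference, the sketch below applies the 2:4 structured-sparsity pattern to a weight matrix: within every group of four consecutive weights along a row, the two smallest-magnitude entries are zeroed. It shows only the pruning pattern, not the hardware kernels or load-balance-aware retraining.

```python
import numpy as np

def prune_2_of_4(W):
    """Zero the two smallest-magnitude weights in every group of four along each row."""
    rows, cols = W.shape
    assert cols % 4 == 0, "row length must be a multiple of 4"
    groups = W.reshape(rows, cols // 4, 4).copy()
    smallest_two = np.argsort(np.abs(groups), axis=-1)[..., :2]   # indices of the two smallest
    np.put_along_axis(groups, smallest_two, 0.0, axis=-1)
    return groups.reshape(rows, cols)

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))
W_pruned = prune_2_of_4(W)
print((W_pruned.reshape(-1, 4) != 0).sum(axis=1))   # every group of four keeps exactly two weights
```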
While EIE's special-purpose hardware is orders of magnitude more efficient than a software implementation of sparse M × V, the overhead of traversing the CSC structure is non-zero. One PE performs only one MAC, but is associated with many overhead structures, including pointer read, sparse matrix access, leading non-zero detector, etc. In EIE, the weight and index are both 4bit giving a 50% storage overhead. Other designs use structured sparsity or coarse-grained block sparsity to reduce storage and control overhead.
EIE only accelerates fully connected layers. Later, SCNN<cit.>, Cambricon-X <cit.>, and Eyeriss-V2<cit.> also accelerated sparse convolution layers. EIE stores all the weights in SRAM. Commercially, Cerebras tried this path of putting everything in SRAM. This setting is perfect for vision models, but not easy for LLMs: the number of parameters of recent LLMs ranges from 10 billion to 100 billion, making it difficult to fit them in SRAM.
§ NEW OPPORTUNITIES
DNN architecture has witnessed rapid change. After EIE, we developed hardware-aware neural architecture search (NAS) techniques, ProxylessNAS <cit.> and Once-for-all <cit.> that design small and fast models before model compression.
The first principle of efficient AI computing is to be lazy: avoid redundant computation, quickly reject the work, or delay the work. We show a few more examples.
After compressing the weights, the activation becomes the bottleneck. Therefore, we developed the MCUNet family<cit.> that aggressively shrinks the activation for TinyML. MCUNet performs not only ImageNet classification but also detection with only 256KB SRAM and 1MB Flash on a microcontroller. By sparse update and low precision, we can even do on-device training under 256KB memory<cit.>.
Generative AI: spatial sparsity persists in image editing and image in-painting; users don't edit the whole image. So rather than generating the full image, sparsely generating only the edited regions <cit.> can speed up inference.
The Transformer became a major neural architecture after EIE, and the FC layer is back again. The attention layer has no weights to prune. However, not all tokens are useful: SpAtten <cit.> proposes cascade token pruning, gradually removing redundant tokens with the smallest attention scores. It also exploits “progressive quantization,” which lazily fetches only the MSBs to run inference; if the confidence is low, it then fetches the LSBs.
Temporal sparsity exists in videos. Adjacent frames are similar. Rather than using expensive 3D convolution, temporal shift <cit.> can efficiently exploit temporal redundancy with zero FLOPs.
Point cloud is spatially sparse. TorchSparse <cit.> adaptively groups sparse matrices to trade computation for regularity. PointAcc <cit.> employs a sorting array to perform sparse input-output mapping and avoid zero computation.
We envision future AI models will be sparse at various granularity and structures.
Co-designed with specialized accelerators, sparse models will become more efficient and accessible.
§ ACKNOWLEDGEMENTS
We thank Zhekai Zhang and Yujun Lin for the discussions and collecting data for the figure.
|
http://arxiv.org/abs/2306.06341v1
|
20230610045258
|
Mapping Molecular Hamiltonians into Hamiltonians of Modular cQED Processors
|
[
"Ningyi Lyu",
"Alessandro Miano",
"Ioannis Tsioutsios",
"Rodrigo Cortinas",
"Kenneth Jung",
"Yuchen Wang",
"Zixuan Hu",
"Eitan Geva",
"Sabre Kais",
"Victor S. Batista"
] |
quant-ph
|
[
"quant-ph"
] |
We introduce a general method based on the operators of the Dyson-Masleev transformation to map the Hamiltonian of an arbitrary model system into the Hamiltonian of a circuit Quantum Electrodynamics (cQED) processor. Furthermore, we introduce a modular approach to program a cQED processor with components corresponding to the mapping Hamiltonian. The method is illustrated as applied to quantum dynamics simulations of the Fenna-Matthews-Olson (FMO) complex and the spin-boson model of charge transfer. Beyond applications to molecular Hamiltonians, the mapping provides a general approach to implement any unitary operator in terms of a sequence of unitary transformations corresponding to powers of creation and annihilation operators of a single bosonic mode in a cQED processor.
§ INTRODUCTION
The development of quantum computing simulations for modeling chemical systems is a subject of immense interest. Recent studies have already explored the potential of quantum computing as applied to electronic structure calculations,<cit.>
quantum dynamics simulations<cit.>
as well as simulations of molecular spectroscopy.<cit.>
Current quantum computing facilities are often called noisy intermediate-scale quantum (NISQ) computers<cit.> due to their intrinsic limitations; they include architectures based on superconducting circuits,<cit.> trapped ions,<cit.> and nuclear magnetic resonance <cit.>.
To achieve moderate accuracy and reliability in spite of noise and decoherence, simulations of chemical systems have relied on hybrid quantum-classical algorithms, including the variational quantum eigensolver (VQE) method<cit.> and quantum machine learning methods<cit.> where only part of the computation is performed on the quantum computer, sometimes applied with the aid of error mitigation techniques,<cit.> while the rest of the calculation is run on a conventional computer.
New hardware settings that can fundamentally mitigate the aforementioned errors of quantum computing architectures are necessary to enable fault-tolerant quantum computations of chemical systems. A promising paradigm-shifting technology involves the development of bosonic circuit Quantum Electrodynamics (cQED) processors where information is stored as microwave photons in the unbounded Hilbert space of superconducting oscillator modes. The non-linearity necessary for control and readout procedures is provided by quantum circuits based on ancillary Josephson junctions.<cit.> Bosonic cQED devices offer favorable platforms for quantum error correction codes as a result of the well understood dominant source of errors in oscillator modes, namely, the single-photon loss. <cit.> Moreover, encoding information in multiple levels of an oscillator can be more efficient when compared to conventional cQED architectures where the storage of information utilizes only the first two-levels of a transmon.
cQED bosonic devices have already been shown to offer unparalleled capabilities for simulations of vibronic spectra of small molecules such as water, ozone, nitrogen dioxide and sulfur dioxide, when mapping the calculation of Franck-Condon factors into a Gaussian boson sampling problem.<cit.> The corresponding calculations on a conventional quantum computer would require 8 qubits and 𝒪(10^3) gates, exceeding the capabilities of current technologies. Therefore, it is natural to anticipate that cQED bosonic devices could be applied to solve other classes of interesting problems in chemistry and offer advantages beyond the capabilities of conventional quantum computers. However, a general approach to design a quantum circuit to simulate an arbitrary molecular system has yet to be established. Here, we address the fundamental question regarding how to map the Hamiltonian of a molecular system into the corresponding Hamiltonian of a programmable cQED bosonic simulator. We introduce the single-bosonic-mode (SBM) mapping, allowing us to represent any square matrix as a polynomial of powers of creation and annihilation operators of a bosonic mode. The mapping thus provides a general protocol for transforming any Hamiltonian into the Hamiltonian of a cQED device, since the Hamiltonian of a cQED device can be written as a polynomial of powers of creation and annihilation operators of a single bosonic mode<cit.>. Additionally, we introduce a modular approach to program a cQED processor according to the SBM mapping Hamiltonian. In particular, we identify circuits with Superconducting Nonlinear Asymmetric Inductive eLements (SNAILs)<cit.> that could be coupled by beam-splitters, or by nearly-quartic elements<cit.> for programming one-qubit gates and the two-qubit controlled-Z gate that enable universal computing.
We illustrate the SBM mapping in conjunction with SNAIL gates as applied to model simulations of quantum dynamics in the photosynthetic Fenna-Matthews-Olson (FMO) complex, a system that mediates the excitation energy transfer from light-harvesting chlorosomes to the bacterial reaction center. Additionally, we illustrate the SBM mapping as applied to simulations of charge or energy transfer processes with dissipation according to the spin-boson model. Beyond applications to molecular Hamiltonians, the SBM mapping provides a general approach for implementing any unitary operator in terms of a sequence of unitary transformations corresponding to powers of creation and annihilation operators of single-bosonic modes in a cQED processor.
The paper is organized as follows. Section <ref> introduces the SBM mapping method. Section <ref> provides the implementation of one-qubit gates with capacitively shunted SNAILs, and the two-qubit controlled-Z gate with nearly-quartic elements. Section <ref> demonstrates the SBM mapping with SNAIL circuit implementation as applied to quantum dynamics simulations of a series of models typically employed to simulate charge and energy transfer processes. Conclusions are outlined in Section <ref>.
§ SINGLE-BOSONIC MODE MAPPING
The SBM mapping transforms an arbitrary Hermitian operator,
Ĥ=∑_α=0^k-1∑_α'=0^k-1H_αα'|α⟩⟨α'|,
in the basis set {|α⟩} of the system of interest, into the following polynomial of products of powers of operators of a single bosonic mode (â, â^†), as follows:
Ĥ_sbm=∑_m=0^k-1∑_n=0^k-1H_nmP̂_nm.
where
P̂_nm≡1/(k-1)!^2√(m!/n!)(â^†)^n Γ̂_k^k-1 (â^†)^k-1-m.
with
Γ̂_k=((k-1)-N̂)â,
where N̂=â^†â. Appendix <ref> shows that Γ̂_k corresponds to the operator Ŝ^†_+ of the Dyson-Maleev transformation.<cit.>
To derive the mapping introduced by Eq. (<ref>), we map the operators |α⟩⟨α'| introduced by Eq. (<ref>) into the corresponding transition operators |m⟩⟨ n| in the basis of the 1-dimensional harmonic oscillator (HO), satisfying â|m⟩=√(m)|m-1⟩, â^†|m⟩=√(m+1)|m+1⟩.
We can verify that
|0⟩⟨ k-1|=Γ̂_k^k-1/(k-1)!^3/2,
in the subspace of the first k eigenstates of the HO (Appendix <ref>). Therefore, Γ̂_k^k-1/(k-1)!^3/2 effectively acts as the transition operator |0⟩⟨ k-1|. As shown in Appendix <ref>, the definition of Γ̂_k leads to a block diagonal representation of operators. For example, for k=3, we obtain:
Γ̂_3^2/2^3/2=(
[ 0 0 1 0 0 0 0 …;
  0 0 0 0 0 0 0 …;
  0 0 0 0 0 0 0 …;
  0 0 0 0 0 √(10) 0 …;
  0 0 0 0 0 0 3√(15) …;
  0 0 0 0 0 0 0 …;
  0 0 0 0 0 0 0 …;
  … … … … … … … ⋱ ]),
showing that the matrix representation of |0⟩⟨ 2| is indeed recovered from the top 3× 3 diagonal block.
Next, substituting Eq. (<ref>) into the expression of | n ⟩⟨ m |, and considering that | n ⟩ = (â^†)^n/√(n!)| 0 ⟩ and ⟨ m | = ⟨ k-1 | (â^†)^k-m-1√(m!/(k-1)!), we obtain that any operator P̂_nm≡ |n⟩⟨ m|, with n,m < k, can be represented according to Eq. (<ref>).
Note that Eq. (<ref>) is an operator of a single bosonic mode, which corresponds to a single k-qudit gate for the mapping of a k× k Hamiltonian. In particular, when k=2, Eq. (<ref>) provides the mapping of any 2× 2 hermitian operator into an operator of a single bosonic mode, allowing for construction of any bosonic 1-qubit gate with readily available superconducting devices.
Appendix <ref> describes the relationship between the SBM mapping and the established Dyson-Maleev (DM) and Holstein-Primakoff (HP) mappings, used to map spin operators into bosonic operators. The DM and HP mappings use one bosonic mode per spin site so they do not allow for the possibility of a single k-qudit gate. Furthermore, although both DM and HP mappings use bosonic operators, they are not able to construct the well-restricted bosonic Hamiltonian necessary for a quantum computing scheme. The DM mapping uses non-Hermitian bosonic operators which do not directly transfer to be unitary quantum gates upon exponentiation, while the operator square root term in the HP mapping is known to be hard to represent without a perturbative approach, which restricts implementation into quantum gates.
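The construction above can be checked numerically in a truncated Fock space. The sketch below builds â, Γ̂_k, and P̂_nm as matrices and verifies that the upper-left k× k block of P̂_nm equals the elementary matrix |n⟩⟨ m|; the Fock-space cutoff is an arbitrary choice made only for this illustration.

```python
import numpy as np
from math import factorial, sqrt

def annihilation(dim):
    """Annihilation operator a in a Fock space truncated at dimension dim."""
    return np.diag(np.sqrt(np.arange(1, dim)), k=1)

def P_nm(n, m, k, dim=12):
    """P_nm = (1/(k-1)!^2) sqrt(m!/n!) (a†)^n Gamma_k^(k-1) (a†)^(k-1-m), with Gamma_k = ((k-1)-N) a."""
    a = annihilation(dim)
    ad = a.conj().T
    Gamma_k = ((k - 1) * np.eye(dim) - ad @ a) @ a
    op = np.linalg.matrix_power(ad, n) @ np.linalg.matrix_power(Gamma_k, k - 1) \
         @ np.linalg.matrix_power(ad, k - 1 - m)
    return sqrt(factorial(m) / factorial(n)) / factorial(k - 1) ** 2 * op

k = 3
for n in range(k):
    for m in range(k):
        block = P_nm(n, m, k)[:k, :k]
        expected = np.zeros((k, k))
        expected[n, m] = 1.0
        assert np.allclose(block, expected)
print(f"upper-left {k}x{k} block of P_nm equals |n><m| for all n, m < {k}")
```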
§ MODULAR QUANTUM CIRCUITS
This section introduces a modular design of quantum circuits based on driven Superconducting Nonlinear Asymmetric Inductive eLements (SNAIL) with a capacitive shunt,<cit.> parametrized according to SBM Hamiltonians.
We begin by introducing the SBM mapping
of 2× 2 hermitian matrices describing 1-qubit gates.
The operator Γ_k introduced by Eq. (<ref>), with k=2, is defined as follows:
Γ̂_2 = (1 - â^†â) â,
= â - â^†â^2,
so any 2 × 2 matrix can be written according to Eq. (<ref>), as follows:
Ĥ_sbm = ∑_j,k=1^2 H_jkP̂_j,k,
where P̂_1,2 = â-â^†â^2, P̂_2,2 = â^†â, P̂_1,1 = 1-â^†â, and
P̂_2,1 = â^†-(â^†)^2 â.
Defining H_12=R_12 e^i ϕ_12 with real valued R_12 and ϕ_12 and introducing the substitution b̂ = â e^i ϕ_12, we obtain:
Ĥ_sbm = H_11 + (H_22 - H_11) b̂^†b̂ + R_12 (b̂+b̂^†)-R_12(b̂^†b̂^2 +(b̂^†)^2 b̂),
=H_11 + (H_22 - H_11) b̂^†b̂ + R_12 (b̂+b̂^†)-R_12((b̂^†+b̂)^3/3 -(b̂^†+b̂)-(b̂^†^3+b̂^3)/3),
=H_11 + ħωb̂^†b̂ + 2 R_12 (b̂+b̂^†) + g_3 (b̂+b̂^†)^3 +g_3(b̂^†^3+b̂^3),
where ħω = H_22 - H_11, and g_3=-R_12/3.
Considering that the Hamiltonian of a capacitively shunted SNAIL (Fig. <ref>) is <cit.>
Ĥ_SPA= ħωb̂^†b̂ + g_3 (b̂+b̂^†)^3 + g_4 (b̂+b̂^†)^4,
we can readily identify the Hamiltonian Ĥ, introduced by Eq. (<ref>), as the Hamiltonian of a linearly driven (displaced) SNAIL,
Ĥ_sbm = H_11 + 2 R_12 (b̂+b̂^†) + Ĥ_SPA +g_3(b̂^†^3+b̂^3),
with the fourth-order term turned off (g_4=0).
Note that the term g_3(b̂^†^3+b̂^3) in Eq. (<ref>) can be produced by driving the SNAIL at a frequency ω_3 ≈ 3 ω. Indeed, a four-wave mixing interaction would be able to implement such a term in a frame rotating at ω_3/3 <cit.>. We want to emphasize that, despite the assumption of g_4=0, four-wave mixing can still be implemented by cascaded three-wave mixing processes <cit.>.
More generally, a SNAIL can be substituted by an arbitrary flux-biased Josephson circuit <cit.>, providing additional freedom for the choice of ω and g_3 coefficients in the Hamiltonian introduced by Eq. (<ref>). Consequently, a wide range of combinations of the coefficients H_ij can be engineered at the hardware level. We note that despite the generality of Eqs. (<ref>) and (<ref>), cases with ω≤ 0 for a physical oscillator might be energetically unstable, which would impose limitations on the construction of arbitrary 2× 2 hermitian matrices. However, it is verified in Appendix <ref> that all R_z and R_x gates can be implemented under this restriction. As these gates constitute a 1-qubit universal set, the hardware setting proposed in Fig. <ref> can be used to construct arbitrary 1-qubit gates.
To establish a universal set of quantum gates, a 2-qubit entangling gate (e.g., a controlled-Z gate) is required. This requirement can be fulfilled by a modular design of driven SNAIL circuits nonlinearly coupled by nearly-quartic elements, effectively described by a 4× 4 Hamiltonian, as shown in Fig. <ref>(a).
A nearly-quartic element can be implemented, for instance, by a SNAIL designed with an unusual combination of Josephson junctions or a dc-SQUID.<cit.>
More in general, any superconducting two-terminal circuit whose potential energy function U can be approximated as,
U(φ) ≈a/4!(φ-φ_0)^4 + O((φ-φ_0)^5),
can implement such a nearly-quartic element. In Eq. (<ref>), φ is the phase difference across the terminals of the superconducting circuit implementing the potential energy U, and a=d^4U/dφ^4|_φ_0 is the fourth-order Taylor expansion coefficient of the function U, evaluated at the point φ_0 which minimizes U. While it is possible to implement the potential energy in Eq. (<ref>) exactly,<cit.> in practice any two-terminal circuit including one or more Josephson tunnel junctions is shunted by an intrinsic capacitance that introduces a weak linear coupling between the two terminals. Such linear capacitive coupling arises from the intrinsic capacitance of the Josephson tunnel junctions, and can be neglected when the fourth-order nonlinearity implemented by U is the dominant coupling mechanism between the two terminals (i.e., the "nearly-quartic" coupling limit).
A nearly-quartic element can be used to implement ultra-strong cross-Kerr couplings <cit.> between photonic modes described by the interaction Hamiltonian,
Ĥ_cross-Kerr=χb̂_1^†b̂_1b̂_2^†b̂_2,
where b̂_1 and b̂_2 are the annihilation operators of the two coupled photonic modes. The nearly-pure and ultra-strong cross-Kerr coupling can enable the construction of many photonic 2-qubits gates, including a controlled-Z gate as shown in Appendix <ref>. Therefore, with a combination of one-qubit gates and the two-qubit gate, constructed as quartic-connected SNAILs, it is possible to map any physical Hamiltonian into a modular cQED processor.
A multiple-qubit entangling gate is realized by a circuit of multiple SNAILs coupled with nearly-quartic elements. As an example, Fig. <ref>(b) illustrates the circuit that effectively maps the 8× 8 Hamiltonian corresponding to a 3-qubit entangling gate. Alternatively, bilinear couplings can also be established by beam splitters,<cit.> as previously investigated for transmons.<cit.>
The 4 × 4 and 8 × 8 circuits in Fig. <ref> can be generalized as well to include arbitrary flux-biased Josephson circuits as a replacement for the SNAILs and the nearly-quartic couplers.
§ DYNAMICS OF CHARGE AND ENERGY TRANSFER
A variety of important dynamical processes in molecular systems of chemical, biological and technological importance involve electronic energy and charge transfer. The simulation of the inherently quantum-mechanical electronic dynamics underlying these processes is a subject of great interest. In this section, we illustrate the SBM mapping based on the SNAIL circuit as applied to quantum dynamics simulations of energy and charge transfer in model systems, including a four-level system describing energy transfer in the FMO light-harvesting complex, and a spin-boson model that describes charge transfer in the presence of dissipation, schematically represented in Fig. <ref>.
§.§ Two-level system (TLS)
The simplest model of energy or charge transfer is given by the 2 × 2 donor-acceptor Hamiltonian, <cit.>
Ĥ_TLS=[ -ϵ Δ; Δ ϵ; ],
describing two coupled electronic states, with ϵ=50 cm^-1 and Δ=20 cm^-1 for a typical charge transfer process in molecules. To map the 2× 2 Hamiltonian into the circuit of Fig. <ref>, the SNAIL parameters are obtained according to Eq. (<ref>), with oscillator frequency ω=100 cm^-1, linear displacement R_12=50 cm^-1 and third-order coupling g_3=-16.7 cm^-1. With these parameters, the right hand side of Eq. (<ref>) is programmed on a classical computer and numerically exponentiated to obtain the corresponding propagator for dynamics simulations. Fig. <ref>a shows the simulation results for the time-dependent population of the donor state. The exact agreement with benchmark calculations obtained by numerically integrating the Schrödinger equation demonstrates the SBM mapping and the proposed SNAIL-based circuit.
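For reference, a minimal classical propagation of this two-level model can be written in a few lines: build the 2× 2 Hamiltonian, exponentiate it for a short time step, and track the donor population. The unit conversion and time step below are our own illustrative choices; the sketch reproduces the benchmark dynamics, not the SNAIL circuit itself.

```python
import numpy as np
from scipy.linalg import expm

# Two-level donor-acceptor Hamiltonian (cm^-1), with the parameters quoted in the text.
eps, delta = 50.0, 20.0
H = np.array([[-eps, delta],
              [delta, eps]])

# Convert cm^-1 to rad/fs: E/hbar = 2*pi*c*E with c in cm/fs.
to_rad_per_fs = 2.0 * np.pi * 2.99792458e-5

dt = 10.0                                   # time step in fs (illustrative choice)
U = expm(-1j * H * to_rad_per_fs * dt)      # short-time propagator
psi = np.array([1.0 + 0j, 0.0])             # start fully on the donor state
for step in range(6):
    print(f"t = {step * dt:5.0f} fs, donor population = {abs(psi[0])**2:.4f}")
    psi = U @ psi
```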
§.§ Fenna–Matthews–Olson Complex
Energy transfer through the chlorophyll pigments of the Fenna–Matthews–Olson (FMO) complex (Fig. <ref>a) corresponds to exciton transfer across chromophore sites. The excitons are modeled as hard-core bosons <cit.>, according to the Frenkel exciton Hamiltonian,<cit.>
H=∑_j E_jσ_j^+σ_j^-+∑_j,kJ_jk(σ_j^+σ_k^-+σ_k^+σ_j^-),
where σ_j^+ and σ_j^- are the Pauli raising and lowering operators,
corresponding to the creation and annihilation of an excitation in chromophore j, with commutation rules [σ_j^-,σ_k^+]=δ_jk(1-2 σ_j^+σ_k^-).
The Hamiltonian can be written in the basis of chromophore occupation number. We consider the energy transfer through sites 1-4 (Fig. <ref>a), as described by the following 4 × 4 Hamiltonian matrix: <cit.>
Ĥ_FMO=[ 310.0 -97.9 5.5 -5.8; -97.9 230.0 30.1 7.3; 5.5 30.1 0.0 -58.8; -5.8 7.3 -58.8 180.0 ],
with parameters in cm^-1. Diagonal terms correspond to the energies of the chromophore while off-diagonal terms are the couplings between them.
To parametrize the superconducting circuit for dynamics simulations with an integration time step τ, we obtain the propagator Û_FMO=e^-iτĤ_FMO/ħ as a 4× 4 unitary matrix. This 2-qubit gate is then transpiled into a set of elementary gates, including 1-qubit rotations and controlled-Z gates, implemented with parametrized SNAILs. Note that
we are able to convert the Pauli operators into single boson operators based on the SBM mapping, offering advantages
over conventional bosonization methods such as the Holstein–Primakoff <cit.>, or the Dyson–Maleev transformation <cit.>
(Appendix <ref>). Analogous implementations could also be applied to model fermionic Hamiltonians commonly encountered in quantum chemistry, when converted into sums of tensor products of Pauli operators in conjunction with the Jordan-Wigner transformation and then mapped into bosonic gates.
To obtain the SNAIL parameters for a 1-qubit rotation Û_1-qubit, we compute the effective Hamiltonian Ĥ_eff=-i log(Û_1-qubit), then we map that Hamiltonian as Ĥ_eff,sbm according to Eq. (<ref>), and we obtain the corresponding rotation gate, as follows: Û_eff,SBM=e^-iĤ_eff,sbm. The circuit is simulated by arranging the gates Û_eff,SBM according to the transpiled circuit diagram, with CZ gates corresponding to two SNAILs coupled by a nearly-quartic element, as described in Sec. <ref>. Fig. <ref> shows a schematic representation of the resulting simulation.
Fig. <ref>b shows the results of simulations of the exciton dynamics for site 1, which is initially fully populated and gets depopulated according to the energy transfer process. The agreement between the results obtained with the SBM-mapped Hamiltonian and the reference calculations further demonstrates the capabilities of the SBM-SNAIL circuit design.
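For completeness, the classical reference dynamics against which the circuit results are compared can be generated directly from the 4× 4 FMO Hamiltonian by matrix exponentiation. The sketch below tracks the site-1 population with site 1 initially populated; the time step is an illustrative choice, and this is the benchmark propagation, not the transpiled SNAIL circuit.

```python
import numpy as np
from scipy.linalg import expm

# 4-site FMO Hamiltonian (cm^-1) from the text.
H_fmo = np.array([[310.0, -97.9,   5.5,  -5.8],
                  [-97.9, 230.0,  30.1,   7.3],
                  [  5.5,  30.1,   0.0, -58.8],
                  [ -5.8,   7.3, -58.8, 180.0]])

to_rad_per_fs = 2.0 * np.pi * 2.99792458e-5     # cm^-1 -> rad/fs
dt = 5.0                                         # integration time step (fs)
U = expm(-1j * H_fmo * to_rad_per_fs * dt)

psi = np.array([1.0, 0.0, 0.0, 0.0], dtype=complex)   # exciton initially on site 1
populations = []
for _ in range(200):                                   # 200 steps of 5 fs, roughly 1 ps
    populations.append(abs(psi[0]) ** 2)
    psi = U @ psi
print("site-1 population near 1 ps:", round(populations[-1], 4))
```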
§.§ Dynamics of Open Quantum Systems
This section demonstrates the capabilities of the modular design of quantum circuits based on the SBM-mapping, as applied to dynamics simulations of open quantum systems.
We focus on the spin-boson model including two electronic states coupled to a bath of displaced harmonic oscillators, described in Appendix <ref>, recently analyzed with
tensor-train thermo-field memory kernels for generalized quantum master equations. <cit.>
Our propagation scheme is based on the so-called population-only Liouville space superoperator 𝒫^pop(t) that satisfies the following equation:
σ̂^pop(t)=𝒫^pop(t)σ̂^pop(0),
where σ̂(t)=Tr_n[ρ̂(t)] is the reduced density matrix for the electronic DOFs, with ρ̂(t) the density matrix for the full vibronic system. Here, σ̂^pop(t)=(σ_00(t),σ_11(t))^T includes only the diagonal elements of σ̂(t), necessary to describe the electronic population dynamics. The preparation of the super-operator 𝒫^pop(t) is described in Appendix <ref>.
We compare the elements of σ̂^pop(t) obtained according to Eq. (<ref>) with the corresponding time-dependent populations obtained according to the quantum computational scheme based on the SBM-mapping. To perform quantum computing simulations based on Eq. (<ref>), we first transform 𝒫^pop(t) into a unitary matrix using the Sz.-Nagy dilation theorem<cit.>, as follows: <cit.>
𝒰_𝒫^pop(t)=[ 𝒫^pop(t) √(I-𝒫^pop(t)𝒫^pop^†(t)); √(I-𝒫^pop^†(t)𝒫^pop(t)) -𝒫^pop^†(t); ].
The vectorized initial density matrix σ̂^pop(0) is dilated by appending ancillary zero elements, as follows:
σ̂^pop(0)=(σ_00(0),σ_11(0))^T→σ̃^pop(0)=(σ_00(0),σ_11(0),0,0)^T.
The dilated time-updated population-only density matrix is obtained, as follows:
σ̃^pop(t)=𝒰_𝒫^pop(t)σ̃^pop(0).
The dilation scheme thus provides the unitary matrix 𝒰_𝒫^pop(t) governing the time-evolution of σ̃^pop(t), the first two digits of which agree with those of σ̂^pop(t). Therefore, Eqs. (<ref>) and (<ref>) describe the same dynamics, with Eq. (<ref>) allowing for simulations on a quantum device.
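A small numerical sketch of this dilation step is given below: a 2× 2 population-only propagator is embedded into a 4× 4 unitary with matrix square roots and applied to the padded population vector. The matrix 𝒫^pop used here is a placeholder contraction (its largest singular value must not exceed one for the dilation to be unitary), not one computed from the spin-boson model.

```python
import numpy as np
from scipy.linalg import sqrtm

def sz_nagy_dilation(P):
    """Embed a contraction P (largest singular value <= 1) into a unitary of twice the size."""
    I = np.eye(P.shape[0])
    top = np.hstack([P, sqrtm(I - P @ P.conj().T)])
    bottom = np.hstack([sqrtm(I - P.conj().T @ P), -P.conj().T])
    return np.vstack([top, bottom])

# Placeholder population-only propagator; rescale by its norm first if it is not a contraction.
P_pop = np.array([[0.7, 0.2],
                  [0.3, 0.6]])
U = sz_nagy_dilation(P_pop)
assert np.allclose(U.conj().T @ U, np.eye(4), atol=1e-7)      # the dilation is unitary

sigma0 = np.array([1.0, 0.0, 0.0, 0.0])    # dilated initial populations (two ancillary zeros)
sigma_t = U @ sigma0
print("propagated populations:", np.round(sigma_t[:2].real, 3))   # first two entries = P_pop @ [1, 0]
```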
For the spin-boson model of interest 𝒰_𝒫^pop(t) is a 4× 4 unitary matrix, corresponding to a 2-qubit gate. Therefore, the SBM-SNAIL circuit is analogous to that of the FMO 4-site model. The simulation of the circuit thus follows the scheme of Fig. <ref>. The transpiled circuit and the corresponding SNAIL gate parameters for 𝒰_𝒫^pop(t=1 a.u.) are given in Fig. <ref>.
Figure <ref> shows the comparison of time-dependent populations for the two electronic states corresponding to the spin-boson model, as described by elements of σ̃^pop_sbm(t) obtained with the SBM-mapping with SNAIL circuit scheme, and the corresponding populations σ̃^pop(t) obtained directly with Eq. (<ref>), with initial condition σ̃^pop(0)=(1,0,0,0)^T. The excellent agreement demonstrates the capabilities of the SBM mapping as applied to a model of electron transfer with dissipation due to coupling to a surrounding environment.
§ CONCLUDING REMARKS
We have introduced a general method to map the Hamiltonian of molecular systems into the Hamiltonian of quantum circuits for cQED simulations.
Additionally, we have identified the non-linear bosonic components that need to be assembled for a modular implementation of the corresponding circuit Hamiltonians.
We have illustrated the SBM mapping, in conjunction with SNAIL circuits, as applied to simulations of energy transfer in the photosynthetic FMO model system, and charge transfer in donor-acceptor systems coupled to a dissipative environment.
Beyond the modular design based on SNAILs, we have shown that the SBM mapping allows for implementation of Hamiltonians in the basis of qudits (i.e., Eq.(<ref>), with N>2), corresponding to continuous-variable (CV) modes represented as N-dimensional discrete-variable (DV) states.
For circuits with multiple qudits, the cross-Kerr Hamiltonian may also be generalized to perform a qudit controlled-Z gate, allowing for construction of a universal set of gates for simulations on bosonic devices. The hardware efficiency of a qudit-based cQED can significantly reduce the circuit depth and simplify the experimental setup, offering a promising strategy for simulations of chemical systems.
§ ACKNOWLEDGEMENTS
The authors acknowledge support from the NSF grant 2124511 [CCI Phase I: NSF Center for Quantum Dynamics on Modular Quantum Devices (CQD-MQD)]. We thank Ellen Mulvihill for helpful discussions and for preparing Fig. <ref>. N.L. thanks Micheline B. Soley and Paul Bergold for stimulating discussions.
§ DYSON-MALEEV AND HOLSTEIN-PRIMAKOFF MAPS
Dyson and Maleev introduced a transformation <cit.> to represent spin operators in terms of bosonic operators according to the ladder operators,
Ŝ_+ = â^†[2s-N̂],
Ŝ_- = â,
Ŝ_z = N̂-s,
with N̂=â^†â, and [â, â^†]=1.
Notice that Ŝ^†_- ≠Ŝ_+, so the ladder operators are not Hermitian conjugates of each other and thus the transformation is not unitary. Nevertheless, Eqs. (<ref>) satisfy the Lie algebra of the original spin operators,
Ŝ_±= Ŝ_x ± iŜ_y,
[Ŝ_+,Ŝ_-] = 2 Ŝ_z,
and
[Ŝ_i,Ŝ_j] = i∑^3_k=1ε_ijkŜ_k,
where ε_ijk is the Levi-Civita symbol and i,j,k ∈{x,y,z}.
If we replace the magnitude of the spin, s, by the number (k-1)/2 in Eq. (<ref>), we obtain:
Ŝ^†_+ =((k-1)-N̂)â.
Comparing Eq. (<ref>) and Eq. (<ref>), we see that Ŝ^†_+ is identical to the operator Γ̂_k. The other operator that we use in the SBM mapping is â, which is in turn the operator Ŝ_- introduced by Eq. (<ref>). Therefore, our SBM mapping implements the raising and lowering operators of the Dyson-Maleev transformation. The major difference between the SBM mapping and DM transformation lies in the fact that the DM mapping
replaces the operators Ŝ_+ and Ŝ_-, according to Eqs. (<ref>), and therefore generates a Hamiltonian in terms of â and â^† that is not Hermitian. On the other hand, the SBM mapping preserves the Hermitian property by using the P̂_nm operators to map the matrix elements of the Hamiltonian so the full matrix representation is automatically preserved.
Similar to the Dyson-Maleev transformation,<cit.> the Holstein-Primakoff transformation <cit.> maps the spin operators for a spin-s particle to bosonic operators, as follows:
Ŝ_+ =â^†√(2s-N̂),
Ŝ_- =√(2s-N̂)â,
Ŝ_z =N̂-s.
Comparing Eq. (<ref>) to Eqs. (<ref>), we see that here Ŝ^†_+=Ŝ_- but differs from the operator Γ̂_k by a factor of √(2s-N̂). Unfortunately, the square root of the number operator is challenging to implement without relying upon a perturbative expansion, which is only accurate when s is sufficiently large. In contrast, the SBM mapping is generally applicable.
§ BLOCK-DIAGONALITY
In this section, we prove that the right-hand side (rhs) of Eq. (<ref>) is block-diagonal, ensuring that the physical space of states | j ⟩ with j<k remains decoupled from the unphysical space of states | j ⟩ with j ≥ k. Specifically, we show that the rhs of Eq. (<ref>) has the following block-diagonal form:
Ĥ_sbm=(
[ H_0,0 H_0,1 ⋯ H_0,k-1 0 0 0 …; H_1,0 H_1,1 ⋯ H_1,k-1 0 0 0 …; ⋮ ⋮ ⋱ ⋮ 0 0 0 …; H_k-1,0 H_k-1,1 ⋯ H_k-1,k-1 0 0 0 …; 0 0 0 0 X X X …; 0 0 0 0 X X X …; 0 0 0 0 X X X …; ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ ⋱ ])
To achieve this, we show that P̂_nm, introduced by Eq. (<ref>), has the block-diagonal form,
P̂_nm=(
[ 0 0 ⋯ 0 0 0 0 …; 0 1 ⋯ 0 0 0 0 …; ⋮ ⋮ ⋱ ⋮ 0 0 0 …; 0 0 ⋯ 0 0 0 0 …; 0 0 0 0 X X X …; 0 0 0 0 X X X …; 0 0 0 0 X X X …; ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ ⋱ ]),
where only the nm-th element is equal to 1. Substituting Eq. (<ref>) into Eq. (<ref>) yields the matrix form in Eq. (<ref>).
First, we show that ⟨ n |Γ̂_k^k-1 |j⟩ = 0, for all j,n <k, unless j=k-1 and n=0. Considering that
Γ̂_k|j⟩ =((k-1)Î-N̂)â|j⟩
=(k-j)√(j)|j-1⟩,
Γ̂_k^2 |j⟩
=(k-j)(k-(j-1)) √(j (j-1))|j-2⟩,
we obtain
Γ̂_k^l-1 |j⟩ =(k-j)(k-(j-1)) ⋯ (k-(j-(l-2)))
×√(j (j-1) ⋯ (j-(l-2)))|j-(l-1)⟩.
So, ⟨ n |Γ̂_k^k-1 |j⟩ = 0, unless j-(k-1)=n, a condition that can only be fulfilled for j,n < k when j=k-1 and n=0, for which
Γ̂_k^k-1 | k-1 ⟩ = (k-1)!^3/2| 0 ⟩.
Now we prove Eq. (<ref>) by showing that ⟨ j |P̂_nm| l ⟩=δ_jnδ_lm. We start by showing that ⟨ n |P̂_nm| m ⟩=1, as follows:
⟨ n |P̂_nm| m ⟩ =⟨ n|1/(k-1)!^2√(m!/n!)(â^†)^n Γ̂_k^k-1 (â^†)^k-1-m|m⟩
=⟨ 0|1/(k-1)!^2√(n!)√(m!/n!)Γ̂_k^k-1√((k-1)!/m!)|k-1⟩
=1/(k-1)!^3/2⟨ 0|Γ̂_k^k-1|k-1⟩
=1/(k-1)!^3/2⟨ 0|(k-1)!^3/2|0⟩
=1.
Next, we show that all other elements in the upper-left k× k block ⟨ j |P̂_nm| l ⟩=0, when j=0,1,…,n-1,n+1,…,k-1 and l=0,1…,m-1,m+1,…, k-1.
We consider three cases: (a) j<n; (b) l<m, and (c) l>m, as follows:
(a). When j<n, ⟨ j|(a^†)^n=0. Therefore,
⟨ j|P̂_nm|l⟩ =⟨ j|1/(k-1)!^2√(m!/n!)(â^†)^n Γ̂_k^k-1 (â^†)^k-1-m|l⟩=0.
(b). When l<m, ⟨ j|P̂_nm|l⟩=0, since Γ̂_k^k-1(a^†)^k-1-m|l⟩∝Γ̂_k^k-1|l+k-1-m⟩, and then according to Eq. (<ref>), Γ̂_k^k-1|l+k-1-m⟩=0, since l-m<0.
(c). When l>m, we obtain (a^†)^k-1-m|l⟩∝|k-1-m+l⟩. So, according to Eq. (<ref>), Γ̂_k^k-1(a^†)^k-1-m|l⟩=0 since k-1<k-1-m+l<2k-1, and Γ̂_k^k-1 |j⟩
=(k-j)(k-(j-1)) ⋯ (k-(j-(k-2))) √(j (j-1) ⋯ (j-(k-2)))|j-(k-1)⟩ = 0 when j=k,k+1, …, 2k-2 since (k-j)(k-(j-1)) ⋯ (k-(j-(k-2)))=0.
To establish block-diagonality, we next show that P̂_nm vanish when: (d) j≤ k-1 and l>k-1, and also when (e) l≤ k-1 and j>k-1, as follows:
(d). j≤ k-1 and l>k-1. This case is further divided into two scenarios: (i) l<k+m, or (ii) l≥ k+m, as follows:
(i) l<k+m. Similarly to case (c), here Γ̂_k^k-1(a^†)^k-1-m|l⟩=0 since k-1-m+l<2k-1. Therefore, ⟨ j |P̂_nm| l ⟩=0.
(ii) l≥ k+m. In this case, according to Eq. (<ref>),
and Eq. (<ref>),
⟨ j|P̂_nm|l⟩ =⟨ j|1/(k-1)!^2√(m!/n!)(â^†)^n Γ̂_k^k-1 (â^†)^k-1-m|l⟩
∝⟨ j-n|Γ̂_k^k-1|l+k-1-m⟩
∝⟨ j-n|l-m⟩.
Considering that l≥ k+m, and k-1≥ j, we obtain l-m≥ k ≥ j+1 > j-n, so
⟨ j|P̂_nm|l⟩∝⟨ j-n|l-m⟩=0.
(e). l≤ k-1 and j>k-1. The argument is analogous to that for case (d).
Considering cases (a)–(e), we obtain the matrix representation for P̂_nm given Eq. (<ref>). Since n and m can be any integer from 0 to k-1, we prove Eq. (<ref>).
§ IMPLEMENTING R̂_Z AND R̂_X WITH A SNAIL
This section shows that any 1-qubit rotation on the surface of the Bloch sphere can be implemented by using a SNAIL device, introduced in Eq. (<ref>), thus enabling a universal set of 1-qubit gates.
The rotation around the z axis by λ has the following matrix representation:
R̂_z(λ)=[ 1 0; 0 e^iλ ],
which can be implemented as R̂_z(λ)=e^-i Ĥ_z t by propagating for time t=1 a quantum circuit with the effective Hamiltonian,
Ĥ_z(λ)=[ 0 0; 0 -λ ].
Implementing Eq. (<ref>) with Eq. (<ref>) requires λ<0, which correspond to negative rotation angles along the z axis. Noting that any positive rotation angle λ' with 0<λ'<2π is equivalent to the negative rotation angle λ=-2π+λ', we show that any rotation around the z axis can be implemented according to Eq. (<ref>).
The rotation around the x axis by θ has the following matrix representation:
R̂_x(θ)=[ cos(θ/2) -isin(θ/2); -isin(θ/2) cos(θ/2) ],
which correspond to the effective Hamiltonian:
Ĥ_x(θ)=[ 0 θ/2; θ/2 0 ].
Mapping Eq. (<ref>) into Eq. (<ref>) requires ω=0 –i.e., elimination of the linear component by tuning the magnetic flux such that the linear inductance is cancelled out.
§ CONTROLLED-Z GATES WITH QUARTIC ELEMENTS
This section follows and expands Ref. [] to show that the cross-Kerr Hamiltonian,
Ĥ_cross-Kerr=χb̂_1^†b̂_1b̂_2^†b̂_2,
implemented with a nearly-quartic element, corresponds to a controlled-Z gate in the basis of Fock states |0⟩ and |1⟩. We show that e^-iπb̂_1^†b̂_1b̂_2^†b̂_2 keeps the basis states |0⟩|0⟩, |0⟩|1⟩ and |1⟩|0⟩ unchanged, while introducing a phase shift of -1 to state |1⟩|1⟩.
We apply e^-iĤ_cross-Kerrt to the product states |0⟩|0⟩, |0⟩|1⟩, |1⟩|0⟩, and |1⟩|1⟩ with t=π/χ, so that e^-iĤ_cross-Kerrt=e^-iπb̂_1^†b̂_1b̂_2^†b̂_2.
Applying e^-iπb̂_1^†b̂_1b̂_2^†b̂_2 to |0⟩|0⟩, we obtain:
e^-iπb̂_1^†b̂_1b̂_2^†b̂_2|0⟩|0⟩ =e^-iπ· 0· 0|0⟩|0⟩,
=|0⟩|0⟩.
Similarly,
e^-iπb̂_1^†b̂_1b̂_2^†b̂_2|0⟩|1⟩ =e^-iπ· 0· 1|0⟩|1⟩,
=|0⟩|1⟩,
and
e^-iπb̂_1^†b̂_1b̂_2^†b̂_2|1⟩|0⟩ =e^-iπ· 1· 0|1⟩|0⟩,
=|1⟩|0⟩.
Finally,
e^-iπb̂_1^†b̂_1b̂_2^†b̂_2|1⟩|1⟩ =e^-iπ· 1· 1|1⟩|1⟩,
=-|1⟩|1⟩.
Therefore, e^-iπb̂_1^†b̂_1b̂_2^†b̂_2 is the controlled-Z gate in the Fock state basis.
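This identity can also be verified numerically by exponentiating the cross-Kerr Hamiltonian in a truncated Fock space and reading off its action on the two-qubit subspace. The sketch below assumes χ t = π and a Fock cutoff of two levels, which suffices for the states |0⟩ and |1⟩.

```python
import numpy as np
from scipy.linalg import expm

d = 2                                    # Fock levels kept per mode: |0> and |1>
n_op = np.diag(np.arange(d))             # truncated number operator b†b
N1 = np.kron(n_op, np.eye(d))            # b1†b1 acting on the two-mode space
N2 = np.kron(np.eye(d), n_op)            # b2†b2 acting on the two-mode space

U = expm(-1j * np.pi * N1 @ N2)          # cross-Kerr evolution with chi*t = pi
print(np.round(U.real, 6))               # diag(1, 1, 1, -1): the controlled-Z gate
```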
§ PROPAGATION METHOD
We compare simulations based on the SBM mapping Hamiltonian, introduced by Eq. (<ref>), and simulations of quantum dynamics based on the Hamiltonian in the diabatic basis set, introduced by Eq. (<ref>) for the spin-boson model system where H_jk is defined, as follows:
Ĥ_00 = ϵ_sb + ∑_k = 1^N_nP̂_k^2/2 + 1/2ω_k^2R̂_k^2 -c_k R̂_k,
Ĥ_11 = -ϵ_sb + ∑_k = 1^N_nP̂_k^2/2 + 1/2ω_k^2R̂_k^2 +c_k R̂_k,
H_01 = H_10 = Δ_sb.
The model Hamiltonian, introduced by Eq. (<ref>), describes a vibronic system with two electronic states with energy gap 2ϵ_sb, coupled with each other by the constant coupling constant Δ_sb. Each electronic state is coupled to a bath of N_n nuclear degrees of freedom, modeled as displaced harmonic oscillators. For the k^th oscillator, the frequency {ω_k} and electron-phonon coupling coefficient, {c_k} of the nuclear modes is sampled from an Ohmic spectral density with an exponential cutoff:
J (ω) = π/2∑_k=1^N_nc_k^2/ω_kδ(ω-ω_k)  N_n →∞⟶  πħ/2 ξω e^-ω/ω_c.
Here, ξ is the Kondo parameter, which determines the electron-phonon coupling strength, and ω_c is the cutoff frequency which determines the characteristic vibrational frequency. Therefore, a discrete set of N_n nuclear mode frequencies, {ω_k}, and coupling coefficients, {c_k}, are sampled from the spectral density, introduced by Eq. (<ref>) <cit.>.
The initial density matrix ρ̂(0) is assumed to be in the single-product form ρ̂(0)=σ̂(0)⊗ρ̂_n(0), where σ̂(0) denotes the reduced, electronic density operator written as a 2× 2 matrix, and ρ̂_n(0), the initial bath density operator, is assumed to be in thermal equilibrium.
For comparison with benchmark calculations, we obtain the numerically exact time-evolved density matrix ρ̂(t) by propagating the initial density matrix with the numerically exact Tensor-Train Thermo-Field Dynamics (TT-TFD) propagator<cit.>:
ρ̂(t)=e^-iĤtρ̂(0)e^iĤt.
Having computed ρ̂(t), we obtain the electronic density operator σ̂(t)=Tr_n[ρ̂(t)] by tracing out the nuclear degrees of freedom. With σ̂(0) initialized according to different electronic distributions, and with their corresponding σ̂(t) propagated with TT-TFD, we obtain the Liouville space superoperator P.
Next we show how to reduce the dimensionality of the non-unitary time evolution super-operator of the spin-boson model to obtain the population-only super-operator as in Eq. (<ref>).
We note that for the full time evolution operator,
σ_jj^full(t) = ∑_l,m = 1^N_e G_jj,lm^full(t)σ_lm^full(0).
When the initial state is diagonal (i.e., σ_jk (0)= 0 for k ≠ j), Eq. (<ref>) can be simplified, as follows:
σ_jj^full(t) = ∑_l = 1^N_e G_jj,ll^full(t)σ_ll^full(0).
For the populations-only propagator,
σ_jj^pop(t) = ∑_l = 1^N_e G_jj,ll^pop(t)σ_ll^pop(0).
Because σ_jj^full(t) must be equal to σ_jj^pop(t) when exact input methods are used, we can set the right hand sides equal to each other, as follows:
∑_l = 1^N_e G_jj,ll^full(t)σ_ll^full(0) = ∑_l = 1^N_e G_jj,ll^pop(t)σ_ll^pop(0)
G_jj,00^full(t)σ_00(0) + G_jj,11^full(t)σ_11(0) + ... = G_jj,00^pop(t)σ_00(0) + G_jj,11^pop(t)σ_11(0) + ... .
Therefore, G_jj,kk^full(t) = G_jj,kk^pop(t).
For the spin-boson model, G^pop(t) is a 2× 2 time-dependent matrix.
To obtain the populations-only G^pop(t) matrix, we can extract the four corner elements of G^full(t):
([ G^full_00,00(t) G^full_00,01(t) G^full_00,10(t) G^full_00,11(t); G^full_01,00(t) G^full_01,01(t) G^full_01,10(t) G^full_01,11(t); G^full_01,00(t) G^full_10,01(t) G^full_10,10(t) G^full_10,11(t); G^full_11,00(t) G^full_11,01(t) G^full_11,10(t) G^full_11,11(t) ])
⟹([ G^pop_00,00(t) G^pop_00,11(t); G^pop_11,00(t) G^pop_11,11(t) ])
In this model, the electronic populations can be propagated using the four corner elements of G(t), as follows:
([ σ_00(t); σ_11(t) ])
= ([ G_00,00(t) G_00,11(t); G_11,00(t) G_11,11(t) ])
([ σ_00(0); σ_11(0) ]).
§ CODE AVAILABILITY
The python code for the SBM-SNAIL simulation of the dynamics for the FMO 4-site model is available at: https://github.com/NingyiLyu/SBM-mapping.
|
http://arxiv.org/abs/2306.04091v2
|
20230607012448
|
1st Place Solution for PVUW Challenge 2023: Video Panoptic Segmentation
|
[
"Tao Zhang",
"Xingye Tian",
"Haoran Wei",
"Yu Wu",
"Shunping Ji",
"Xuebo Wang",
"Xin Tao",
"Yuan Zhang",
"Pengfei Wan"
] |
cs.CV
|
[
"cs.CV"
] |
1st Place Solution for PVUW Challenge 2023: Video Panoptic Segmentation
Tao Zhang^1 Xingye Tian^2 Haoran Wei^1 Yu Wu^1 Shunping Ji^1 Corresponding author.
Xuebo Wang^2 Xin Tao^2 Yuan Zhang^2 Pengfei Wan^2
^1Wuhan University ^2Y-tech, Kuaishou Technology
July 31, 2023
Video panoptic segmentation is a challenging task that serves as the cornerstone of numerous downstream applications, including video editing and autonomous driving. We believe that the decoupling strategy proposed by DVIS enables more effective utilization of temporal information for both "thing" and "stuff" objects. In this report, we successfully validated the effectiveness of the decoupling strategy in video panoptic segmentation. Finally, our method achieved a VPQ score of 51.4 and 53.7 in the development and test phases, respectively, and ultimately ranked 1st in the VPS track of the 2nd PVUW Challenge. The code is available at https://github.com/zhang-tao-whu/DVIShttps://github.com/zhang-tao-whu/DVIS.
§ INTRODUCTION
Video Panoptic Segmentation (VPS) is a challenging task that extends image panoptic segmentation <cit.> to videos. VPS aims to simultaneously classify, track, and segment all objects in a video, including both things and stuff. Due to its wide application in downstream tasks such as video understanding, video editing, and autonomous driving, VPS has received increasing attention in recent years.
Video panoptic segmentation can be interpreted as a fusion of video semantic segmentation and video instance segmentation <cit.>. With the rapid development of deep learning, numerous exceptional works have emerged in the field of both video semantic segmentation and video instance segmentation. Recent research efforts, such as <cit.>, aim to enhance segmentation quality and temporal consistency in video semantic segmentation. On the other hand, video instance segmentation methods such as <cit.> are geared towards improving the quality of instance segmentation and the robustness of instance association. However, it is difficult to directly apply these methods to video panoptic segmentation due to specific designs that vary greatly. Currently, there are few works <cit.> focusing on video panoptic segmentation, and most of them are simply extended from image panoptic segmentation methods <cit.>.
Recently, DVIS <cit.> decoupled the task of video instance segmentation into three independent sub-tasks: image instance segmentation, tracking/alignment, and refinement. In addition, DVIS introduced a referring tracker and a temporal refiner to achieve stable tracking and optimal utilization of temporal information, which demonstrated great advantages in the VIS field. A natural question is whether DVIS can be used for video panoptic segmentation or universal segmentation. In this report, we find that DVIS performs equally well on stuff and thing objects. Stuff objects are treated no differently from thing objects; moreover, their temporal deformation is milder and their motion trajectories are simpler. Therefore, DVIS achieves state-of-the-art performance in video panoptic segmentation without any modification.
Thanks to the superior performance of DVIS, we achieved first place in the VPS track of the 2nd PVUW challenge in CVPR 2023. DVIS achieved 51.5 VPQ during development and 53.7 VPQ during testing, without using any additional training data (including the validation set of VIPSeg).
§ METHOD
DVIS proposes a novel decoupled framework for video segmentation that consists of three independent components: a segmenter, a referring tracker, and a temporal refiner, as illustrated in <ref>.
The segmenter is introduced in Section <ref>, the referring tracker is described in Section <ref>, and the temporal refiner is presented in Section <ref>.
§.§ Segmenter
DVIS employs Mask2Former <cit.> as the segmenter in our framework. Mask2Former is a universal image segmentation architecture that surpasses specialized architectures in various segmentation tasks while maintaining ease of training for each specific task. It is constructed upon a straightforward meta-architecture comprising a backbone, a pixel decoder, and a transformer decoder. Notable enhancements encompass masked attention within the Transformer decoder, which confines attention to localized features centered around predicted segments, as well as the integration of multi-scale high-resolution features that facilitate precise segmentation of small objects and regions.
§.§ Referring Tracker
The referring tracker employs the modeling paradigm of referring denoising to tackle the inter-frame correlation task. Its main objective is to leverage denoising operations to optimize the initial values and generate more accurate tracking results.
The referring tracker comprises a sequence of L transformer denoising (TD) blocks, each composed of a reference cross-attention (RCA), a standard self-attention, and a feed-forward network (FFN). Figure <ref> illustrates the architecture of the referring tracker. It takes the object queries { Q^i_seg | i ∈ [1,T] } generated by the segmenter as input and produces object queries { Q^i_Tr | i ∈ [1,T] } for the current frame that correspond to objects in the previous frame. In this context, T represents the length of the video.
Firstly, as shown in <ref>, the Hungarian matching algorithm <cit.> is utilized to match the Q_seg of adjacent frames, as is done in <cit.>.
Q̃_seg^i = Hungarian(Q̃_seg^i-1, Q_seg^i), i ∈ [2,T]
Q̃_seg^i = Q_seg^i, i = 1
where Q̃_seg represents the matched object query generated by the segmenter. Q̃_seg can be regarded as a tracking result with noise and serves as the initial query for the referring tracker. In order to remove noise from the initial query Q̃_seg^i of the current frame, the referring tracker leverages the denoised object query Q_Tr^i-1 from the previous frame as a reference.
Next, Q̃_seg^i is fed into the TD block, where the crucial denoising process is performed using RCA, resulting in the output Q^i_Tr.
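For concreteness, the adjacent-frame matching step can be sketched in Python with scipy's Hungarian solver; the cosine-similarity cost, the tensor shapes, and the clip-level loop below are illustrative assumptions rather than the exact DVIS implementation:

import torch
from scipy.optimize import linear_sum_assignment

def match_adjacent_frames(q_prev, q_cur):
    """Reorder the current frame's queries so that index i keeps referring to
    the same object as q_prev[i] (sketch of the matching equation above).
    q_prev: (N, C) matched queries of frame i-1; q_cur: (N, C) raw queries of frame i."""
    prev = torch.nn.functional.normalize(q_prev, dim=-1)
    cur = torch.nn.functional.normalize(q_cur, dim=-1)
    cost = -(prev @ cur.T)                           # negative cosine similarity, (N, N)
    _, col = linear_sum_assignment(cost.detach().cpu().numpy())
    return q_cur[col]                                # current queries re-indexed to frame i-1

def align_clip(queries):
    """queries: list of (N, C) segmenter outputs, one per frame of a clip."""
    aligned = [queries[0]]
    for q in queries[1:]:
        aligned.append(match_adjacent_frames(aligned[-1], q))
    return aligned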
RCA serves as the core component of the referring tracker, effectively leveraging the similarity between object representations of adjacent frames while mitigating potential confusion caused by that similarity. Given that the appearance of the same object in adjacent frames tends to be similar while its position, shape, and size may vary, initializing the object representation of the current frame with the representation from the previous frame <cit.> introduces ambiguity. To address this issue, RCA incorporates an identity (ID) mechanism, which effectively exploits the similarity between the query (Q) and key (K) to generate accurate outputs. Figure <ref> illustrates the inspiration behind RCA and its slight modifications in comparison to standard cross-attention:
RCA(ID,Q,K,V)=ID+MHA(Q,K,V)
MHA refers to Multi-Head Attention <cit.>, while ID, Q, K, and V denote identification, query, key, and value, respectively.
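A minimal PyTorch sketch of the RCA operation is given below; only the residual structure RCA(ID, Q, K, V) = ID + MHA(Q, K, V) is taken from the text, while the layer width, head count, and the example wiring in the comment are assumptions:

import torch.nn as nn

class ReferenceCrossAttention(nn.Module):
    """Cross-attention with an explicit identity path: RCA(ID, Q, K, V) = ID + MHA(Q, K, V)."""

    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, identity, q, k, v):
        out, _ = self.attn(query=q, key=k, value=v)
        return identity + out

# One possible wiring inside a TD block (an assumption, not necessarily DVIS's exact choice):
# rca = ReferenceCrossAttention()
# q_denoised = rca(identity=q_seg_cur, q=q_seg_cur, k=q_tr_prev, v=q_tr_prev)
# where q_seg_cur holds the matched (noisy) queries of frame i and q_tr_prev the denoised queries of frame i-1.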
Finally, the denoised object query Q^i_Tr is employed as an input for both the class head and mask head. The class head generates the category output, while the mask head produces the mask coefficient output.
§.§ Temporal Refiner
The limitations of previous offline video segmentation methods primarily stem from the limited utilization of temporal information by tightly coupled networks. Additionally, the current online methods lack a refinement step. To overcome these challenges, we propose an independent temporal refiner. This module efficiently leverages the temporal information across the entire video and refines the output generated by the referring tracker.
Figure <ref> illustrates the architecture of the temporal refiner, which plays a crucial role in enhancing the temporal information utilized by the model. The temporal refiner takes the object query Q_Tr generated by the referring tracker as input and produces the refined object query Q_Rf by aggregating temporal information from the entire video. The temporal refiner is composed of L temporal decoder blocks connected in a cascaded manner. Each temporal decoder block consists of two key components: a short-term temporal convolutional block and a long-term temporal attention block. The short-term temporal convolutional block leverages motion information, while the long-term temporal attention block integrates information from the entire video. These components employ 1D convolutions and standard self-attention, respectively, operating on the temporal dimension.
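A schematic sketch of one temporal decoder block is shown below; the kernel size, channel width, normalization placement, and FFN expansion are assumptions, and only the combination of a short-term temporal 1D convolution with long-term temporal self-attention follows the description above:

import torch.nn as nn

class TemporalDecoderBlock(nn.Module):
    """Short-term temporal 1D convolution followed by long-term temporal self-attention."""

    def __init__(self, dim=256, heads=8, kernel_size=5):
        super().__init__()
        self.short_term = nn.Conv1d(dim, dim, kernel_size, padding=kernel_size // 2)
        self.long_term = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.ReLU(), nn.Linear(4 * dim, dim))
        self.norm1, self.norm2, self.norm3 = nn.LayerNorm(dim), nn.LayerNorm(dim), nn.LayerNorm(dim)

    def forward(self, q):
        # q: (N_objects, T, C) object queries stacked along the temporal dimension.
        x = self.norm1(q + self.short_term(q.transpose(1, 2)).transpose(1, 2))
        attn, _ = self.long_term(x, x, x)      # self-attention along the time axis
        x = self.norm2(x + attn)
        return self.norm3(x + self.ffn(x))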
Finally, the mask head generates mask coefficients for each object in every frame, utilizing the refined object query Q_Rf. Additionally, the class head utilizes the temporal weights of Q_Rf to predict the class and score of each object across the entire video. The temporal weighting process can be defined as follows:
Q̂_Rf = ∑_t=1^n Softmax(Linear(Q^t_Rf)) Q^t_Rf
where Q̂_Rf is the temporal weighting of Q_Rf.
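The temporal weighting of the refined queries can be sketched as follows; the scoring layer and the choice of normalizing the softmax over the frame axis are assumptions:

import torch

def temporal_weighting(q_rf, scorer=None):
    """Collapse per-frame refined queries (T, N, C) into one query per object (N, C)
    by softmax-weighting them over time, mirroring the equation above."""
    T, N, C = q_rf.shape
    if scorer is None:
        scorer = torch.nn.Linear(C, 1)          # in practice a trained layer
    scores = scorer(q_rf).squeeze(-1)           # (T, N)
    weights = torch.softmax(scores, dim=0)      # normalized over the T frames (assumed axis)
    return (weights.unsqueeze(-1) * q_rf).sum(dim=0)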
§.§ Loss
Specifically, our training focuses on the referring tracker and temporal refiner components, while we freeze the segmenter.
Given that the referring tracker operates on a frame-by-frame basis, its supervision relies on a loss function tailored to this approach. Specifically, object labels and predictions ŷ_Tr are matched only on the frame where the object initially appears. To expedite convergence during the early stages of training, predictions from the frozen segmenter ŷ_seg are utilized for matching instead of the referring tracker's predictions.
σ̂ = argmin_σ ∑_i=1^N ℒ_match(y_i^f(i), ŷ_σ(i)^f(i)),
ŷ = ŷ_Tr if Iter ≥ Max_Iter/2, else ŷ = ŷ_seg
where f(i) represents the frame in which the i-th instance first appears. ℒ_match(y_i^f(i), ŷ_σ(i)^f(i)) is a pair-wise matching cost, as used in <cit.>, between the ground truth y_i and the prediction ŷ with index σ(i) on frame f(i).
The loss function ℒ is exactly the same as that in <cit.>.
ℒ_Tr=∑_t=1^T∑_i=1^Nℒ(y_i^t,ŷ_σ̂(i)^t)
The temporal refiner is supervised during training using the same matching cost and loss functions as <cit.>.
During this stage, both the segmenter and the referring tracker are frozen. The referring tracker's predictions are therefore employed for matching during the initial training phase, guiding the network toward accelerated convergence.
σ̂ = argmin_σ ∑_i=1^N ℒ_match(y_i, ŷ_σ(i)),
ŷ = ŷ_Rf if Iter ≥ Max_Iter/2, else ŷ = ŷ_Tr
where ŷ_Rf is the prediction result of the temporal refiner.
The loss function of the temporal refiner is:
ℒ_Rf=∑_i=1^Nℒ(y_i,ŷ_σ̂(i))
§ EXPERIMENT
§.§ Implementation Details
In our approach, we employ Swin Large <cit.> as the backbone and Mask2Former as the segmenter for video segmentation. The segmenter, referring tracker, and temporal refiner are trained separately. We fine-tune the segmenter with COCO <cit.> pre-trained weights using image-level annotations from the training set of VIPSeg <cit.>. During referring tracker training, we freeze the segmenter and use a continuous 5-frame clip from the video as input. In the case of training the temporal refiner, we freeze both the segmenter and the referring tracker and use a continuous 21-frame clip as input. Training is carried out on the training set of VIPSeg (excluding additional data such as the validation set) for 20k iterations with a batch size of 8, and the learning rate is decayed by 0.1 at 14k iterations. Multi-scale training is used to randomly scale the short side of input video clips from 480 to 800 during training. Additionally, for training the refiner, we employ a random cropping strategy involving tiles of size 608×608 from the input video clips. Our training is executed on 8 NVIDIA 4090 GPUs, with fine-tuning of the segmenter requiring 18GB of GPU memory and taking approximately 3 hours. Furthermore, the training process for the referring tracker requires 7GB of GPU memory and takes around 7 hours. Finally, the temporal refiner necessitates 8GB of GPU memory and takes approximately 15 hours to train.
§.§ Comparison with Other Methods
In the 2nd PVUW Challenge, we ranked first in both the development and test phases. The leaderboards for the two phases are displayed in Tables 1 and 2, respectively. Our method achieved a VPQ of 51.4 in the development phase and 53.7 in the test phase, surpassing all other methods. Additionally, our method has significant advantages in tracking stability: compared to VPQ1, our method's VPQ6 decreased by only 1.0 and 1.9 in the development and test phases, respectively, whereas the second- and third-place methods showed a decrease of 1.9 in the development phase and decreases of 2.6 and 8.9, respectively, in the test phase.
§.§ Ablation Study
During the test phase, we utilized a multi-scale testing augmentation approach. The input video was scaled to resolutions of 720p and 800p, and the resulting prediction results were combined to form the final prediction outcome. This resulted in a 1.1 VPQ performance improvement.
§ CONCLUSION
We introduced DVIS to the VPS field and verified that the decoupling strategy proposed by DVIS significantly improved the performance for both thing and stuff objects. As a result, we won the championship in the VPS track of the 2nd PVUW Challenge, scoring 51.4 VPQ and 53.7 VPQ in the development and test phases, respectively.
|
http://arxiv.org/abs/2306.08138v2
|
20230613210805
|
Ergonomic-Centric Holography: Optimizing Realism,Immersion, and Comfort for Holographic Display
|
[
"Liang Shi",
"DongHun Ryu",
"Wojciech Matusik"
] |
cs.GR
|
[
"cs.GR",
"physics.optics"
] |
§ INTRODUCTION
Computer-generated holography (CGH) creates 3D visuals from 2D wavefront modulation, offering unmatched potential for building accommodation-supporting near-eye displays in a thin form factor <cit.>. Recent progress in machine learning, computational optics, and hardware has substantially improved CGH's image quality, computation speed, and resolution <cit.>; however, ergonomics has yet to receive systematic attention. In particular, we recognize three essential aspects of ergonomics: realism, immersion, and comfort. An ideal CGH system shall produce an incoherent out-of-focus response matching how real-world objects defocus, minimize the image quality variation across the theoretical eye box to allow unrestricted pupil movement with motion parallax and reduce sensitivity to eye-tracking failure, and simultaneously model high-order diffractions to eliminate optical filtering for designing a slim and comfortable display.
Recent works have tackled each of the aforementioned problems separately. Without modeling high-order diffraction, Choi et al. <cit.> and Lee et al. <cit.> used temporal multiplexing to achieve a natural defocus response. Chakravarthula et al. <cit.> incorporated a dynamic pupil to improve image quality at eccentric pupils in the eye box. Separately, Gopakumar et al. <cit.> proposed the high-order gradient descent (HOGD) algorithm to enable optical-filtering-free holographic displays for 2D targets. Kim et al. <cit.> introduced pupil-HOGD for holographic eyeglasses, adding modeling of a single fixed pupil and support for multi-plane targets under unconstrained defocus responses. Despite their successes, a unified framework that simultaneously addresses the above challenges has not been fully explored.
Here, we propose ergonomic-centric holography (EC-H), an optimization framework that systematically integrates and advances the merits of previous works to improve the ergonomics of CGH. EC-H combines layered depth images (LDI) <cit.> and incoherent wave propagation <cit.> to compute a physically accurate 3D focal stack for supervising hologram optimization. An enhanced HOGD algorithm is developed to support multi-hologram optimization for time multiplexing and dynamic pupil modeling to maintain high image quality over the full eye box.
EC-H begins by rendering a focal stack that matches the real-world defocus response using incoherent wave propagation. Consider a 3D scene of finite thickness sampled by evenly spaced (for convenience) recording planes (32 in our case); the space-domain incoherent wave propagation kernel for propagating a scene point at depth z to the j-th recording plane at depth z_j is given by:
κ_j(x,y) = ℳ(x,y) · |ℱ^-1{ℋ_j}(x,y)|^2
ℋ_j(f_x, f_y) = e^{i (2π/λ) √(1-(λ f_x)^2-(λ f_y)^2) (z - z_j)}, if √(f_x^2+f_y^2) < 1/λ
ℋ_j(f_x, f_y) = 0, otherwise
where λ is the wavelength, ℋ_j is the frequency-domain band-limited coherent propagation kernel, and ℳ is an optional binary mask for space-domain filtering (e.g., conforming the kernel to produce a circular-shaped out-of-focus response, or forcing a deep depth of field). To efficiently and completely model a 3D scene, we use an LDI, an advanced multi-layer RGB-depth image representation, to record both foreground and background points (see Supplement 1 for details). For each point, we perform ray tracing with occlusion processing to integrate its incoherent sub-hologram kernel (Eq. (<ref>)) at each recording plane and render the target focal stack with unquantized per-pixel depth defocus. We show superior image quality over the simple blending and masking method proposed by Lee et al. <cit.> in Fig. <ref>. In practice, we set ℳ to a circular binary mask to induce a more common circular blur spot (see Supplement 1 Fig. S16 and Video 1 for final rendered examples and focal sweeps).
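A numpy sketch of this kernel is given below; the grid size, pixel pitch, default wavelength, and the way the optional circular mask is applied are placeholder assumptions, and only the band-limited angular-spectrum form and the squared-magnitude (incoherent) response follow the equations above:

import numpy as np

def incoherent_kernel(dz, wavelength=520e-9, n=512, pitch=8e-6, mask_radius=None):
    """Space-domain incoherent propagation kernel for a propagation distance dz (meters)."""
    fx = np.fft.fftfreq(n, d=pitch)
    FX, FY = np.meshgrid(fx, fx)
    band = FX**2 + FY**2 < (1.0 / wavelength) ** 2                    # band limit (no evanescent waves)
    kz = (2 * np.pi / wavelength) * np.sqrt(
        np.clip(1 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2, 0, None))
    H = np.where(band, np.exp(1j * kz * dz), 0)                       # coherent frequency-domain kernel
    psf = np.abs(np.fft.ifft2(H)) ** 2                                # incoherent (intensity) response
    if mask_radius is not None:                                       # optional circular space-domain mask
        x = np.fft.fftfreq(n) * n * pitch
        X, Y = np.meshgrid(x, x)
        psf = psf * ((X**2 + Y**2) < mask_radius**2)
    return psf / psf.sum()                                            # normalize energy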
EC-H enhances the HOGD algorithm with temporal multiplexing and dynamic pupil modeling. Denote the pixel pitch as Δ, the number of diffraction orders to model along each axis as O, the total number of frames to time multiplex as T, the total number of circularly-shaped pupils to simultaneously optimize as P, the radius of a pupil as r, and the support of the eye box as {x ∈ ℝ: x_min + r ≤ x ≤ x_max - r; y ∈ ℝ: y_min + r ≤ y ≤ y_max - r}, where (x, y) is the center of the pupil and x_min, x_max, y_min, y_max are the minimum and maximum limits along the x- and y-axes that define the boundary of the eye box. Throughout optimization, we maintain a set of P_fix fixed pupils that forms a uniform pupil sampling grid over the eye box, forcing the energy to be structurally distributed across the whole eye box. At each iteration, we also generate P_rand = P - P_fix random pupils to account for pupil variations within the lattice of the fixed pupils (see Fig. S7 for a visualization).
For the t-th SLM phase pattern φ_t and the p-th pupil mask A_p, the field at a propagation distance d is given by:
f_t,p(φ_t; d) = ∬ G(f_x, f_y; φ_t) W_p(f_x, f_y; d) e^{i 2π (f_x x + f_y y)} df_x df_y,
G(f_x, f_y; φ_t) = ∑_{j,k ∈ S_O} α_{j,k} ℱ{e^{i φ_t}}(f_x + j/Δ, f_y + k/Δ),
W_p(f_x, f_y; d) = ℋ(f_x, f_y; d) sinc(π Δ f_x) sinc(π Δ f_y) A_p,
A_p(f_x, f_y; c_x, c_y, r_p) = 1, if (f_x - c_x)^2 + (f_y - c_y)^2 < r_p^2
A_p(f_x, f_y; c_x, c_y, r_p) = 0, otherwise
where S_O denotes the set of modeled diffraction orders, α_{j,k} is the relative amplitude of order (j,k), ℱ is the Fourier transform, ℋ is the band-limited propagation kernel defined above, and (c_x, c_y) and r_p are the center and radius of the p-th pupil.
Given a target incoherent focal stack {I_d | d = 1,…,D}, we use gradient descent to optimize the batch of time-multiplexed holograms with the objective
{φ_t | t = 1,…,T} = argmin ∑_{d=1}^{D} ∑_{p=1}^{P} ‖ (s · s̃ / s̄) √((1/T) ∑_{t=1}^{T} |f_t,p(φ_t; d)|^2) - I_d ‖,
where s is an optimizable global scale to match the total field intensity with the targets, s̃ is a non-optimizable per-pixel scale that compensates for the non-uniformity of the incident illumination <cit.>, and s̄ is an optional non-optimizable normalization scale that accounts for pupil size variation (see Supplement 1 for details and the improvements we made over previous works).
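The fixed-plus-random pupil sampling described above can be sketched as follows; the eye-box extent, base radius, and pupil counts mirror the values reported in the experiments below, while the uniform sampling of the random pupil centers is an assumption:

import numpy as np

def sample_pupils(xmin=-4.0, ymin=-4.0, xmax=4.0, ymax=4.0,
                  base_radius=2.0, n_fixed_side=3, n_random=16, rng=None):
    """Return a list of (cx, cy, r) pupils (mm): a fixed uniform grid plus random pupils
    whose radii vary between half and twice the base radius."""
    rng = rng or np.random.default_rng()
    pupils = []
    xs = np.linspace(xmin + base_radius, xmax - base_radius, n_fixed_side)
    ys = np.linspace(ymin + base_radius, ymax - base_radius, n_fixed_side)
    for cx in xs:                                   # fixed pupils, identical at every iteration
        for cy in ys:
            pupils.append((cx, cy, base_radius))
    for _ in range(n_random):                       # random pupils, redrawn at each iteration
        pupils.append((rng.uniform(xmin, xmax), rng.uniform(ymin, ymax),
                       base_radius * rng.uniform(0.5, 2.0)))
    return pupils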
Our experimental setup uses a Holoeye Pluto SLM with a resolution of 1,080 × 1,920 and 8-bit phase control across visible wavelengths (see Fig. S1 for a schematic rendering). The SLM is mounted on a motorized translation stage (Thorlabs Z825B) to programmably shift position for focus control. Coherent illumination is provided by a FISBA RGBeam fiber-coupled laser with central wavelengths at 632 nm (red), 520 nm (green), and 450 nm (blue). A 4f system with lenses of 80 mm (first) and 200 mm (second) focal lengths is used to relay and magnify the image to fill a full-frame camera sensor (Sony A7III). An optional iris (Thorlabs ID12), mounted on a manual xz-stage (Thorlabs XRN25P/M, XRN-XZ/M), is placed at the Fourier plane to mimic an eye pupil. When the first lens of the 4f system acts as an eyepiece, the central order of the red-light diffraction creates an eye box of approximately 6.4 × 6.4 mm at the Fourier plane. In unfiltered mode, the iris is absent. In pupil-mimicking mode, the iris is inserted and positioned at different locations.
During optimization, we consider the central 3 × 3 orders (O = 3). Orders higher than 3 are omitted as they contribute negligibly. For each scene, we optimize for 6 focal planes, typically chosen to have objects of interest in focus. For unfiltered results and pupil-mimicking results, we use 5 and 3 sub-frames for time multiplexing, respectively (see more details in Supplement 1).
Figure <ref> compares experimentally captured holograms using EC-H, time-multiplexed neural holography (TM-NH) <cit.>, and HOGD <cit.> in the unfiltered mode. We use our LDI-computed focal stacks to supervise the optimization of TM-NH, as the code to generate their focal stack is not yet publicly available. EC-H outperforms TM-NH by effectively reducing replicas and rainbow-like artifacts caused by wavelength-dispersed high-order diffractions. This leads to tangibly improved image contrast while preserving the depth-dependent incoherent defocus throughout the 3D volume. Unlike TM-NH, HOGD does not suffer from high-order diffractions. However, it produces coherent defocus responses due to a lack of supervision for out-of-focus regions. For all methods, time multiplexing effectively reduces the speckle noise. Results of additional examples and focal sweep videos can be found in Supplement 1 and Video 1.
To optimize for image quality across the eye box, consider an 8 × 8 mm eye box given by (x_min, y_min, x_max, y_max) = (-4, -4, 4, 4) mm, a size bigger than the theoretical maximum, as we show that modeling high-order diffractions effectively extends the eye box formed by the central diffraction order. We use P = 25, P_fix = 9 (a 3 × 3 grid), and P_rand = 16. We set r = 2 mm as the base pupil size to form the uniform sampling grid for the fixed pupils. For the random pupils, their locations are randomly selected within the eye box, and their sizes are scaled between half and twice the base size to account for pupil variations within the fixed pupil lattice. Figure <ref> compares experimentally captured holograms obtained from EC-H, pupil-aware holography (PA-H) <cit.>, and Pupil-HOGD <cit.>. Note that the original PA-H paper optimizes coherent defocus for their two-plane results. As Pupil-HOGD covers reproducing coherent defocus, we upgrade PA-H to reproduce incoherent defocus to emphasize the other improvements made by EC-H.
At eccentric pupils, the Pupil-HOGD method suffers from a significant loss of intensity when the pupil is shifted and reduced to the extent that it fails to fully encompass the DC term. This is evident in the transition from a 6-mm pupil in row 2 to a 4-mm pupil in row 3, both shifted to the center top of the eye box. This loss occurs because Pupil-HOGD solely regularizes image quality at the center pupil, causing an imbalanced energy distribution in the frequency domain (see Supplement 1). In contrast, PA-H exhibits pronounced rainbow-like artifacts with reduced contrast, a faster decay in image brightness (see row 3), and a stronger reduction in the extent of reproduced defocus blur compared to EC-H (see the orange box in column 1, row 3 versus the green box above, with the same regions in column 3). These artifacts are caused by PA-H's vulnerability to high-order diffractions and the absence of fixed pupils during optimization, which further pushes the energy spectrum structurally toward the mid/high frequencies (see row 1 bottom right for holograms and Supplement 1 for spectrum analysis). EC-H better maintains the image intensity and quality as the pupil moves away from the center and shrinks. It also produces artifact-free images outside the eye box created by the central order. The extended eye box allows the perception of noticeable motion parallax (see Supplement 1/Video 3). Additional results can also be found in Supplement 1.
In conclusion, we demonstrate that EC-H can effectively improve the display ergonomics of computer-generated holography by synergizing and advancing efforts made in recent works (see discussions and limitations in Supplement 1).
Future work can build on top of EC-H to further improve its performance. First, the space-bandwidth product (i.e., etendue) that determines the product of the eye box and field of view shall be further enhanced for more immersive VR/AR experiences. Recent applications of high-resolution random <cit.> or engineered <cit.> phase masks for etendue expansion can be incorporated for joint optimization. Second, EC-H can be accelerated using deep neural networks for real-time hologram generation <cit.>. Third, EC-H can be extended to model multi-color holograms <cit.> to support modulation of polychromatic illumination for higher image brightness without using more powerful lasers.
Acknowledgments We thank Byounghyo Lee for sharing their incoherent focal stack rendering code for comparison. L.S. is supported by Meta Research PhD Fellowship; D.R. is supported by MIT EECS Alumni Fellowship.
Disclosures The authors declare no conflicts of interest.
Data Availability Source code and data needed to evaluate the conclusions will be made timely and publicly available at: https://github.com/liangs111/ergonomic-centric-holography
Supplemental document See Supplement 1 for supporting content.
|
http://arxiv.org/abs/2306.11013v1
|
20230619152341
|
A lunar reconnaissance drone for cooperative exploration and high-resolution mapping of extreme locations
|
[
"Roméo Tonasso",
"Daniel Tataru",
"Hippolyte Rauch",
"Vincent Pozsgay",
"Thomas Pfeiffer",
"Erik Uythoven",
"David Rodríguez-Martínez"
] |
cs.RO
|
[
"cs.RO"
] |
Roméo Tonasso [espace], Daniel Tataru [espace], Hippolyte Rauch [espace], Vincent Pozsgay [espace], Thomas Pfeiffer [espace], Erik Uythoven [espace], David Rodríguez-Martínez [aqua]
[espace] eSpace - EPFL Space Center, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne 1015, Switzerland
[aqua] Advanced Quantum Architecture Laboratory (AQUA), École Polytechnique Fédérale de Lausanne (EPFL), Neuchâtel 2000, Switzerland
An efficient characterization of scientifically significant locations is essential prior to the return of humans to the Moon. The highest resolution imagery acquired from orbit of south-polar shadowed regions and other relevant locations remains, at best, an order of magnitude larger than the characteristic length of most of the robotic systems to be deployed. This hinders the planning and successful implementation of prospecting missions and poses a high risk for the traverse of robots and humans, diminishing the potential overall scientific and commercial return of any mission. We herein present the design of a lightweight, compact, autonomous, and reusable lunar reconnaissance drone capable of assisting other ground-based robotic assets, and eventually humans, in the characterization and high-resolution mapping (∼ 0.1 m/px) of particularly challenging and hard-to-access locations on the lunar surface. The proposed concept consists of two main subsystems: the drone and its service station. With a total combined wet mass of 100 kg, the system is capable of 11 flights without refueling the service station, enabling almost 9 km of accumulated flight distance. The deployment of such a system could significantly impact the efficiency of upcoming exploration missions, increasing the distance covered per day of exploration and significantly reducing the need for recurrent contacts with ground stations on Earth.
robotics aerobot system design mapping lunar exploration extreme environments
§ INTRODUCTION
NASA has recently selected 13 candidate landing sites in the south polar region of the Moon for their Artemis III mission <cit.>, a mission aimed at sending the first group of humans to the lunar surface since the Apollo program. Prior to human exploration of the Moon and in line with the goals of the new Artemis program, a series of upcoming robotic missions spearheaded by both national space agencies and private corporations are also aiming at characterizing and prospecting a number of relevant locations on the lunar surface. Among these, south-polar Permanently and Transiently Shadowed Regions (PSRs and TSRs, respectively) and lunar skylights appear as primary candidates, potentially bearing answers to fundamental questions on the origin and formation of the Moon <cit.>, harboring valuable resources for in-situ extraction <cit.>, and providing shelter beyond Earth where humans could finally settle <cit.>. To accomplish all of the above, efficient exploration of scientifically and commercially significant locations is essential.
Efficient exploration means deploying highly autonomous robotic systems with the capacity to traverse longer distances ( 100 km) under increasingly constrained time windows (e.g., shorter day-light cycles on high-latitude regions due to the low lunar obliquity), to effectively operate under extreme environments (i.e., across unstructured, dynamic, and hazard-abundant landscapes with, at times, lack of natural illumination, cryogenic temperatures, and subject to the impact of meteorites and high-energy radiation) of which fewer and/or lower quality data are readily available, and to do so in a cost-effective manner (e.g., 6 meters/$100k invested for a single lunar mission).
One of the key enablers of efficient exploration is having access to high-resolution topographical and geomorphological data. The highest resolution images of the lunar surface acquired from lunar orbit to date have been measured by the Narrow Angle Cameras (NACs) onboard NASA's Lunar Reconnaissance Orbiter (LRO). NACs are capable of mapping regions of the Moon down to a spatial resolution of 0.5 m/px <cit.>. This is achieved, however, under optimal lighting conditions. When resolving internal features of PSRs and TSRs, the prospect of achieving this level of resolution from orbit is unlikely. Imaging shadowed and poorly-lit areas on the surface requires longer exposure times, which paired with the increased shot noise and rapid movement of the satellites drastically worsens the overall signal-to-noise ratio (SNR) of the output images <cit.>. Images taken by the LRO of unlit regions on the Moon display maximum spatial resolutions after resampling of ∼ 10 m/px <cit.>. Similar results were previously achieved by the Terrain Camera (TC) onboard JAXA's “Kaguya” Selenological and Engineering Explorer (SELENE) <cit.>. More recently, NASA's ShadowCam instrument currently operating onboard KARI's Korea Pathfinder Lunar Orbiter (KPLO) was specifically developed to capture images of PSRs at a maximum spatial resolution of 1.7 m/px <cit.>. And new learning-based image post-processing approaches, such as the Hyper-effect nOise Removal U-net Software (HORUS) <cit.> developed by a team from ETH Zurich, University of Oxford, and NASA Ames Research Center, are being devised to artificially enhance the SNR of existing data sets while achieving improved spatial resolutions (∼ 1 m/px) on long-exposure images <cit.>.
Even though new technologies and approaches are significantly improving the quality of orbital measurements, current maps of these highly relevant lunar regions are still too coarse for an optimal and efficient mission planning. Data at spatial resolutions equivalent to that of a factor of the characteristic length of the systems to be deployed—i.e., wheelbase, wheel track, or even wheel size for wheeled robots and stride or step length for legged robots and potentially humans—are required. The impossibility to resolve sub-meter hazards and/or precisely pinpoint local regions of interest from these images negatively impacts the efficacy and effectiveness of these missions, precluding the possibility to cover large distances, increasing overall mission risk, and diminishing the potential scientific or commercial return on investment on any given mission.
We present an alternative to traditional single-rover missions and previously presented concepts for long-distance coverage (see Section <ref> for details). Our concept aims at solving the issue of high-resolution data acquisition at large scales. In the following pages, we describe the outcome of a feasibility analysis and preliminary design study on the potential deployment of a lunar reconnaissance drone for exploring, characterizing, and high-resolution mapping (∼ 0.1 m/px) of targeted regions of interest.
§ BACKGROUND
The miniaturization of electromechanical components has rapidly impacted the development of small-sized, lightweight unmanned aerial vehicles (UAVs) on Earth. The spectrum of applications for which terrestrial drones are being used is constantly widening: from emergency management and surveillance <cit.> to marine monitoring <cit.>. UAVs benefit from ease of operation, fast deployment, and long-distance coverage while being economical and transportable.
Beyond Earth, the deployment of UAVs, or aerobots as they are often referred to in planetary exploration—a term that includes rotorcraft <cit.>, fixed-wing drones <cit.>, lighter-than-air vehicles <cit.>, and suborbital hoppers <cit.>—, has been a topic of discussion and conceptualization for exploring atmosphere-bearing celestial bodies ever since the first martian airplane concept was sketched at NASA's Jet Propulsion Laboratory <cit.>. Mars Helicopter Ingenuity, part of NASA's Mars 2020 mission <cit.>, has recently become the first unpiloted aircraft to perform a power-controlled flight on another planet <cit.>. Ingenuity's feat has brought about a renewed interest in the use of rotorcraft for exploration, enabling opportunities for new science, and redrawing concepts for upcoming missions to the red planet <cit.>.
On the Moon, however, aerobots demand an extra layer of complexity. The negligible atmosphere present on the Moon <cit.> requires the use of either electromechanical devices for short-distance skipping and pronking <cit.> or rocket engines for long-distance hopping and flying. In the latter category, a much lower number of concepts are described in the literature compared to that of martian aircraft.
Two studies conducted at the Massachusetts Institute of Technology (MIT) outlined a series of potential mission scenarios, operational concepts, and safe landing approaches for planetary hoppers <cit.> and described the development of TALARIS <cit.>, a lunar hopper prototyped for Earth-based testing propelled by cold-gas thrusters. Another group of students from the University of Southampton designed and tested a prototype of a Vertical Take-Off & Vertical Landing (VTVL) lunar hopper <cit.>. This hopper, dubbed Lunar Hopper Mk. II (Figure <ref>(a)), weighs 37 kg and it is mainly propelled by a single 400-N hybrid rocket engine and controlled by four nitrogen-based cold-gas thrusters. A group of spherical drones, called SphereX, has been proposed by a team from NASA's Goddard Space Flight Center for the cooperative exploration of underground lava tubes, caves, and other extreme locations <cit.>. SphereX robots are meant to be capable of rolling, hopping, and flying. With a diameter of 0.3 m and a total wet mass of just 3 kg, each SphereX has an anticipated payload carrying capacity of 1 kg and about 5 km of flight range on the Moon. Its propulsion system consists of a bi-propellant (RP1-H_2O_2) engine and eight H_2O_2-based attitude control thrusters. While extensive work has been conducted on the mobility and control of these spherical robots, questions remain unanswered as to the manufacturing and potential miniaturization of the propulsion system <cit.>. On the subject of lunar drones, Swamp Works, a group formed by engineers at NASA's Kennedy Space Center, also presented their own concept for what they called Extreme Access Flyers (Figure <ref>(b)). With a width slightly larger than 150 cm, these drones are equipped with cold-gas thrusters for take-off and landing (TOL) and attitude control <cit.>. Another concept has been introduced by Politecnico di Torino for an autonomous 12U suborbital lunar drone <cit.>. This drone would have a total estimated wet mass of 12 kg and be propelled by experimental H_2O_2-based monopropellant engines in a similar configuration to that of the SphereX (1 main, 8 for attitude control). Similar challenges associated with the miniaturization and maturation of the propulsion technology were found.
In the realm of commercial applications, Intuitive Machines, an American company founded in 2013, has recently signed a contract with NASA for the development of its μNova lunar hopper <cit.> (Figure <ref>(c)), a scaled-down version of the company's lander, Nova-C <cit.>. Once detached from the lander, the 30-to-50-kg μNova is designed to hop across PSRs and into lunar pits. The system reuses the same precision landing and hazard avoidance sensor suite and software used to land Nova-C on the lunar surface <cit.>.
§ CHALLENGES
The basic premise of our concept is founded on the current use, form factor, and operability of terrestrial UAVs while building on top of the work already conducted on lunar aerobots. We set out to design a fully autonomous, lightweight, compact, modular, adaptable, and reusable lunar drone capable of cooperating with other robotic assets or vehicles operating on the surface of the Moon. This presented the following challenges:
* Achieving full autonomy implied making the most of the limited computational capacity of existing space-qualified processing units while limiting the extent of sensory input required in flight and the complexity of the trajectories to be followed.
* For the drone to be as lightweight and compact as possible, fuel consumption had to be optimized and the amount of power required onboard needed to be heavily limited (e.g., by avoiding complex active thermal regulation systems but still being able to sustain the extreme thermal fluctuations of PSRs/TSRs <cit.>).
* Modularity, adaptability, and reusability meant being capable of hosting different instruments for different purposes, being capable of operating alongside multiple platforms in a wide array of mission scenarios, and being capable of achieving multiple flights per mission over multiple missions.
To further constrain our analysis, we grounded our study on particular features of the upcoming NASA's Volatiles Investigating Polar Exploration Rover (VIPER) Mission <cit.> and ESA's European Large Logistics Lander (EL3) <cit.>. These introduced the high-level preliminary requirements listed in Table <ref>.
§ CONCEPT OF OPERATIONS
With these challenges in mind, we envisioned a payload envelope (referred to herein as the “drone system”) formed by the drone and a so-called service station in the form of a towed trailer. In our proposed concept of operations (CONOPS), a prospecting rover approaches a region of which limited geomorphological information is available for an optimal traverse (e.g., a PSR) or one characterized by an extreme topography for the rover to access (e.g., the rim of a crater or the edge of a skylight). The rover detaches from the service station allowing its cover panels to open, revealing and releasing the support structures that hold the drone in place (see Figure <ref>). The drone is deployed, climbs to an altitude of 50 m above ground level, and proceeds to follow a predefined trajectory optimized for maximum coverage and minimum fuel consumption, flying to a maximum horizontal distance of 400 m away from the service station (flight simulations are described in Section <ref>). With the data acquired in flight, the drone returns to the original take-off location, landing safely back on the service station. This operation can then be repeated multiple times—up to 11 with our current concept—over the course of any given mission covering local areas where more or higher quality environmental information is needed. Local elevation maps of the surroundings can then be created by the rover or any other ground assets in the surroundings to more effectively characterize the area. A detailed flow chart of these operations is depicted in Figure <ref>.
The service station was devised as a necessary multifunctional element of the drone system. Its role is to act as a TOL pad, as a refueling and recharging station for the drone, as a shelter for the drone when not in operation, and as a depot for major data transmissions between the drone and the rover or any other surrounding robots or vehicles. The specifics of the design of the service station are described in Section <ref>.
One of the common drawbacks we encountered when evaluating existing concepts and mission architectures (Section <ref>) was the need to always take off, land, or hop from the ground. This has some clear benefits—longer flight range, potentially lower fuel consumption, and/or higher independence. In the case of lunar missions, however, we deemed interacting with the ground a major drawback for the following reasons: 1) its negative impact on potential surface and subsurface volatiles and other valuable elements present within the region of influence of the propulsion system <cit.>, 2) having to cope with excessive and slow-settling dust generated by firing the engines close to the ground <cit.> and its potential effect on orbiting spacecraft <cit.>, 3) the need for more sophisticated flight software solutions to enable safe autonomous landing on unknown, unstructured, uneven, and hazardous terrains affected by complex illumination conditions <cit.>, and 4) the non-negligible impact of extremely low surface temperatures (as low as 20 K within some PSRs <cit.>) on the overall size and weight of the system (e.g., the need to implement additional heaters and/or radiators). The concept of the service station came about as a potential solution that mitigates most of these issues. It enables a higher fuel-carrying capacity per mission with the addition of refueling tanks increasing its adaptability to different missions and reducing mission risks by simplifying the avionics since the drone is intended to always TOL from a well-known, flat, and dust-free location. The drone has been designed, however, to be capable of emergency landing on the ground in the event of a failure.
§ SYSTEM DESIGN
The system consists of a drone and its service station. The drone system is designed to assist other planetary robots, ground vehicles, and eventually humans operating on the surface into inaccessible environments or those of which scattered, low-resolution data is available. The drone system is designed for fast deployment and ease of operation. It is meant to be a low-cost solution that prevents excessive contamination of pristine locations with high scientific, and potentially high commercial, value. The full system (see Figure <ref>) has an overall wet mass of 100 kg and in its current configuration provides a total flight range of 9 km or a total of 11 flights without refueling the station.
§.§ Lunar Reconnaissance Drone
A high-level schematic of the different subsystems and components comprising the drone is presented in Figure <ref>. Connecting lines illustrate the different internal and external interfaces. The drone has a dimension of 450 x 480 x 378 mm and a total wet mass of 16.96 kg, of which 8.15 kg are devoted to the propulsion system alone, including 2.42 kg of total propellant and pressurant in a 2.5:1 ratio. The total estimated power consumption of the drone in flight yields 324 W. A standby mode will be used when docked with the service station keeping most of the subsystems either off or in a low-power mode. Details of the design of relevant subsystems are presented in the following sections. Space-qualified off-the-shelf components were favored to define a baseline for the design and size the system whenever possible.
§.§.§ Propulsion
The selection of the propulsion subsystem of the drone is a key driver for the full system design specifications and its operability. The type of propulsion needed had to provide enough thrust while being throttleable in the range between 10–100 N. In line with the engine technologies favored in previous designs (refer to Section <ref>), we ultimately opted for a system formed by four 22-N MR-106L monopropellant thrusters <cit.> fueled by hydrazine and a S-405 catalyst. The main specifications of these engines are listed in Table <ref>.
Monopropellant engines provide enough thrust, compared to electrical engines, while being refuelable, unlike hybrid engines. They present a good balance between simplicity and low mass compared to that of bi-propellant rocket engines and provide a higher specific impulse (ISP) than cold gas systems. A rapid and precise control of the drone also demanded a low minimum impulse bit (MIB) (≤ 80 mN·s based on preliminary simulations, refer to Section <ref>).
Unlike the one-main-plus-eight-attitude-thruster configuration presented by existing drone concepts, we distributed the engines similarly to conventional quadcopter drones, with each thruster located on top of the drone, 90-deg from each other. Different placement configurations, at times in combination with reaction wheels (RWs) and control moment gyroscopes (CMGs), were initially simulated and evaluated to find the optimal configuration. Despite the slightly higher propellant consumption of a 4-thruster system, it provides higher controllability at a lower overall mass than single-thruster alternatives paired with RWs/CMGs, avoids the need for actuated thrust vector control (TVC) systems <cit.>, and allows mounting the mapping sensor at the bottom of the drone pointing nadir. The thrusters are offset 45-deg with respect to the x-axis (angle α in Figure <ref>) to enable precise yawing of the drone, and they are angled 45-deg with respect to the z-axis (angle β in Figure <ref>). Fuel consumption and total flight time for different thruster angle variations were also evaluated in simulations to find the optimal configuration. While a lower β yields lower fuel consumption, the chosen 45-deg configuration is the most fuel-efficient option that also minimizes dust dispersion, avoids disturbances in the measurements, and keeps the outer structure and optics of the drone away from the 38-deg high-temperature and slim high-pressure regions of the engines' exhaust (see Figure <ref>).
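As a rough plausibility check of this layout (our own back-of-the-envelope estimate, not part of the original sizing), the vertical thrust margin implied by four 22-N thrusters canted 45 deg from the vertical axis and the 16.96-kg wet mass under lunar gravity can be computed as follows:

import math

THRUST_PER_ENGINE = 22.0          # N, MR-106L nominal thrust
N_ENGINES = 4
BETA = math.radians(45)           # cant angle with respect to the z-axis
WET_MASS = 16.96                  # kg, drone wet mass
G_MOON = 1.62                     # m/s^2, lunar surface gravity

vertical_thrust = N_ENGINES * THRUST_PER_ENGINE * math.cos(BETA)   # ~62 N
lunar_weight = WET_MASS * G_MOON                                   # ~27 N
print(f"vertical thrust {vertical_thrust:.1f} N, lunar weight {lunar_weight:.1f} N, "
      f"thrust-to-weight {vertical_thrust / lunar_weight:.2f}")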
The drone makes also use of a regulated helium-based pressurization system to maintain constant pressure in the propellant tank while in flight. Helium is stored in a separate tank and its flow is controlled via a pressure regulator. This system provides higher control over the output pressure and resulting thrust levels compared to blowdown systems, critical for the rapid, precise control of the drone. The drone is equipped with a 1.5-mm titanium spherical bladder tank for the propellant and a 0.6-mm tank of the same material and shape for the pressurant. Tanks are sized for a single 1000-m straight flight based on <cit.>. Final specifications for the tanks and the pressurization system are listed in Table <ref> and all include 20% margins and a factor of safety (FoS) of 2 to account for potential changes in the overall mass in later iterations, in-flight correction maneuvers, and variations in the trajectory not represented in current simulations (refer to Section <ref>).
§.§.§ Mapping instrument
Five different types of sensors were initially considered: optical camera, radar, scanning LiDAR, flash LiDAR, and thermal infrared camera. We ultimately deemed the use of a flash LiDAR the best option on which to base the design of our lunar drone concept. LiDAR technology achieves higher resolution and better performance under rapidly varying lighting conditions than conventional optics and radar. As mentioned before, high-signal, high-resolution images of unlit regions require longer exposure times, a high dynamic range, high frame rates, and, if the aforementioned requirements cannot be met, the use of additional light sources to artificially illuminate the scene. Emerging technologies, such as event-driven cameras <cit.> and quantum sensing devices <cit.>, are promising alternatives with particularly high performance under conditions of poor or rapidly varying lighting and fast movement <cit.>. The readiness level for space of these technologies, however, is at the time of writing still too low to serve as the basis for our design. Unlike conventional scanning or rotating LiDARs, flash LiDARs do not require any moving parts, illuminating the whole scene in single flashes. Currently, lightweight flash LiDARs ( 4 kg) are being developed for space applications and are expected to become available in the near future <cit.>. We based our design on the MILA BB model from the Swiss Center for Electronics and Microtechnology (CSEM) <cit.>, with an objective mass under 2 kg and a maximum power consumption of 35 W. It is important to note that, in order to configure the drone, we assumed the optical elements can be physically separated from the control electronics of the flash LiDAR for better placement and a more compact configuration.
The selection of the flash LiDAR also introduced the need to fly at a constant altitude of ∼ 50 m (typical LiDAR range) and to do so at a maximum horizontal speed and a maximum pitch angle of 30 m/s and 24-deg <cit.>, respectively. The former was estimated based on the need to comply with R5 (refer to Table <ref>) alongside an expected sample rate of 300 Hz <cit.>.
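A short back-of-the-envelope check of that speed limit (our own illustration, assuming R5 corresponds to the ~0.1 m/px mapping resolution quoted in the abstract):

SAMPLE_RATE_HZ = 300.0            # expected flash LiDAR sample rate
ALONG_TRACK_SPACING_M = 0.1       # assumed ground spacing required by R5

max_speed = ALONG_TRACK_SPACING_M * SAMPLE_RATE_HZ   # = 30 m/s, matching the stated limit
print(f"max horizontal speed: {max_speed:.0f} m/s")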
§.§.§ Electrical power system
The drone has a maximum peak power consumption of 324 W. The outcome of a series of flight simulations predicts a total flight time of 140 s per flight (see Section <ref> for details), on top of which a margin of 30% (i.e., ∼180 s) was used to size the batteries. Imposing 10 battery charge/discharge cycles with a depth of discharge of 90%, the required battery capacity was estimated to be ∼21 Wh. We ultimately opted for the space-proven iEPS Electrical Power System from ISISpace containing a Lithium-ion battery pack that provides 22.5 Wh <cit.>. It is important to note that the drone was not devised to host an internal power generation system as its batteries will be charged by the service station in between flights (details in Section <ref>).
§.§.§ Avionics
For the drone avionics, we defined a centralized data architecture in which all the different electronic components are connected to an onboard computer (OBC) (see Figure <ref>). The OBC sends all commands to the active components of the drone and receives housekeeping data from pressure and temperature sensors. The OBC is also in charge of storing all mapping data and the measurements gathered in flight. We used an ISISpace 400 MHz ARM9 OBC as a reference for the design due to its very low weight and power consumption while providing up to 32 GB of storage <cit.>. The drone also makes use of a high-accuracy (biases 0.3^∘/h for the gyroscopes and 0.05 mg for the accelerometers), low noise (0.15^∘/√(h)) STIM377H inertial measurement unit (IMU) from SAFRAN to compute the drone attitude during flight <cit.>.
§.§.§ Communications
The Command & Data Handling (CDH) subsystem in charge of the communication between the drone and the service station is divided into two modes: in-flight mode and docked mode. The drone is designed to operate fully autonomously. Data is only shared with the service station, which communicates with the serviced rover. Communications with ground stations on Earth, or potentially new lunar orbiting stations, are expected to take place through the rover itself. The data acquired by the flash LiDAR—estimated to be about 20.5 GB per flight including an expected 25% compression ratio <cit.>— is temporarily stored by the drone while part of it is directly processed onboard to determine flight parameters, such as position and altitude, and to be used by the hazard detection & avoidance module during flights in more complex environments such as the inside of lunar pits. The bulk of data acquired in flight will be transferred to the service station by a high-speed data cable after each flight (docked mode). The data sent to the service station during flight is limited to drone health, position tracking, power, and propellant consumption. Basic commands sent through the service station can be also received by the drone during flight such as service station housekeeping and safety checks. For this, the drone makes use of a programmable wireless UHF radio transceiver from Nanoavionics and a Zigbee antenna operating at 440 MHz and with a maximum bandwidth of 200 kbps over a line-of-sight up to 1 km.
§.§.§ Thermal control
The thermal design of the drone is particularly challenging, with internal temperatures increasing and dropping rapidly due to the low volume available and the extreme temperatures it may be exposed to during a mission. We conducted a series of preliminary estimations of evacuated thermal power and temperature variations during flight as given by
dT = ((Q̇_rad + Q̇_gen) / (m · c_p)) · dt,
where Q̇_rad is the radiated heat rate, Q̇_gen represents the sum of both incident and internally generated heat, m is the drone mass, and c_p is its specific heat capacity (assumed to be 900 J/(kg·K)). For all the calculations, the background black-body radiation is estimated to be emitted at T_∞ = 4 K (deep space), and the drone is considered a gray body with emissivity, ϵ, and absorptivity, α, equal and constant across all wavelengths, with an initial uniform temperature of 5^∘C and a cylindrical shape with a surface area of 1.021 m^2. Preliminary calculations of emitted heat transfer rates resulted in a maximum allowed joule heating from electronic components and residual heating from the firing of the thrusters of 500 W. Beyond this number, heat would not be effectively evacuated without the use of radiators. Figure <ref> displays the temperature evolution with respect to time, thermal source power, and emissivity/absorptivity, as well as a detailed temporal evolution of the drone temperature for an estimated Q̇_gen = 500 W and ϵ = α = 0.8, representative of white paint.
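A sketch of this lumped-capacitance estimate is given below; the explicit Euler step and the use of the 16.96-kg wet mass are our assumptions, while the surface area, emissivity, specific heat, background temperature, and 500-W heat load follow the values stated above:

import numpy as np

SIGMA = 5.670e-8                  # Stefan-Boltzmann constant, W m^-2 K^-4

def temperature_profile(q_gen=500.0, eps=0.8, area=1.021, mass=16.96,
                        cp=900.0, t_inf=4.0, t0=278.15, dt=1.0, duration=3600.0):
    """Explicit-Euler integration of dT = (Q_rad + Q_gen) / (m * cp) * dt,
    with gray-body radiation Q_rad = eps * SIGMA * area * (t_inf**4 - T**4)."""
    T = np.empty(int(duration / dt) + 1)
    T[0] = t0
    for k in range(len(T) - 1):
        q_rad = eps * SIGMA * area * (t_inf**4 - T[k]**4)   # net radiated power, negative when warm
        T[k + 1] = T[k] + (q_rad + q_gen) / (mass * cp) * dt
    return T

# Radiative equilibrium for 500 W and eps = 0.8: T ~ (q_gen / (eps*SIGMA*area))**0.25 ~ 322 K (~49 degC).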
With this in mind, the different components of the drone will be maintained within their operating temperature ranges (see Table <ref>) by means of flexible electrical Polyamide/Kapton heaters. These provide a lower weight, lower power alternative to radiators. The selection of materials needs to be carefully curated to optimize the properties of all passive components (i.e., c_p, ϵ, and α's). Multilayer insulation for the external surfaces and thermal straps inside the drone are used to effectively distribute the heat. Special paints and surface coatings can be used to control the emissivity and absorptivity of the different drone surfaces.
§.§.§ Structure and wire harness
The internal structure of the drone is inspired by the design of ESA's Copernicus Sentinel 2a satellite <cit.>. It is formed by a skeleton of three composite plates on which all the different elements of the drone are assembled. Carbon fiber legs, similar to those used by NASA's Ingenuity Helicopter <cit.>, are fixed to the bottom plate to assist during landing, help position the drone correctly once on the service station (refer to Section <ref>), and also touch down safely on the ground in the event of failure requiring an emergency landing away from the service station. The electronics are placed between the tanks of the propulsion system. They contain the optics of the flash LiDAR, the OBC, the EPS, the transceiver, and the IMU, which are all mounted on a single removable electronic stack attached to the vertical plate of the internal structure so that it can be accessed and disassembled easily, facilitating troubleshooting operations during testing. External panels covered in multilayer insulation are used to protect and thermally isolate as much as possible the internal elements of the drone from incident radiation.
§.§ Drone Service Station
As previously introduced, the service station was devised as a solution to mitigate most of the mission risks associated with a direct interaction with the lunar surface. While different use cases were initially evaluated (e.g., mounted on top of or deployed by a rover <cit.>), the final design presents a service station in the form of a 2-wheel towed trailer. This solution increases the overall mass of the drone system and could potentially impair maneuverability but simplifies interfaces, reduces risks and design complexities, and enables the parallel use of the serviced vehicle while the drone is in flight, making operations more efficient (see Figure <ref>).
The drone service station has an overall size of 92 x 92 x 106 cm and a total wet mass of 83 kg, of which 32 kg is devoted to the refueling subsystem (i.e., propellant, pressurant, tanks, fuel lines, sensors, and valves) and 20.5 kg to the batteries. The service station has been sized to allow the drone to perform ten additional 1000-m flights and to enable the whole system to stay up to 50 hours within shadowed regions in standby mode. It is worth highlighting that the system is not designed for the drone to be deployed from within PSRs or other extremely low-temperature regions. The station is powered by high-energy-density batteries providing 246.7 Wh/kg specific energy <cit.> fed by 1.29 m^2 of GaInP/GaAs/Ge triple-junction solar cells (considering 29% efficiency as a reference). The station is also equipped with the same OBC as the drone, paired with a Mercury RH3440 SSD capable of storing 440 GB of flight data—leaving enough room for all the compressed mapping data acquired over 11 flights and all the housekeeping data from both the drone and the service station—while being flight-proven, compact, and radiation tolerant over 100 krad <cit.>. A simplified version of the station architecture is depicted in Figure <ref>.
Four elements of the service station are considered key: 1) the refueling subsystem, 2) the towing interface with the serviced vehicle, 3) the TOL pad, and 4) the mobility subsystem of the trailer.
§.§.§ Refueling
A safe and automatic connection mechanism between the drone and the storage tanks is needed. Fortunately, as the space industry expands toward more sustainable solutions, so do the technologies devoted to in-orbit servicing (refueling, repair, and maintenance). In our case, the station fill and drain valve design was based on OrbitFab's Rapidly Attachable Fluid Transfer Interface (RAFTI) <cit.>. This solution can transfer two fluids independently with a maximum misalignment of 4 deg. It is fully compatible with hydrazine and helium at pressures up to 4.48 MPa and 20.68 MPa, respectively. Tanks are sized using the same materials as the drone. Apart from the tanks, the refueling subsystem makes use of a pressure regulator, similar to the one used on the drone, a fuel pump, and transducers to control pressure and temperature within the system. Depending on the exact pump selected, refueling can take between 1.5 and 90 minutes. In our case, we opted for a Flight Works 2212-M04C42 M-series pump for its low power consumption, which provides a maximum flow rate of 200 mL/min. A complete refueling of the drone tanks would, therefore, take slightly over 11 minutes.
§.§.§ Towing
The connection mechanism should be designed so that the service station can attach and detach automatically from the serviced rover as well as to fit a wide variety of ground vehicles and rovers. The towing mechanism could potentially also act as a data interface between the serviced rover and the drone system. After evaluating existing solutions <cit.> and due to the lack of flight-proven technologies for this particular use case, we proposed our own design (see Figure <ref>(b)). The ground vehicle side would feature two vertically actuated, parallel plates with concave cups to house a mating sphere in between, which is attached to the service station. To open the mechanism, the two plates would slide apart via lead screws. The data interface would be fitted in the middle by making the center of the sphere and the protruded beam hollow. The connector restricts pitch rotation to ±25-deg while maintaining free roll rotation and allowing ±80-deg yaw rotation angles to prevent potential issues associated with point turns and tight turns exercised by the towing rover. The spherical mating should compensate for a potential misalignment quite effectively and its diameter can be modified to optimize the design. The effects of dust on the degradation of materials and the performance of the mechanism were not evaluated as part of this study.
§.§.§ Take-off & landing pad
Alongside the towing mechanism, the TOL pad, its surrounding protective plates, and associated opening/closure mechanisms needed to be designed from scratch as no referenced mechanisms could be found in the literature. The service station should not only allow for TOL operations to take place safely but it should also provide a reliable solution for propellant, pressurant, power, and data to be transferred to and from the drone. Correct positioning and alignment of the drone on the pad is, therefore, key. For this, we ultimately opted for a solution that consists of a rotating base mechanism placed on an axial ball bearing and actuated by an electric motor—so as to yaw rotate the drone and align it with the valves and connectors present in the service station—paired with fixed passive pushers with free-rotating heads located at each of the external protective cover plates to translate the drone in the horizontal plane (see Figure <ref>(c)). This way, minor misalignments in the orientation and position of the drone after landing can be corrected. The main disadvantage of this solution is that the orientation of the drone after landing has to be precisely known.
The octagonal landing pad is made of a 4.4-mm, 5-layer composite panel. From top to bottom, it consists of a 0.15-mm Ti-6Al-4V plate sandwiched between two 0.25-mm ceramic coatings, high-emissivity on top and low-emissivity on the bottom, placed on a 2-mm perforated Kapton plate and a Ti-6Al-4V honeycomb sandwiched between perforated plates of the same material. The perforated honeycomb configuration minimizes the contact surface with the pad itself, thus lowering conductivity. The low-emissivity ceramic coating avoids radiating heat toward the inside of the station, while the high-emissivity coating on the upper side improves heat resistance. Ceramics can accumulate charges when exposed to radiation; however, in this case the drone cover is considered closed the vast majority of the time. The landing pad and the drone are surrounded and protected in standby by a set of four cover plates, which, when closed, help position the drone in place and protect it from environmental effects and, when open, act as flame diverters to minimize both the interaction between the thruster exhaust and the rest of the station and the generation of dust during take-off and landing.
§.§.§ Locomotion
The goal of the locomotion subsystem is to make the service station easily towable by a ground vehicle and robust enough to surmount the irregularities of the lunar surface. The locomotion subsystem, alongside the adjustable resting foot, has been designed so that the ground clearance of the towed station can be adjusted from 0 (stowed configuration) to 30 cm and the TOL pad can be leveled flat even on slopes up to 20-deg. The adjustable height makes the whole system suitable for transit to the Moon, versatile for easier fitting with varied ground vehicles, and adaptable for effective traversability across uneven terrain profiles. The station makes use of two 20-cm passive Ti-6Al-4V wheels, each featuring eleven 2-cm Inconel 718 grousers, and a resting fore foot. Each wheel is connected to an independent control arm actuated by a 36-cm ball-screw linear actuator and guided by two articulated links in a lozenge shape (see Figure <ref>(d)). The resting foot is used only when the towing rover unhooks from the trailer; the same ball-screw mechanism is used in this case, connected to a ground pad. Control arms and resting foot are both covered in flexible Tedlar film to prevent potential damage caused by lunar regolith and dust.
§ FLIGHT TRAJECTORY & CONTROL SIMULATIONS
§.§ Simulation setup
A flight simulation environment was developed using Matlab Simulink to model the drone 2D/3D kinematics and dynamics, Gazebo to visually display the drone and its environment, and ROS to communicate between the two and dynamically modify parameters such as mass flow rate. Six different modules were developed: (1) trajectory planner, (2) position control, (3) thrust control, (4) thruster simulation, (5) drone simulation, and (6) state estimator. A simplified version of the software architecture is shown in Figure <ref>.
The goal of these simulations is to determine flight trajectories, configuration parameters (thruster angles and positions), and drone kinematics/dynamics (position, orientation, velocity, and accelerations) for optimal fuel consumption and flight times. A simplified version of the drone with homogeneous mass distribution and moment of inertia over a perfectly flat ground surface with a global gravity value of 1.62 m/s^2 was used.
§.§ Flight control and propulsion dynamics
Monopropellant rocket engines use a liquid fuel contained in a pressurized tank which, upon contact with a catalyst, produces a high-pressure, high-temperature gas that is exhausted at very high velocity to generate thrust. The amount of thrust can be computed by
F = ṁ· v_e + (P_e - P_a) · A_e,
where F is the thrust produced by the engine, ṁ is the mass flow rate, v_e is the exit velocity of the exhaust gas, A_e is the exit area of the nozzle, and P_e and P_a are the exit gas pressure and the ambient pressure, respectively. Exit pressure and velocity are defined based on the pressure in the combustion chamber, P_c, the exit Mach number, M_e (see Eq. <ref>), and the specific heat ratio, γ.
A_e/A^* = (1/M_e) ((γ + 1)/2)^{-(γ+1)/(2(γ-1))} (1 + (γ-1)/2 · M_e^2)^{(γ+1)/(2(γ-1))},
P_e/P_c = (1 + (γ-1)/2 · M_e^2)^{-γ/(γ-1)},
v_e = M_e √(γ· R· T_e),
where R is the universal gas constant. The mass flow rate is controlled by a valve, represented in our simulation by a simple first-order linear model with a 90-ms time constant. Note that an important aspect of a precise modeling of the propulsion dynamics is the ignition, which is highly non-linear. Lacking access to reliable data to model this transition, we decided to exclude it from the simulations, but corresponding margins were added to the outcome of these simulations when sizing the system. The specific design parameters used for the simulations are listed in Table <ref>.
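As a cross-check of these relations, the thrust model can be evaluated numerically as in the following Python sketch. All numerical values, the function names, and the use of a specific gas constant R_spec are assumptions made for this illustration and do not correspond to the design parameters in Table <ref>.

import math

def nozzle_exit_conditions(M_e, gamma, P_c, T_e, R_spec):
    # Isentropic exit pressure from the chamber pressure and exit Mach number
    P_e = P_c * (1.0 + 0.5 * (gamma - 1.0) * M_e ** 2) ** (-gamma / (gamma - 1.0))
    # Exhaust velocity from the exit Mach number and exit temperature
    v_e = M_e * math.sqrt(gamma * R_spec * T_e)
    return P_e, v_e

def thrust(m_dot, v_e, P_e, P_a, A_e):
    # Momentum thrust plus pressure thrust: F = m_dot * v_e + (P_e - P_a) * A_e
    return m_dot * v_e + (P_e - P_a) * A_e

# Example with made-up numbers; the lunar ambient pressure is effectively zero
P_e, v_e = nozzle_exit_conditions(M_e=3.0, gamma=1.27, P_c=1.0e6, T_e=900.0, R_spec=380.0)
F = thrust(m_dot=0.01, v_e=v_e, P_e=P_e, P_a=0.0, A_e=2.0e-4)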
Basic functions of the flight control of the drone (i.e., TOL, stabilization, and waypoint navigation) are achieved via a cascaded Proportional-Integral-Derivative (PID) architecture with four independent PID controllers tracking the desired roll, pitch, yaw, and altitude. Position tracking of the drone in flight is implemented via two additional PIDs that convert the instantaneous position error into desired roll and pitch angles. No trajectory planning was implemented at this point.
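A minimal sketch of such a cascaded PID structure is given below; the gains, the state and setpoint representation, and the function names are illustrative placeholders rather than the tuning used in our simulations.

class PID:
    # Minimal PID controller with backward-Euler integral and finite-difference derivative
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Outer loop: position errors are converted into desired pitch/roll angles;
# inner loop: attitude and altitude PIDs produce the actuator commands.
pid_x, pid_y = PID(0.4, 0.0, 0.2), PID(0.4, 0.0, 0.2)
pid_roll, pid_pitch = PID(2.0, 0.1, 0.5), PID(2.0, 0.1, 0.5)
pid_yaw, pid_alt = PID(1.0, 0.0, 0.3), PID(3.0, 0.2, 1.0)

def control_step(state, setpoint, dt):
    des_pitch = pid_x.update(setpoint["x"] - state["x"], dt)
    des_roll = -pid_y.update(setpoint["y"] - state["y"], dt)
    u_roll = pid_roll.update(des_roll - state["roll"], dt)
    u_pitch = pid_pitch.update(des_pitch - state["pitch"], dt)
    u_yaw = pid_yaw.update(setpoint["yaw"] - state["yaw"], dt)
    u_thrust = pid_alt.update(setpoint["z"] - state["z"], dt)
    return u_roll, u_pitch, u_yaw, u_thrust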
§.§ Flight trajectories and propellant consumption
We analyzed propellant consumption on a number of flight profiles, namely: a one-way ballistic hop with a maximum height of 120 m, a constant-altitude flight with purely vertical and horizontal displacements, and a mixed flight combining a ballistic TOL with a horizontal flight at constant altitude. The baseline for these trajectories is set at 400 m of total horizontal displacement, a maximum horizontal velocity of 30 m/s, and a constant flight altitude of 50 m above ground level (AGL).
The results from this preliminary analysis are gathered in Table <ref>. Since a constant flight altitude, relatively low flight velocities, and avoiding contact with the ground are preferable for high-resolution mapping of the ground surface, the ballistic trajectory was discarded despite presenting the lowest fuel consumption. Adding a ballistic TOL to a constant-altitude flight profile reduces total fuel consumption by 23.8%, with a more efficient, though slightly higher, thrust firing.
The resulting optimal trajectory consists of a semi-ballistic take-off and landing (i.e., a short vertical TOL with ballistic ascent/descent, a slight alteration of the purely ballistic TOL used in the combined flight profile in Figure <ref>(a)), followed by a constant 50-m-altitude flight profile. The short vertical take-off and landing (∼5 m) provides enough room for correction maneuvers, particularly during landing operations on the service station, which demand high precision and accuracy. Simulations performed with a purely ballistic landing showed an average misalignment of ∼0.5 m with respect to the original take-off location. Semi-ballistic ascents/descents allowed us to reduce fuel consumption by over 13% compared to a fully vertical TOL. The total mass of propellant consumed during flight (see Figure <ref>(b)) for a total flight distance of 800 m at a maximum pitch angle of 24-deg and a maximum horizontal speed of 16.68 m/s is 1.86 kg for a total flight time of ∼140 s.
§ EVALUATION
Figure <ref> showcases the potential impact the lunar reconnaissance drone could have if it were deployed alongside NASA's upcoming VIPER mission. The orange arrow indicates the landing zone. Dark green lines define the planned rover traverse over the 106 days of the mission. The yellow, green, and red areas correspond to different ice depths (surface, shallow, and deep, respectively). Pink arrows were added to the original image to represent single drone flights, with their length at scale for an 800-m round-trip flight. We identified three different potential use cases for the drone in this particular scenario:
* At location 1, several predefined points of interest are located close to one another. Before the rover stops to examine one of them, the drone can be deployed to fly over the rest, acquire high-resolution images of the surroundings, and perform a preliminary characterization. This would allow the science team to prioritize among the points of interest (order and relevancy), a task impossible to achieve prior to the mission with the data currently available.
* At location 2, while the rover stops at its last predefined location, the drone can already be deployed to precisely map the next leg of the traverse.
* At location 3, the drone's maximum range is represented as a circular area, showing that it is capable of flying over the three types of ice depths in a single 140-s flight.
Given the 4.5–8 m Waypoint Driving steps of VIPER <cit.>, each requiring data and new commands to be sent and received from the ground to evaluate the path ahead, the deployment of such a system could significantly impact the efficiency of upcoming exploration missions. With the capacity to characterize and map a 400-m range at high resolution per drone deployment, recurrent contact with ground stations on Earth could be drastically reduced to about one contact per deployment, i.e., one every ∼2 km of exploration.
§ CONCLUSION
We described the outcome of a feasibility study and preliminary design of a lunar reconnaissance drone concept aimed at the cooperative exploration of highly relevant and extreme locations on the lunar surface, in particular those for which high-resolution (<1 m/px) geomorphological data does not yet exist. We based the design on upcoming lunar mission requirements, constraints, and priorities. The system consists of a drone and its service station, a 2-wheel towed trailer adaptable to operate alongside different ground vehicles and in charge of providing the drone with a take-off and landing pad, enough propellant, pressurant, and power for additional flights, and shelter from extreme temperatures and radiation when not in operation. We described in depth the design of the drone and provided high-level specifications of the whole system while sharing low-level details of key subsystems of the service station.
The results presented showcase the feasibility of the design and its expected impact. With under 100 kg of total wet mass (inc. the service station), the drone system is capable of performing 11 flights, mapping a total horizontal distance of ∼9 km without refueling the station. Space-proven, high-TRL, off-the-shelf components were used as a reference whenever possible. The custom design of certain elements was kept to a minimum and only used when no flight-proven solution could be found in the literature.
Given the preliminary nature of these results, some limitations are worth highlighting: 1) reusability is one of the core principles of the presented concept. While hydrazine-based thrusters were chosen as a baseline for the design of the drone due to its high TRL and commercial availability, it is our intention for the design to evolve toward more sustainable and reusable engines (e.g., H_2O_2-based propellants <cit.>); 2) simplified simulations and analyses were performed to achieve rough-order estimations of certain sizing values (mass, volume, power, data). In particular, a more exhaustive thermal characterization of the system would be required in upcoming phases of the project; 3) an extensive modeling of the performance of the engines was conducted but data is still required to define and develop specific control approaches and assess potential failure modes. Additional information is necessary with respect to the engines' ignition, throttleability, and degradation of the catalyst over time, information that at times is only available through testing.
The Lunar Reconnaissance Drone concept and the results presented herein showcase the need for innovative solutions that can significantly impact the efficiency of upcoming exploration missions by providing already planned and future missions with sub-meter resolution maps of relevant regions of interest.
|
http://arxiv.org/abs/2306.06637v1
|
20230611094531
|
PACER: A Fully Push-forward-based Distributional Reinforcement Learning Algorithm
|
[
"Wensong Bai",
"Chao Zhang",
"Yichao Fu",
"Lingwei Peng",
"Hui Qian",
"Bin Dai"
] |
cs.LG
|
[
"cs.LG"
] |
PACER: A Fully Push-forward-based Distributional Reinforcement Learning Algorithm
Wensong Bai^1, Chao Zhang^1,2, Yichao Fu^1, Lingwei Peng^1, Hui Qian^1, Bin Dai^1
^1 College of Computer Science and Technology, Zhejiang University, No. 38, Zheda Road, Hangzhou, 310027, Zhejiang Province, China
^2 Advanced Technology Institute, Zhejiang University, No. 38, Zheda Road, Hangzhou, 310027, Zhejiang Province, China
In this paper, we propose the first fully push-forward-based Distributional Reinforcement Learning algorithm, called Push-forward-based Actor-Critic-EncourageR (PACER).
Specifically, PACER establishes a stochastic utility value policy gradient theorem and simultaneously leverages the push-forward operator in the construction of both the actor and the critic. Moreover, based on maximum mean discrepancies (MMD), a novel sample-based encourager is designed to incentivize exploration.
Experimental evaluations on various continuous control benchmarks demonstrate the superiority of our algorithm over the state-of-the-art.
§ INTRODUCTION
Distributional Reinforcement Learning (DRL) considers the intrinsic randomness of returns by modeling the full distribution of discounted cumulative rewards <cit.>. In contrast to counterparts that model only the expected return, DRL algorithms can carefully capture the skewness, kurtosis, and multimodality of the return, which usually results in a more stable learning process and better performance <cit.>. The state-of-the-art (SOTA) has been achieved by DRL algorithms in various sequential decision-making and continuous control tasks <cit.>.
Recently, the rise of DRL has also catalyzed a large body of algorithmic studies under the actor-critic framework that leverage the push-forward operator to parameterize the return distribution in the critic step <cit.>.
The push-forward idea, which has played an important role in optimal transport theory <cit.> and in recent Monte Carlo simulations <cit.>, embodies an effective approach for modeling complicated distributions through sampling, and it plays a vital role in the distributional temporal-difference learning procedure of DRL <cit.>.
In this paper, we argue that adopting the push-forward operator merely in the critic network, as in conventional distributional actor-critic (DAC) algorithms, is far from sufficient to achieve optimal efficacy,
since the critic and the actor are tightly intertwined. Concretely, DAC algorithms are two-time-scale procedures in which the critic performs TD learning with an approximation architecture and, the other way around, the actor is updated in an approximate gradient direction based on information provided by the critic <cit.>. Thus, it is reasonable to conjecture that only by adopting highly expressive push-forward operators in both parts can the procedure yield enhanced performance. [Indeed, we have also observed alternative ways to enhance the expressiveness of policies in the literature; for example, semi-implicit mixtures of Gaussians have been proposed to model the actor policy, but the diagonal-variance simplification still hampers their modeling capability <cit.>.]
However, directly incorporating the push-forward operator to construct an actor is virtually infeasible in the current DAC framework, mainly due to the following two challenges.
* Gradient Construction.
Generally, policies equipped with the push-forward operator can only generate decision samples, so it is impossible to explicitly calculate their density functions. This breaks the policy update procedure in conventional DAC, which requires the log-density to construct the REINFORCE stochastic policy gradient <cit.>.
* Exploration Controlling. Based on maximum entropy principle <cit.>, conventional DAC algorithms highly rely on the entropy regularizer to encourage sufficient exploration during the learning process.
Nevertheless, as push-forward policies do not have an explicit density function, it is not feasible to directly calculate their entropy.
To bridge this gap, we propose a fully push-forward DRL algorithm, named Push-forward-based Actor-Critic-EncourageR (PACER) algorithm.
Our algorithm incorporates three key ingredients:
(1) an actor making decisions according to a push-forward policy transformed from a base distribution by Deep Neural Networks (DNNs), (2) a critic modeling return distributions with the push-forward operator and evaluating the policy via a utility function on the return distribution, and (3) an encourager incentivizing exploration by guiding the actor to reduce a sample-based metric, specifically the Maximum Mean Discrepancy (MMD), between its policy and a reference policy.
We summarize the main contributions as follows.
* PACER is the first DAC algorithm that simultaneously leverages the push-forward operator in both the actor and critic networks. PACER fully utilizes the modeling capability of the push-forward operator, resulting in a significant performance boost.
* A stochastic utility value policy gradient theorem (SUVPG) is established for the push-forward policy. According to it, stochastic policy gradient for PACER can be readily calculated solely with decision samples.
[SUVPG can be regarded as the policy gradient obtained under the reparameterization trick <cit.>, while the widely used REINFORCE gradient <cit.> is based on the log-derivative trick.
This suggests that SUVPG is applicable to a wide range of familiar policy gradient approaches, such as advantage variance-reduction <cit.> and natural gradient <cit.>.]
* A novel sample-based regularizer, based on MMD between the actor and a reference policy, is designed for efficient exploration in DRL. Additionally, we also implement an adaptive weight-adjustment mechanism to trade-off between exploration and exploitation for PACER.
Empirical studies are conducted on several complex sequential decision-making and continuous control tasks. Experimental results demonstrate that:
(1) the push-forward policy shows sufficient exploration ability and does not degenerate into a deterministic policy;
(2) the push-forward policy, together with the sample-based regularizer, suffices to ensure superior performance;
(3) PACER surpasses all baseline algorithms and achieves new SOTA results on most tasks.
The rest of this paper is organized as follows. We review the preliminaries in Sec. 2 and present the PACER algorithm in Sec. 3.
Empirical results are reported in Sec. 4 and conclusions are drawn in Sec. 5.
§.§.§ Related Works
Return Distribution Modelling.
In the early stage of DRL, the return distribution was usually restricted to a certain distribution class, such as the Gaussian or Laplace class <cit.>.
However, this restriction may lead to significant discrepancies between the chosen distribution class and the truth, thereby introducing substantial estimation errors during the value evaluation process <cit.>.
Recently, nonparametric methods have been investigated in depth to reduce the estimation error <cit.>.
<cit.> proposes a categorical representation, which utilizes the discrete distribution on a fixed support to model the random return.
Later, quantile return representation, e.g. Quantile Regression Deep Q-Network (QRN) <cit.>, Implicit Quantile Network (IQN) <cit.>, Fully Parameterized Quantile Function (FQF) <cit.>, are proposed to overcome the limitation of the fixed support.
Typically, this representation leverages the push-forward operator to dynamically adjust the quantiles of the return distribution, and it exhibits strong expressiveness for modeling complex return distributions.
Currently, the quantile representation is the principal way to model the return distribution, and it has been shown to yield low value-estimation errors in various studies <cit.>.
Distributional Actor-Critic algorithms.
DAC algorithms, based on a distributional version of the actor-critic framework, have achieved state-of-the-art performance in the DRL regime <cit.>.
The first DAC algorithm is the D4PG algorithm <cit.>, which is a distributional version of Deep Deterministic Policy Gradient (DDPG) algorithm <cit.> with categorical return distribution representation.
This method is later improved by using the quantile representation to replace the categorical representation by SDPG <cit.>.
In addition to D4PG/SDPG, which utilize deterministic policies, there is another category of entropy-regularized DAC algorithms known as Distributional Soft Actor-Critic (DSAC) <cit.>.
DSAC algorithms leverage stochastic policies and an entropy regularizer to enhance exploration <cit.>.
Combined with the quantile representation, DSAC algorithms usually achieve better performance compared to DAC algorithms with deterministic policies <cit.>.
Utility functions in DRL.
Utility functions are commonly employed in DRL algorithms to quantify the satisfaction with an agent's policy.
Typically, there are two approaches to utilizing utility functions in DRL: (1) Reward-reshape type functions, which reshape individual reward distributions to guide policy <cit.>; And (2) Risk-measure type functions, which map the whole cumulative return distribution to a real number to generate risk-sensitive policies <cit.>.
Commonly used utility functions include:
the mean-variance criterion <cit.>, entropic criteria <cit.>, and distorted expectations <cit.>.
Although the selection of utility functions is highly task-related, the effectiveness of leveraging utility functions in DRL algorithms has been demonstrated by various studies <cit.>.
Among existing utility functions, the Conditional Value at Risk (CVaR) <cit.> is the most widely used one; it belongs to the distorted-expectation family and is usually adopted to improve the robustness of DRL algorithms.
§ PRELIMINARIES
We model the agent-environment interaction by a discounted infinite-horizon Markov Decision Process (𝒮,𝒜,R,𝒫_R,𝒫_𝒮,μ_0,γ),
where 𝒮 is the state space,
𝒜 is the action space, and both are assumed to be continuous.
R(s,a) ∼𝒫_R(·|s,a) denotes the random reward on the state-action pair (s,a),
𝒫_𝒮 is the transition kernel,
μ_0 is the initial state distribution, and γ∈ (0,1) is the discounted factor.
A stationary stochastic policy π(·|s) ∈𝒫(𝒜) gives a probability distribution over actions based on the current state s.
The state occupancy measure of s w.r.t. a policy π is defined by d_μ_0^π(s) := ∑_t=0^∞γ^t 𝒫(s_t=s|μ_0,π). The random return Z^π(s,a) ∈𝒵 of policy π from the state-action pair (s,a) is defined as the discounted sum of rewards R(s_t,a_t) starting from s_0 = s, a_0 = a, i.e., Z^π(s,a) := ∑_t=0^∞γ^t R(s_t,a_t)|s_0 = s, a_0 = a.
Note that the classic state-action Q^π value function is actually the expectation of Z^π, where the expectation takes over all sources of intrinsic randomness <cit.>.
While under the distributional setup, it is the random return Z^π itself rather than its expectation that is being directly modelled.
The cumulative distribution function (CDF) for Z^π(s,a) is denoted by F_Z^π(s,a)(z) := 𝒫(Z^π(s,a) ≤ z), and its inverse CDF is denoted by F_Z^π(s,a)^-1(τ) := inf_z ∈ℝ{z: F_Z^π(s,a)(z) ⩾τ}.
§.§ Distributional Bellman equation
The distributional Bellman equation describes a recursive relation on Z^π(s,a), similar to the Bellman equation on the Q function <cit.>,
Z^π(s,a) 𝒟= R(s,a) + γ Z^π(S',A'),
where 𝒟= denotes the equality in distribution.
Based on (<ref>), a distributional Bellman operator can be constructed for the distributional Temporal-Difference (TD) update in DRL.
Here, we first introduce the push-forward operator and then define the distributional Bellman operator according to it.
For a continuous map T: 𝒳→𝒴, we define its corresponding push-forward operator as T_♯: ℳ(𝒳) →ℳ(𝒴), where ℳ(𝒳) and ℳ(𝒴) denotes the set of probability measures on the domain 𝒳 and 𝒴, respectively.
Specifically, given a probability measure 𝒫_1 ∈ℳ(𝒳), 𝒫_2 = T_♯𝒫_1 satisfies:
∫_𝒴 h(y) d𝒫_2(y) = ∫_𝒳 h(T(x)) d𝒫_1(x), ∀ h ∈𝒞(𝒴),
where 𝒞(𝒴) denotes the collection of all continuous bounded functions on 𝒴.
Actually, the push-forward operator associated with DNNs has been widely used in the machine learning literature to approximately generate samples for complex distributions <cit.>.
Here, we use it to define the distributional Bellman operator 𝒯_d on the random return Z^π(s,a).
Specifically, 𝒯_d:ℳ(𝒵)→ℳ(𝒵) is defined as the push-forward operator associated with the affine map f_r,γ(x) = r + γ x on x ∈ℝ, i.e.,
𝒯_d𝒫(Z^π(s,a)) = (f_R(s,a),γ)_♯𝒫(Z^π(s',a')),
where s' ∼𝒫_S(·|s,a) and a' ∼π(·|s).
Furthermore, the contraction-mapping property of 𝒯_d under the supremum p-Wasserstein metric w̅_p is shown in <cit.>, i.e.,
w̅_p(𝒯_d𝒫(Z), 𝒯_d𝒫(Z')) ≤γw̅_p(𝒫(Z), 𝒫(Z')).
§.§ The Implicit Quantile Network and Distributional TD Learning
Among the quantile representation of the return distribution, the Implicit Quantile Network (IQN) <cit.> is the most widely used one in DRL algorithms.
Basically, IQN utilizes the push-forward operator to transform a sample from uniform distribution U(0,1) with a DNN to the corresponding quantile values sampled from the return distribution.
Thus, we approximate the return distribution with a IQN-induced implicit quantile distribution, which is given as follows.
Given a set of sampled quantiles τ̃ = {τ_1,...,τ_N}i.i.d∼ U(0,1) sorted by τ_i<τ_i+1. The implicit quantile distribution Z_π(s,a,τ̃;θ_z), that induced by IQN with parameters θ_z, for a random return Z_π(s,a) is defined as a weighted mixture of N Diracs:
Z_π(s,a,τ̃;θ_z) := ∑_i=0^N-1(τ_i+1-τ_i)δ_z(s,a,τ̂_i;θ_z),
where z(s,a,τ̂_i;θ_z) = F_Z_π(s,a)^-1(τ̂_i;θ_z) with τ̂_i := τ_i+1+τ_i/2, and F_Z_π(s,a)^-1(·;θ_z) is the inverse CDF of Z_π(s,a).
The distributional TD learning procedure can be carried out by minimizing the following Huber quantile regression loss <cit.>,
ρ_τ^κ(δ_i j) =
 (1/(2κ)) |τ-𝕀{δ_i j<0}| δ_i j^2,              if |δ_i j| ≤ κ,
 |τ-𝕀{δ_i j<0}| (|δ_i j| - κ/2),                otherwise.
In (<ref>), κ is a constant threshold, and δ_i j is the pairwise TD-error between the implicit quantile approximations of two successive steps as follows.
δ_i j(s,a)=r(s,a)+γ z (s', a',τ̂_i; θ̂_z) - z(s,a,τ̂_j;θ_z),
where a' ∼π(·;s'), τ̂_i and τ̂_j are calculated based on two randomly sampled quantiles τ̃' and τ̃.
Note that two different IQNs, z(·;θ̂_z) and z(·;θ_z), are adopted separately in (<ref>), which is similar to the target network trick <cit.> commonly used in the RL literature.
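For reference, a PyTorch-style sketch of the Huber quantile regression loss in (<ref>) is given below; the tensor shapes and the reduction over quantile indices are illustrative choices and not necessarily the exact implementation used here.

import torch

def quantile_huber_loss(pred_quantiles, target_quantiles, taus, kappa=1.0):
    # pred_quantiles:   (batch, N)  quantile values z(s, a, tau_j)
    # target_quantiles: (batch, N') targets r + gamma * z(s', a', tau_i)
    # taus:             (batch, N)  quantile fractions of the predictions
    td_errors = target_quantiles.unsqueeze(2) - pred_quantiles.unsqueeze(1)  # (batch, N', N)
    abs_td = td_errors.abs()
    huber = torch.where(abs_td <= kappa,
                        0.5 * td_errors ** 2 / kappa,
                        abs_td - 0.5 * kappa)
    # asymmetric quantile weight |tau - I{delta < 0}|
    weight = (taus.unsqueeze(1) - (td_errors.detach() < 0).float()).abs()
    return (weight * huber).sum(dim=2).mean(dim=1).mean()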
§ THE PUSH-FORWARD-BASED ACTOR-CRITIC-ENCOURAGER ALGORITHM
In this section, we present our Push-forward-based Actor-Critic-EncourageR (PACER) algorithm.
We first introduce the Actor-Critic-Encourager structure of PACER.
Then we summarize the objective function for each part of PACER and establish the stochastic utility value policy gradient theorem for the policy update of the Actor.
Moreover, we also implement an adaptive weight-adjustment mechanism to trade-off between exploration and exploitation for PACER.
Finally, the relation between PACER and other DSAC algorithms is presented.
The full pseudocode for PACER is given in Algorithm <ref>.
§.§ The Actor-Critic-Encourager structure
The main structure of PACER is shown in Fig. <ref>.
It consists of three main parts: an actor with a push-forward policy, a critic with a quantile return representation, and an encourager with a sample-based metric.
§.§.§ Actor with push-forward policy
The actor of PACER is a deep neural network acting as a push-forward operator that maps samples from a base distribution 𝒫(𝒳)∈ℳ(ℝ^d), where ξ∼𝒫(𝒳) and ξ∈ℝ^d, to the action space 𝒜 at a given state s. That is,
a∼π(·|s;θ_π) := π(s,ξ;θ_π)_♯𝒫(ξ),
where θ_π are the parameters of the DNN, and π(s,ξ;θ_π): 𝒮×ℝ^d→𝒜.
In practice, an action in state s can be generated in a lightweight manner by first sampling ξ∼𝒫(ξ) and then transforming it with π(s,ξ;θ_π).
Note that this kind of push-forward distribution has been shown to have high expressiveness and modeling capability both in theory and in practice <cit.>, and it has been widely used in the machine learning literature to approximately generate samples from complex distributions <cit.>.
While it is easy to obtain samples from a push-forward policy,
it is generally intractable to obtain its density function explicitly.
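A minimal PyTorch sketch of such a push-forward actor is shown below; the network width and depth, the noise dimension, and the tanh squashing are illustrative choices rather than the architecture used in our experiments.

import torch
import torch.nn as nn

class PushForwardActor(nn.Module):
    # Maps (state, noise) to an action sample: a = pi(s, xi; theta), xi ~ N(0, I)
    def __init__(self, state_dim, action_dim, noise_dim=8, hidden=256, max_action=1.0):
        super().__init__()
        self.noise_dim = noise_dim
        self.max_action = max_action
        self.net = nn.Sequential(
            nn.Linear(state_dim + noise_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, state):
        xi = torch.randn(state.shape[0], self.noise_dim, device=state.device)
        return self.max_action * torch.tanh(self.net(torch.cat([state, xi], dim=-1)))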
§.§.§ Critic with quantile return representation
The critic uses an IQN to push forward a sample from the uniform distribution U(0,1) to the corresponding quantile values sampled from the return distribution. The return distribution approximation is maintained as a weighted mixture of N Diracs. Note that there are two alternative ways, QRN <cit.> and FQF <cit.>, to represent quantile returns. However, QRN is designed for discrete actions, which precludes its application to continuous control tasks.
FQF requires additional computational steps to update another network for fraction proposal; although this can bring benefits, the added complexity is not conducive to the understanding of the proposed algorithm.
To reshape the policy's random reward R(s,a), a nonlinear reward-reshape type utility function ψ(·) is adopted.
Specifically, we leverage the implicit quantile distribution Z_π(s,a,τ̃;θ_z) defined in (<ref>) to model the random return, and update it according to (<ref>).
With ψ(R(s,a)), we define the state-action utility function as
Q_ψ^π(s,a) := 𝔼_a ∼π(·|s),s_t+1∼𝒫(·|s_t,a_t)[∑_t=0^∞γ^tψ(R(s,a))|_s_0=s,a_0=a],
and state utility function as
V_ψ^π(s) := 𝔼_a ∼π(·|s)[Q_ψ^π(s,a)].
Accordingly, the utility Bellman function can be defined as
Q_ψ^π(s,a) := 𝔼_R[ψ(R(s,a))] + γ𝔼_s' ∼𝒫(·|s,a) [V_ψ^π(s')].
For a given policy π, the critic evaluates it with 𝔼_s∼μ_0 V_ψ^π(s).
We can also adopt risk-measure type utility functions in PACER, e.g.,
the distorted expectation <cit.> defined as follows.
A distortion function ψ : [0,1] →[0,1] is a non-decreasing continuous function with ψ(0)=0 and ψ(1)=1.
The distorted expectation of a random variable Z under distortion function ψ is given by:
∫_0^1 F_Z^-1(τ) d ψ (τ).
The distorted expectation for the random return of a given policy is defined as 𝔼_s ∼μ_0, a ∼π(·|s)∫_0^1 F_Z_π(s,a)^-1(τ) dψ(τ).
§.§.§ Encourager with sample-based metric
Previous studies have provided evidence that incorporating diverse behaviors into policies enhances exploration <cit.>.
Building upon this idea, the encourager is constructed with the Maximum Mean Discrepancy (MMD), which incentivizes exploration by reducing the MMD between the agent's policy π(·|s;θ_π) and a reference policy with diverse actions.
(Maximum Mean Discrepancy).
Let ℱ be a unit ball in a Reproducing Kernel Hilbert Space ℋ defined on a compact metric space 𝒳. Then the maximum mean discrepancy between two distributions p and q is
MMD(p, q) := sup_f ∈ℱ (𝔼_x ∼ p[f(x)] -
𝔼_y ∼ q[f(y)] ).
Note that MMD has an approximation which solely requires the samples from the distributions and does not demand the density functions explicitly.
Given m-samples {x_1,...,x_m} from p and n-samples {y_1,...y_n} from q, the MMD between p and q is approximated by
D_m(p || q) := [ (1/m^2)∑_{i,j=1}^{m} k(x_i, x_j) + (1/n^2)∑_{i,j=1}^{n} k(y_i, y_j) - (2/(mn))∑_{i=1}^{m}∑_{j=1}^{n} k(x_i, y_j) ]^{1/2}.
Here, we choose the uniform policy u(·|s) on the action space 𝒜 as the reference policy.
This uniform policy is widely utilized in the RL literature to facilitate exploration of the environment <cit.>.
We denote by d_m(θ_π) the sample-based regularizer of the encourager, defined as the following expected MMD between π(·|s;θ_π) and u(·|s),
d_m(θ_π) := 𝔼_s∼ d_μ_0^πD_m (π(·|s;θ_π)||u(·|s)).
Therefore, the exploration capability of the policy π(·|s;θ_π) is inversely proportional to d_m(θ_π).
In practice, the Monte Carlo method can be used to estimate the expectation, and the samples of the policy π(·|s;θ_π) are generated as described in Sec. <ref>.
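A minimal sample-based sketch of the estimator D_m is given below, assuming a Gaussian RBF kernel; the kernel choice and bandwidth are illustrative and not prescribed by the algorithm.

import torch

def rbf_kernel(x, y, bandwidth=1.0):
    # Gaussian RBF kernel matrix k(x_i, y_j)
    sq_dist = torch.cdist(x, y) ** 2
    return torch.exp(-sq_dist / (2.0 * bandwidth ** 2))

def mmd(actions_pi, actions_ref, bandwidth=1.0):
    # Sample-based MMD between policy samples and reference (uniform) samples
    k_pp = rbf_kernel(actions_pi, actions_pi, bandwidth).mean()   # (1/m^2) sum k(x_i, x_j)
    k_qq = rbf_kernel(actions_ref, actions_ref, bandwidth).mean() # (1/n^2) sum k(y_i, y_j)
    k_pq = rbf_kernel(actions_pi, actions_ref, bandwidth).mean()  # (1/(mn)) sum k(x_i, y_j)
    return (k_pp + k_qq - 2.0 * k_pq).clamp(min=0.0).sqrt()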
§.§ The Stochastic Utility Value Policy Gradient Theorem
Combining the aforementioned components together,
we obtain the objective for the policy in PACER as
J_ψ(θ_π) = 𝔼_s ∼μ_0V_ψ^π_θ(s) - α𝔼_s ∼ d_μ 0^π D_m(π(·|s;θ_π)||u(·|s)),
where α >0 denotes the regularizer weight.
By maximizing J_ψ(θ_π), the policy pursues a large expected utility while keeping the MMD regularizer small, which maintains exploration.
Generally, the optimization process in PACER can be divided into two steps.
We first leverage distributional TD learning to update the parameters of the IQN in the critic.
The loss function for the IQN is defined as follows and can be efficiently optimized with SGD.
ℒ(θ_z) = 𝔼_s∼ d_μ_0^π, a ∼π(s,ξ;θ_π)∑_i=0^N-1∑_j=0^N'-1ρ_τ̂_i^κ(δ_i j(s,a)),
where ρ_τ̂_i^κ(δ_i j(s,a)) is defined as equation (<ref>).
Then, we optimize the parameters in the policy according to J_ψ(θ_π) by leveraging gradient ascent iteratively.
Note that the first part 𝔼_s ∼μ_0V_ψ^π_θ(s)= 𝔼_s ∼μ_0𝔼_a ∼π(·|s,θ_π)[Q_ψ^π_θ(s,a)] of J_ψ(θ_π) is non-oblivious, i.e., the randomness of π_θ affects both the choice of action a ∼π(·|s,θ_π) and the function Q_ψ^π_θ(s,a), whose gradient is generally difficult to calculate.
When the density of π_θ is calculable, we can compute its gradient according to the Stochastic Policy Gradient theorem <cit.>.
However, as it is intractable to access the density of a push-forward policy built with complex DNNs, we
propose a stochastic policy gradient theorem that can be approximated based only on samples from the policy.
For a push-forward policy π(s,ξ;θ_π) and a differentiable utility function ψ(·), the policy gradient of the state utility function 𝔼_s ∼μ_0V_ψ^π_θ(s) is given by
∇_θ_π𝔼_s ∼μ_0V_ψ^π_θ(s) = 𝔼_s ∼ d_μ 0^π, ξ∼𝒫(ξ)[∇_θ_ππ(s, ξ ; θ_π) ·∇_a Q_ψ^π_θ(s, a )|_a = π(s, ξ;θ_π)].
According to Theorem <ref>, it can be verified that the gradient of J_ψ(θ_π) is as follows.
𝔼_s ∼ d_μ 0^π, ξ∼𝒫(ξ)∇_θ_π[ π(s, ξ ; θ_π) ·∇_a Q_ψ^π_θ(s, a ) - α D_m(π(·|s;θ_π)||u(·|s))] ,
whose Monte-Carlo approximation can be efficiently calculated with only action samples from π_θ and the push-forward map π(s, ξ ; θ_π).
When a risk-measure type utility function, e.g., the distorted expectation, is used in PACER, we can let ψ in SUVPG be the identity map. Then, we obtain a fully sample-based version of the Stochastic Policy Gradient (SPG).
As a result, we can train our PACER algorithm just in the same way as the training process in DSAC for risk-measure type utility functions, by replacing the SPG estimator in DSAC with our sample-based one.
Actually, SUVPG can be regarded as the policy gradient obtained under the reparameterization trick <cit.>, while the widely used REINFORCE gradient <cit.> is based on the log-derivative trick.
This also suggests that SUVPG is applicable to a wide range of familiar policy gradient approaches, such as advantage variance-reduction <cit.> and natural gradient <cit.>.
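In implementation terms, the SUVPG update amounts to backpropagating through the sampled actions, as in the following sketch; the function and argument names are illustrative, and the critic is assumed to return the utility estimate Q_psi(s, a).

import torch

def policy_update(actor, critic, mmd_fn, states, actions_ref, alpha, optimizer):
    # Gradient flows through a = pi(s, xi; theta) into the critic, so no
    # log-density of the policy is ever required.
    actions = actor(states)                      # push-forward samples, xi drawn inside
    utility = critic(states, actions).mean()     # sample estimate of E[Q_psi(s, a)]
    penalty = mmd_fn(actions, actions_ref)       # sample-based MMD regularizer
    loss = -(utility - alpha * penalty)          # ascend J_psi(theta_pi)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()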
§.§ An Adaptive Weight-Adjustment Mechanism
Inspired by the automating temperature adjustment mechanism for Maximum Entropy RL <cit.>,
we implement an adaptive mechanism to automatically adjust the weight parameter α for the Encourager.
By considering the MMD regularizer as a constraint, we can reformulate max J_ψ(θ_π) as the following constrained optimization problem:
max_θ_π 𝔼_s ∼μ_0, ξ∼𝒫(ξ)∫_0^1 F_Z(s,π(s,ξ;θ_π))^-1(τ) dψ(τ),
s.t. d_m(θ_π) ≤β.
Using Lagrange multipliers, the optimization problem can be converted into
max_θ_π min_α≥ 0 f(θ_π,α) := 𝔼_s ∼μ_0, ξ∼𝒫(ξ)∫_0^1 F_Z(s,π(s,ξ;θ_π))^-1(τ) dψ(τ) + α (β - d_m(θ_π)).
The above problem can be optimized by iteratively solving the following two sub-problems: max_θ_π J_ψ(θ_π) and min_α≥ 0ℒ(α), in which
J_ψ(θ_π) = 𝔼_s ∼μ_0, ξ∼𝒫(ξ)∫_0^1 F_Z(s,π(s,ξ;θ_π))^-1(τ) dψ(τ) - α d_m(θ_π), and
ℒ(α) = α(β - d_m(θ_π)).
The constraint d_m(θ_π) ≤β restricts the feasible policy space within the realm of the reference policy.
Yet, the optimal β can vary across different training environments and still requires manual tuning.
Indeed, an unsuitable β would greatly deteriorate the performance of the algorithm.
Accordingly, we implement a new mechanism to adaptively obtain a trade-off between α and β, thus achieving a better balance between exploration and exploitation.
Intuitively, the policy should progressively acquire knowledge during training, leading to a gradual increase in the impact of exploration.
When β is fixed, α will increase to counter the rising trend of the Encourager term during training.
A high α indicates that the current training period requires a larger β value, prompting the policy to increase its exploitation rate.
Conversely, a low α suggests that the current training period has an excessive β value, prompting the policy to enhance exploration by decreasing β.
Thus, we define the following objective for β
ℒ(β) = β[sign(α_max - α) + sign(α_min - α)].
The parameters α_max and α_min that are suitable for PACER can be set over a wide range, making them easy to configure in practice.
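A sketch of one adaptive-weight update step is given below; the plain gradient steps and learning rates are illustrative choices rather than the exact optimizer used in PACER.

def _sign(x):
    # Sign function returning -1, 0, or +1
    return (x > 0) - (x < 0)

def update_weights(alpha, beta, d_m, alpha_min, alpha_max, lr_alpha=1e-3, lr_beta=1e-3):
    # alpha descends L(alpha) = alpha * (beta - d_m), subject to alpha >= 0
    alpha = max(0.0, alpha - lr_alpha * (beta - d_m))
    # beta descends L(beta) = beta * [sign(alpha_max - alpha) + sign(alpha_min - alpha)]
    grad_beta = _sign(alpha_max - alpha) + _sign(alpha_min - alpha)
    beta = max(0.0, beta - lr_beta * grad_beta)
    return alpha, beta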
§ EXPERIMENTS
A comprehensive set of experiments is conducted to demonstrate the performance of PACER on MuJoCo continuous control benchmarks.
The first is the comparison between PACER and other SOTA reinforcement learning algorithms.
Our baselines include: Implicit Distributional Actor Critic (IDAC) <cit.> (the DRL algorithm leveraging Mixture of Gaussian policy), Distributional Soft Actor Critic (DSAC) <cit.> (a distributional version of SAC using quantile regression), as well as popular RL algorithms, DDPG <cit.>, SAC <cit.>, and TD3 <cit.>.
For the baselines, we modified the code provided by SpinningUp <cit.> to implement SAC, DDPG, TD3, and we use the code from the websites provided in the original papers for IDAC and DSAC <cit.>.
The second experiment is the evaluation on the exploration capability of the push-forward policy.
Moreover, ablation study is conducted to evaluate the effect of the push-forward policy and the MMD regularizer.
At last, we also test the performance of PACER with different levels of CVaR utility function.
§.§ Settings
As suggested in <cit.>, we incorporate twin delayed networks and target networks in all the algorithms.
In all the experiments except for the last one, the neutral utility function, i.e., the identity map, is adopted in all the DRL algorithms for a fair comparison.
We fix the batch size and the total number of environment interactions for all the algorithms; other tunable hyperparameters are either set to their best values according to their original papers (if provided) or tuned with grid search on proper intervals. We list the key hyperparameters of PACER in the appendix.
All experiments are conducted on Nvidia GeForce RTX 2080 Ti graphics cards, aiming to eliminate the performance variations caused by discrepancies in computing power.
We train 5 different runs of each algorithm with 5 different random seeds. Evaluations are performed every 50 steps by calculating the averaged return. The total number of environment interactions is set to 1 million, and the model parameters are updated after every 50 newly collected samples.
§.§ Experimental results
Performance compared to SOTA.
The learning curves are shown in Figure <ref> and the average final returns are listed in Table <ref>. It can be observed that the proposed PACER algorithm outperforms all other algorithms across all benchmark tasks. In particular, PACER handles complex tasks (whose state and action dimensions are relatively larger than those of the other environments) effectively, gaining a 53.25% improvement on HumanoidStandup and a 59.72% improvement on Humanoid compared to the existing SOTA. Additionally, the total score of the DRL algorithms (PACER, IDAC, DSAC) is higher than that of the non-DRL algorithms (SAC, TD3, DDPG), which further demonstrates the advantage of modeling return distributions.
Exploration capability for Push-forward policy.
We visualize the stochastic policy at the 10000th step of PACER on the HumanoidStandup task in Fig. <ref>.
We focus on this task as it is the most complex among all benchmarks.
Specifically, we sample 100000 actions from the push-forward policy in a given state and create heat maps on the (1,2), (9,10), and (14,15) dimensions out of the total 17 dimensions, respectively.
It can be observed that
the push-forward policy shows sufficient exploration ability even midway through PACER training (step 10000 of 20000 total) and does not degenerate into a deterministic policy.
Ablation studies: significance and effect for each component.
In Fig. <ref>, we show the training curves of PACER, DSAC, IDAC, and the ablated DRL algorithms derived from PACER on the HumanoidStandup task.
Detailed information of each algorithm is shown in table <ref>.
The results exhibit the significance and effect of adopting push-forward policies and leveraging the MMD regularizer in continuous control tasks.
We can see that PACER, which leverages both the MMD regularizer (M1) and the push-forward policy (P1), outperforms all ablated algorithms that leverage only one or neither of them.
Besides, the results also reveal that:
(1) The MMD regularizer is also suitable for enhancing exploration with Gaussian-type policies, as M1P0 achieves the next highest score.
(2) The absence of these crucial components significantly increases the probability of low performance or even failure. These findings offer compelling evidence for the effectiveness and significance of incorporating the push-forward policy and MMD regularizer within DRL algorithms.
Effectiveness for Utility function.
In this part, we follow the same idea as <cit.> to show the performance of PACER with different levels of CVaR utility function.
Specifically, we modify the reward function in the HalfCheetah task as R_t(s, a) = r̅_t(s, a) - 70 𝕀_v > 4·ℬ_0.1, where R_t(s, a) is the modified reward, r̅_t(s, a) is the original reward, and v is the forward velocity.
This modification will penalize high velocities (v > 4) with a Bernoulli distribution (ℬ_0.1), which represents rare but catastrophic events.
We leverage CVaR with levels 0.25, 0.5, 0.75, 0.90, and 1 as our utility functions. The results are shown in Fig. <ref>. It is evident that the policy with 0.75-CVaR outperforms the risk-neutral policy (1-CVaR), since the actor employing the 0.75-CVaR policy demonstrates risk aversion towards the infrequent yet catastrophic event of robot breakdown. The result shows that PACER with proper utility functions is able to obtain risk-sensitive policies.
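A sketch of this reward modification is given below, assuming a Gymnasium-style HalfCheetah environment that exposes the forward velocity through the step info dictionary; the exact key and environment version are assumptions and may need adjusting.

import numpy as np
import gymnasium as gym

class RareEventPenalty(gym.Wrapper):
    # Penalize high forward velocity (v > 4) with a rare Bernoulli(0.1) breakdown event
    def step(self, action):
        obs, reward, terminated, truncated, info = self.env.step(action)
        v = info.get("x_velocity", 0.0)
        if v > 4.0 and np.random.rand() < 0.1:
            reward -= 70.0
        return obs, reward, terminated, truncated, info

env = RareEventPenalty(gym.make("HalfCheetah-v4"))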
§ CONCLUSIONS
We present PACER in this paper, the first fully push-forward-based Distributional Reinforcement Learning algorithm. We simultaneously leverage the push-forward operator to model return distributions and stochastic policies, endowing both with equal modeling capability and enhancing their synergetic performance. Compatible with the push-forward policies in PACER, a sample-based exploration-inducing regularizer and a stochastic utility value policy gradient theorem are established.
We validate the critical roles of the components of our algorithm with a detailed ablation study, and demonstrate that our algorithm achieves state-of-the-art performance on a number of challenging continuous control problems.
§ APPENDIX
§.§ Proof for Theorem 1.
[Stochastic Utility Value Policy Gradient]
The gradient of the target function J_ψ(θ_π) = 𝔼_s ∼μ_0V_ψ^π_θ(s) is given by
𝔼_s ∼ d_μ 0^π, ξ∼𝒫(ξ)[∇_θ_ππ(s, ξ ; θ_π) ·∇_a Q_ψ^π_θ(s, a )|_a = π(s, ξ;θ_π)].
According to the definition of J_ψ(θ_π), its gradient can be written as
∇_θ_πJ_ψ(θ_π) = ∇_θ_π𝔼_s ∼μ_0V_ψ^π_θ(s)
= ∫_𝒮μ_0(s) ∇_θ_π V_ψ^π_θ(s) ds.
Thus we focus on the gradient of V_ψ^π_θ(s).
∇_θ_π V_ψ^π_θ(s) = ∇_θ_π∫_𝒳𝒫(ξ) Q_ψ^π_θ(s, π(s, ξ ; θ_π) ) dξ
= ∫_𝒳𝒫(ξ) ∇_θ_πQ_ψ^π_θ(s, π(s, ξ ; θ_π) ) dξ,
where the gradient of Q_ψ^π_θ(s, π(s, ξ ; θ_π) ) can be calculated by,
∇_θ_πQ_ψ^π_θ(s, π(s, ξ ; θ_π) )
= ∇_θ_π [ψ(R(s,π(s, ξ ; θ_π))) + γ𝔼_s' ∼𝒫(·|s,π(s, ξ ; θ_π))V_ψ^π_θ(s')]
= ∇_θ_πψ(R(s,π(s, ξ ; θ_π))) + γ∇_θ_π∫_𝒮𝒫(s'|s,π(s, ξ ; θ_π))V_ψ^π_θ(s')ds'
= ∇_θ_ππ(s, ξ ; θ_π) ∇_aψ(R(s,a))|_a = π(s, ξ;θ_π)
+ ∫_𝒮γ∇_θ_ππ(s, ξ ; θ_π) ∇_a𝒫(s'|s,π(s, ξ ; θ_π))|_a = π(s, ξ;θ_π) V_ψ^π_θ(s')ds'
+ ∫_𝒮γ𝒫(s'|s,π(s, ξ ; θ_π)) ∇_θ_π V_ψ^π_θ(s')ds'
= ∇_θ_ππ(s, ξ ; θ_π) ∇_a [ψ(R(s,a)) + ∫_𝒮γ𝒫(s'|s,π(s, ξ ; θ_π)) V_ψ^π_θ(s')ds' ]|_a = π(s, ξ;θ_π)
+ ∫_𝒮γ𝒫(s'|s,π(s, ξ ; θ_π)) ∇_θ_π V_ψ^π_θ(s')ds'
= ∇_θ_ππ(s, ξ ; θ_π) ∇_a Q_ψ^π_θ(s,a)|_a = π(s, ξ;θ_π) + ∫_𝒮γ𝒫(s'|s,π(s, ξ ; θ_π)) ∇_θ_π V_ψ^π_θ(s')ds'.
By substituting this back into (<ref>), we have
∇_θ_π V_ψ^π_θ(s) = ∫_𝒳𝒫(ξ) ∇_θ_ππ(s, ξ ; θ_π) ∇_a Q_ψ^π_θ(s,a)|_a = π(s, ξ;θ_π)dξ
+ ∫_𝒳𝒫(ξ) ∫_𝒮γ𝒫(s → s',1,π_θ) ∇_θ_π V_ψ^π_θ(s')ds' dξ ,
where 𝒫(s → s',1,π_θ) indicates the probability that s transitions to s' in one step under policy π_θ. We can see that ∇_θ_π V_ψ^π_θ(s) has an iterative property; thus we obtain that ∇_θ_π V_ψ^π_θ equals
𝔼_ξ∼𝒫(ξ)∫_𝒮∑_t=0^∞γ^t 𝒫(s → s',t,π_θ) ∇_θ_ππ(s', ξ';θ_π) ∇_a' Q_ψ^π_θ(s',a')|_a' = π(s', ξ';θ_π)ds'.
As a result, we can conclude that ∇_θ_πJ_ψ(θ_π)
= ∫_𝒮μ_0(s) 𝔼_ξ∼𝒫(ξ)∫_𝒮∑_t=0^∞γ^t 𝒫(s → s',t,π_θ) ∇_θ_ππ(s', ξ';θ_π) ∇_a' Q_ψ^π_θ(s',a')|_a' = π(s', ξ';θ_π)ds' ds
= 𝔼_s ∼ d_μ 0^π, ξ∼𝒫(ξ)[∇_θ_ππ(s, ξ ; θ_π) ·∇_a Q_ψ^π_θ(s, a )|_a = π(s, ξ;θ_π)]
§.§ Implementation Details
We use the following techniques in Mujoco environments for training stability, all of them are also applied to baseline algorithms for fair comparisons.
* Observation Normalization: in MuJoCo environments, the observations range from -∞ to ∞. We normalize the observations by clip((s - μ̂_s)/(max(σ̂_s)), -5, 5),
where μ̂_s is the mean of the observations and σ̂_s is the standard deviation of the observations.
* Reward Scaling: the reward signal of the HumanoidStandup environment is too large, so we shrink it for numerical stability. Note that this change only applies during the training period; all testing experiments are carried out on the same reward signals.
|
http://arxiv.org/abs/2306.12229v1
|
20230621124340
|
Thermoelectric transport across a tunnel contact between two charge Kondo circuits
|
[
"T. K. T. Nguyen",
"H. Q. Nguyen",
"M. N. Kiselev"
] |
cond-mat.str-el
|
[
"cond-mat.str-el",
"cond-mat.mes-hall"
] |
[email protected]
Institute of Physics, Vietnam Academy of Science and Technology, 10 Dao Tan, 118000 Hanoi, Vietnam
Institute of Physics, Vietnam Academy of Science and Technology, 10 Dao Tan, 118000 Hanoi, Vietnam
The Abdus Salam International Centre for Theoretical Physics, Strada
Costiera 11, I-34151, Trieste, Italy
Following a theoretical proposal on multi-impurity charge Kondo circuits [T. K. T Nguyen and M. N. Kiselev, Phys. Rev. B 97, 085403 (2018)] and the experimental breakthrough in the fabrication of the two-site Kondo simulator [W. Pouse et al, Nat. Phys. (2023)], we investigate thermoelectric transport
through a double-dot charge Kondo quantum nano-device in the strong-coupling operational regime. We focus on the fingerprints of the non-Fermi liquid and its manifestation in the charge and heat quantum transport. We construct a full-fledged quantitative theory describing crossovers between different regimes of multi-channel charge Kondo quantum circuits and discuss possible experimental realizations of the theory.
Thermoelectric transport across a tunnel contact between two charge Kondo circuits
M. N. Kiselev
July 31, 2023
§ INTRODUCTION
Thermoelectric materials have been investigated in recent years thanks to their ability to generate electricity from waste heat or to serve as solid-state Peltier coolers <cit.>. The mechanism of converting heat into voltage, known as the Seebeck effect <cit.>, is associated with the emergence of an electrostatic potential across the hot and cold ends of the thermocouple <cit.> while no electric current flows through the system. The Peltier effect is manifested by the creation of a temperature difference between the junctions when an electric current flows through the thermocouple.
After theoretical predictions in the mid-1990s suggested that the thermoelectric efficiency could be greatly enhanced through nano-structural engineering, many complex nano-structured materials were studied both theoretically and experimentally <cit.>. Nano-electric circuits based on one or a few quantum dots (QDs), which
are highly controllable and fine-tunable, can provide important information about the effects of strong electron-electron interactions, interference effects and resonance scattering on the quantum charge, spin and heat transport.
One of the fundamental motivations of thermoelectric studies is
to enhance the thermoelectric power (the absolute value of the Seebeck coefficient, TP). This is a challenge both for the experimental fabrication of devices and for theoretical proposals of efficient heat-transfer mechanisms. In fact, theoretical investigations showed that the TP of a single-electron transistor (SET) was greatly enhanced in comparison with that of bulk materials <cit.>.
Furthermore, the charge Kondo effect <cit.>, which deals with the degeneracy of the charge states of the QD (and is similar to the conventional Kondo effect <cit.> but does not require the system to have a magnetic degree of freedom), can be a
tool for enhancing the TP of a SET <cit.>. The building block of a charge Kondo circuit (CKC) is a large metallic QD strongly coupled to one (or several) lead(s) through one (or several) almost transparent single-mode quantum point contact(s) [QPC(s)]. In the orthodox charge Kondo theory <cit.>, the electron location (namely, in or out of the QD) is treated as an iso-spin variable, while the two spin projections of electrons
are associated with two (degenerate, in the absence of an external magnetic field) conduction channels of the conventional Kondo problem. An external magnetic field lifts the channel degeneracy, resulting in a crossover from the two-channel Kondo (2CK) regime at vanishing magnetic field to the single-channel Kondo (1CK) regime at strong external field <cit.>. As a result, the behavior of the system continuously changes from the non-Fermi-liquid (NFL) to the Fermi-liquid (FL) state. The interplay between the NFL-2CK and FL-1CK regimes in thermoelectric transport through the SET has been investigated <cit.>. Charge transport in the 1CK and 2CK regimes was studied extensively numerically in Refs. <cit.>. The effects of electron-electron interactions in charge Kondo simulators have been considered recently <cit.>.
Recently, CKCs operated in the integer quantum Hall (IQH) regime have been implemented in breakthrough experiments <cit.>. Since the number of Kondo channels is determined by the number of QPCs attached to the metallic QD, these experiments have opened access to the experimental investigation of the multi-channel Kondo (MCK) problem. The dominant characteristic of a specific MCK setup is an NFL picture <cit.> which
is associated with Z_M symmetry. For instance, the NFL-2CK <cit.>
is explained by Majorana fermions <cit.>, the
NFL-3CK physics is related to Z_3 parafermions <cit.>.
Therefore, switching between Z_2k+1 and Z_2k low temperature
fixed points by controlling the reflection amplitudes of the QPCs,
can provide a route to investigate the crossovers between states with
different parafermion fractionalized zero modes <cit.>.
As a CKC is considered an artificial quantum simulator relevant to quantum computing technology, scaling up CKCs to clusters or lattices is challenging, and it is important to understand the nature of the coupling between neighboring QDs. Motivated by this, the experiment <cit.> implemented a two-island charge Kondo device in which two QDs are coupled together and each one is also strongly coupled to an electrode through a QPC. The authors investigated the quantum phase transition at the triple point where the charge configurations are degenerate. Going beyond the two-impurity Kondo (2IK) model, the two-site charge Kondo circuit is relevant to Kondo lattice systems. Furthermore, a deeper theoretical investigation of the strong central coupling of this setup <cit.> in the Toulouse limit showed that a Z_3 parafermion emerging at the critical point was already present in the experimental device of Ref. <cit.>.
In this work, we revisit the model proposed in Ref. <cit.>
which contains a tunnel contact between two CKCs where each one is
set up in either the FL or the NFL state (see Fig. <ref>), with a two-fold goal. First, we examine the behavior of the TP in order to find a mechanism to enhance it. Secondly, we show the existence of Majorana fermions in this double charge Kondo circuit (DCKC). A nonperturbative solution is obtained which allows one to monitor and control all FL-NFL crossovers. A new energy scale associated with the inverse lifetime of the emergent Majorana fermions controls four different regimes of thermoelectric transport, depending on the parameter window. Moreover, the non-perturbative results reported in this paper complete a full-fledged theory of the non-Fermi-liquid to Fermi-liquid crossover in the DCKC.
The paper is organized as follows. We describe the proposed experimental setup and the theoretical model in Sec. II. General equations for the thermoelectric coefficients are presented and the nonperturbative solution is discussed in Sec. III. Sec. IV presents the correlation functions in different cases. The main results are discussed in Sec. V. We conclude our work in Sec. VI.
§ PROPOSED EXPERIMENTAL SETUP AND THEORETICAL MODEL
We consider a DCKC device (see Fig. <ref>) formed by two CKCs
describing a very recent experiment <cit.>. The building
block for each CKC is a QD-QPC structure implemented in experiment
<cit.>. The QD is a large metallic island (the dark-red and
blue cross-hatched areas surrounded by the black lines) electronically connected to a two-dimensional electron gas (2DEG, the orange and
grey continuous areas). The 2DEG is connected to two large electrodes
through two QPCs. Applying a strong magnetic field perpendicular to the 2DEG plane drives the 2DEG into the IQH regime at filling factor ν=1. The QPCs are fine-tuned (by field effects in the split gates illustrated by the blue boxes) to the high-transparency regime
corresponding to weak backscattering of the chiral edge mode
(red solid lines with arrows). We investigate the regime of equal reflection amplitudes at two QPCs in each CKC:
|r_11|=|r_12|=|r_1| and |r_21|=|r_22|=|r_2|.
Therefore, each CKC is a 2CK setup. Indeed, the CKC can be tuned into a 1CK model by simply deactivating one of the two QPCs in it. These two CKCs are connected together by a weak tunneling (barrier, weak link) between two QDs. In order to study the thermoelectric transport through the DCKC system, the left CKC is set up at higher temperature T+Δ T in comparison with the right circuit, which is at temperature T. The temperature drops at the central weak link.
The spinless Hamiltonian describing the two CKCs coupled weakly at
the center in which each QD is coupled strongly to the lead through
two QPCs (Fig. <ref>) has the form H=H_L+H_T+H_R,
where
H_T=(td_1^†d_2+h.c.).
describes the tunneling between two dots, d_j stands for the
electrons in the dot j, j=1,2. The Hamiltonian H_L/R describing
each (QD-QPC) structure has the form H_L/R=H_0L/R+H_CL/R+H_sL/R.
The Hamiltonian H_0,j (j=L,R) describing the propagation of
the edge states is given by
H_0,j = -iv_F∑_α=1,2∫_-∞^∞dx[ψ_R,j,α^†(x)∂_xψ_R,j,α(x).
.-ψ_L,j,α^†(x)∂_xψ_L,j,α(x)],
where ψ_L/R,α,j represents the incoming/outgoing chiral
fermions at the QPC α of the CKC j, and v_F is the
Fermi velocity. For simplicity we assume that the Fermi velocity is the same for all QPCs. Note that the operator d_j can be expressed through
the fermionic operators ψ_j,α as d_j=∑_α=1,2ψ_j,α(-∞).
The Hamiltonian H_C,j characterizes the Coulomb interaction in
the dot <cit.>
H_C,j=E_C,j∫_0^∞dx[n_j-∑_α=1,2ψ_j,α^†(x)ψ_j,α(x)-N_j]^2,
where E_C,j is the charging energy of QD j. The number of electrons entering the dot (taking values 0,1 in units of e) through the weak link and through the QPCs is represented by the operator n_j and by the second term in the square brackets (with ψ_j,α=ψ_L,j,α+ψ_R,j,α), respectively. N_j is the normalized gate voltage, controlled
by plunger gates (not shown in Fig. <ref>). The Hamiltonian
H_s,j describing the backward scattering at the QPCs, with the
reflection amplitude r_j,α, reads
H_s,j =-D/π∑_α=1,2|r_j,α|[ψ_R,j,α^†(0)ψ_L,j,α(0)+h.c.],
where D is the bandwidth.
The appropriate technique to describe the interacting electrons in the QD and QPCs is the bosonized representation <cit.>. The detailed bosonization of the Hamiltonian H can be found in Refs. <cit.>. One should notice that the fermionic fields are related to the bosonic field at the QPC α of the CKC j as ψ_L/R,j,α(x)∼ e^-iϕ_j,α(x). The actions in the bosonic language are presented in Section <ref>.
§ GENERAL FORMULAS FOR CURRENT, ELECTRIC CONDUCTANCE AND THERMOELECTRIC COEFFICIENT
In order to study the thermoelectric effects in the DCKC with a small temperature drop Δ T ≪ T at the weak link between the two QDs, we consider the tunnel charge current across the tunnel contact, to lowest order in the tunneling amplitude |t|, as
I=-2π e|t|^2∫_-∞^∞dϵ ν_1(ϵ)ν_2(ϵ)[f_1(ϵ)-f_2(ϵ)] .
Here we denote the Fermi distribution functions as f_1(ϵ)=f(ϵ,T+Δ T), f_2(ϵ)=f(ϵ +eΔ V,T), where Δ V is the thermo-voltage applied to enforce the zero-current condition for the electric current between the source and drain, and we define the densities of states ν_j(ϵ)=-(1/π)cosh(ϵ/2T)∫_-∞^∞ G_j((1/2T)+it)e^iϵ tdt, where 𝒢_j(τ)=-⟨ T_τ d_j(τ)d_j^†(0)⟩ is the exact Green's function (GF) of terminal j=1,2. The thermoelectric coefficients in the linear response regime are computed as follows. The electric conductance takes the form
G=.∂ I/∂Δ V|_Δ T=0=e^2|t|^2/2π T∫_-∞^∞dϵ∫_-∞^∞dt_1
× G_1(1/2T+it_1)e^iϵ t_1∫_-∞^∞dt_2 G_2(1/2T+it_2)e^iϵ t_2,
and the thermoelectric coefficient is given by
G_T=.∂ I/∂Δ T|_Δ V=0=-e|t|^2/2π T^2∫_-∞^∞dϵ ϵ∫_-∞^∞dt_1
G_1(1/2T+it_1)e^iϵ t_1∫_-∞^∞dt_2 G_2(1/2T+it_2)e^iϵ t_2.
The thermopower (or the Seebeck coefficient) in the linear regime
is defined at I=0 as
S=-.Δ V/Δ T|_I=0=G_T/G.
Following Matveev and Andreev <cit.> we define d_j(τ)=ψ_j,α^(0)(τ)F_j(τ),
where ψ_j,α^(0)(τ)=ψ_α,j^(0)(-∞,τ),
the operator F_j obeys the commutation relation [F_j,n_j]=F_j
and takes into account effects of interaction and reflection given
by Eqs. (<ref>,<ref>). Since the operators ψ_j,α^(0)
and F_j are decoupled, the GFs at imaginary times are factorized
as G_j(τ_j)=-(ν_0jπ T/sin[π Tτ_j])K_j(τ_j),
where ν_0j is the density of states in the dot j without
interaction and K_j(τ)=⟨ T_τF_j(τ)F_j^†(0)⟩
accounts for interaction effects. As a result, the electric conductance
and the thermoelectric coefficient are given by:
G=π/2G_CT∫_-∞^∞dt/cosh^2(π Tt) K_1(1/2T+it)K_2(1/2T-it),
G_T = -iπ G_C/4e∫_-∞^∞dt/cosh^2(π Tt)
×[(∂_tK_1(1/2T+it))K_2(1/2T-it).
-.K_1(1/2T+it)(∂_tK_2(1/2T-it))],
where G_C=2π e^2ν_01ν_02|t|^2 is the conductance of the central (tunnel) area. The computation
of thermoelectric coefficients in Eqs. (<ref>-<ref>) <cit.>
requires the explicit form of the correlation functions K_1,2(1/2T± it).
§ CORRELATION FUNCTION K_j(τ)
The time-ordered correlation function K_j(τ) is defined through the operator F_j. The operator product F_j(τ)F_j^†(0) describes the process in which the number of electrons that have entered QD_j through the weak link increases from 0 to 1 at time t=0 and decreases back to 0 at time t=τ. Therefore, the operator n_j is replaced by n_jτ(t)=θ(t)θ(τ-t), where θ(t) is the unit step function, and the correlation function K_j(τ) is computed through a functional integration over the bosonic fields, K_j(τ)=Z_j(τ)/Z_j(0).
§.§ The 1CK case: Perturbative solution:
In the case where one CKC is set in the FL-1CK state by decoupling one of its two QPCs, the functional integral reads
Z_j(τ) = ∫𝒟ϕ_jexp[-𝒮_0,j-𝒮_C,j(τ)-𝒮_s,j],
where 𝒮_0,j, 𝒮_C,j, and 𝒮_s,j
are Euclidean actions describing the free (non-interacting) one-dimensional Fermi gas, Coulomb blockade
in the QD and the backscattering at the QPC of the CKC j, respectively.
They are written as
𝒮_0,j = v_F/2π∫_0^βdt∫dx[(∂_tϕ_j)^2/v_F^2+(∂_xϕ_j)^2],
𝒮_C,j = E_C,j∫_0^βdt[n_jτ(t)+1/πϕ_j(0,t)-N_j]^2,
𝒮_s,j = -2D/π|r_j|∫_0^βdtcos[2ϕ_j(0,t)].
with β=1/T. One should notice that the bosonic field describing
the electrons moving through the constriction is blocked by the Coulomb
interaction in the QD. Therefore, Z_j(τ) can be computed perturbatively in |r_j| for weak backscattering at the QPC (|r_j|≪1), and the correlation function K_j(τ) is then
K_j(τ) = (π^2T/γ E_C,j)^21/sin^2(π Tτ)[1-2γξ|r_j|cos(2π N_j).
.+4π^2ξγ|r_j|T/E_C,jsin(2π N_j)(π Tτ)] ,
where γ=e^C, C≈0.577 is Euler's constant, and ξ=1.59 is a numerical constant <cit.>.
§.§ The symmetric 2CK case: Nonperturbative solution:
For convenience in the later calculations, one can define the so-called charge/spin fields ϕ_j,ρ/σ=ϕ_j,1±ϕ_j,2. The functional integral in this case is written as:
Z_j(τ) = ∏_λ=ρ,σ∫𝒟ϕ_jλexp[-𝒮_0,j-𝒮_C,j(τ)-𝒮_s,j],
where 𝒮_0,j, 𝒮_C,j, and 𝒮_s,j
are Euclidean actions describing the free Fermi liquid, Coulomb blockade
in the QD and the backscattering at the QPCs of the CKC j ,
respectively. The action 𝒮_0,j is presented as a sum
of two independent actions
𝒮_0,j = ∑_λ=ρ,σv_F/2π∫_0^βdt∫dx[(∂_tϕ_j,λ)^2/v_F^2+(∂_xϕ_j,λ)^2].
The Coulomb blockade action 𝒮_C,j in bosonic representation
reads
𝒮_C,j = E_C,j∫_0^βdt[n_jτ(t)+√(2)/πϕ_j,ρ(0,t)-N_j]^2.
The contribution 𝒮_s,j in the action of each CKC characterizes
the weak backscattering at the QPCs is
𝒮_s,j=-2D/π|r_j|∫_0^βdtcos[√(2)ϕ_j,ρ(0,t)]cos[√(2)ϕ_j,σ(0,t)].
In the absence of backscattering |r_j|=0, the functional integral
Eq.(<ref>) is Gaussian. The correlator K_j^(0)(τ)≡ K_j(τ)|_r=0=K_j,ρ(τ)
is computed at low temperature T≪ E_C and at τ≫ E_C^-1:
K_j,ρ(τ)=π^2T/2γ E_C,j1/|sin(π Tτ)|.
The perturbative results (see Ref.<cit.>)
showed that the thermoelectric properties of the system are controlled
by charge and spin fluctuations at low frequencies (below E_C,j).
One should notice that the effect of small but finite |r_j| on
the charge modes is negligible in comparison with the Coulomb blockade
but it changes the low frequency dynamics of the unblocked spin modes
dramatically. The correlation function can be split into charge and
spin components as K_j(τ)=K_jρ(τ)K_jσ(τ),
with K_jσ(τ)=Z_jσ(τ)/Z_jσ(0). We simply replace cos[√(2)ϕ_j,ρ(0,t)] in the action Eq. (<ref>) by its average ⟨cos[√(2)ϕ_j,ρ(0,t)]⟩_τ=√(2γ E_C,j/π D)cos[π N_j-χ_jτ(t)],
with χ_jτ(t)=π n_jτ(t)+δχ_jτ(t), δχ_jτ(t)≈(π^2T/2E_C,j)[cot(π T(t-τ))-cot(π Tt)], and
obtain the effective action for the spin degrees of freedom in the
form
𝒮_τ j = ∫dx∫_0^βdtv_F/2π[(∂_tϕ_j,σ)^2/v_F^2+(∂_xϕ_j,σ)^2]
-∫_0^βdt√(4D/v_F)λ̃_jτ(t)cos[√(2)ϕ_j,σ(0,t)],
where
λ̃_jτ(t)=Λ_j(-1)^n_τ(t)cos[π N_j-δχ_jτ(t)],
Λ_j=|r_j|√(2γ v_FE_C,j/π D).
After performing the refermionization, our model [as shown in Eq.
(<ref>)] is mapped onto an effective Anderson model,
which is described by Hamiltonian
H_j,τ^eff(t)=∫[v_Fkc_j,k^†c_j,k-λ̃_jτ(t)(c+c^†)(c_j,k-c_j,k^†)]dk,
in which the operators c_j,k^† and c_j,k, satisfying the anti-commutation relation { c_j,k,c_j,k^'^†} =δ(k-k^'), create and destroy chiral fermions; c is a local fermionic annihilation operator anti-commuting with c_j,k^† and c_j,k. We see that the model is free and equivalent to a resonant-level model where the leads are coupled to the Majorana fermion η=(c+c^†)/√(2)
on the impurity. The time dependent Hamiltonian (<ref>) can
be split into H_j,0^eff+H_j,τ^'(t) by
replacing λ̃_jτ(t)→λ̃_jτ(t)/(-1)^n_τ(t).
The time-independent Hamiltonian part H_j,0^eff is H_j,τ=0^eff
while the correction is
H_j,τ^'(t)=2Λ_j{cos[π N_j]-cos[π N_j-δχ_jτ(t)]}ηζ,
with ζ=∫_-∞^∞(c_j,k-c_j,k^†)dk/√(2) describing the Majorana fermion of the leads in the resonant-level
model. Our solution, being nonperturbative in |r_j| and accounting
for low-frequency dynamics of the spin modes, leads to the appearance
of the Kondo-resonance width Γ_j in the vicinity of Coulomb
peaks
Γ_j(N_j)=8γ E_C,j/π^2|r_j|^2cos^2(π N_j).
We then compute the correlation function straightforwardly and obtain
the zero-order term corresponding to the Hamiltonian part H_j,0^eff as
K_j^(0)(1/2T+it)=π TΓ_j/γ E_C,j1/cosh(π Tt)
×∫_-∞^∞e^ω(1/2T+it)/(ω^2+Γ_j^2)(1+e^ω/T)dω,
and the first-order term when the correction Hamiltonian part H_j,τ^'(t)
is taken into account, is
K_j^(1)(1/2T+it)=-4T/E_C,j|r_j|^2sin(2π N_j)/cosh(π Tt)
×ln(E_C,j/T+Γ_j)∫_-∞^∞dωω e^ω(1/2T+it)/(ω^2+Γ_j^2)(1+e^ω/T) .
The formulas (<ref>) and (<ref>) will be used to calculate
the thermoelectric coefficients in the next Section.
§ MAIN RESULTS
The first work by the authors <cit.> considered the weak effects of the non-Fermi-liquid behaviour on the thermoelectric transport. The approach used in <cit.> is based on accounting for the perturbative corrections to the off-diagonal transport coefficients and is limited to the domain of validity of perturbation theory (the high-temperature regime). These calculations, while very useful for understanding the flow towards the non-Fermi-liquid intermediate-coupling fixed point, are neither valid in the low-temperature regime nor shed light on the reduction of the symmetry due to the emergence of the Majorana (parafermionic) states. The main idea of this work is to develop a controllable and reliable approach for the quantitative description of the Fermi-to-non-Fermi-liquid crossovers and their interplay around the intermediate-coupling fixed points. It therefore provides a complementary study of the model of Ref. <cit.> and completes the theory of thermoelectrics in DCKCs.
§.§ Weak coupling between single- and two-channel charge Kondo circuits
In this case we consider, for instance, that the left CKC is in the FL-1CK state while the right CKC is in the NFL-2CK state. We apply the correlation
functions K_1(1/2T+it) and K_2(1/2T-it)
as shown in Eqs. (<ref>) and (<ref>-<ref>), respectively.
The electric conductance is obtained as
G=π^2G_CT^3/96γ^3E_C,1^2E_C,2F_G(Γ_2/T),
where F_G is a dimensionless function describing the competition between the Kondo-resonance width Γ_2 of the right CKC and the temperature T. It is computed as
F_G(p_2) =∫_-∞^∞du J(p_2,u),
J(p_2,u) =[u^2+π^2][u^2+9π^2]/cosh^2(u/2)[u^2+p_2^2].
The thermoelectric coefficient is given by
G_T=-π^5ξ G_CT^3Γ_2/72eγ^2E_C,1^3E_C,2F_G(Γ_2/T)|r_1|sin(2π N_1)
-π G_CT^3/360eγ^2E_C,1^2E_C,2F_T(Γ_2/T)[2π^2ξΓ_2/E_C,1|r_1|sin(2π N_1).
.+|r_2|^2ln(E_C,2/T+Γ_2)sin(2π N_2)],
with
F_T(p_2) =∫_-∞^∞du u^2J(p_2,u).
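For concreteness, the dimensionless functions F_G and F_T can also be evaluated numerically by direct quadrature. The following minimal Python sketch (our own illustrative code, assuming NumPy and SciPy are available; it is not part of the analytical derivation) computes them for a few values of p_2=Γ_2/T, which is useful for tracing the crossover discussed next.

import numpy as np
from scipy.integrate import quad

def J(p2, u):
    # Integrand J(p2,u) = (u^2 + pi^2)(u^2 + 9 pi^2) / (cosh^2(u/2) (u^2 + p2^2))
    return (u**2 + np.pi**2) * (u**2 + 9 * np.pi**2) / (np.cosh(u / 2)**2 * (u**2 + p2**2))

def F_G(p2):
    # F_G(p2) = integral of J(p2,u) over u from -inf to +inf
    return quad(lambda u: J(p2, u), -np.inf, np.inf)[0]

def F_T(p2):
    # F_T(p2) = integral of u^2 J(p2,u) over u from -inf to +inf
    return quad(lambda u: u**2 * J(p2, u), -np.inf, np.inf)[0]

for p2 in (0.1, 1.0, 10.0, 100.0):   # scan from T >> Gamma_2 to T << Gamma_2
    print(f"p2 = {p2:7.1f}   F_G = {F_G(p2):12.5g}   F_T = {F_T(p2):12.5g}")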
Following the discussion in Ref. <cit.>, based on the perturbative
solution, the Seebeck effect on a weak link between 1CK and 2CK is
characterized by the competition between the Fermi and non-Fermi liquids
(see Eq. (24) in Ref. <cit.>). However, this holds only at high temperature, T≫Γ_2. At very low temperature, T≪Γ_2, the TP shows only FL behavior.
§.§.§ T≫Γ_2 limit: Fermi-liquid on the left and non Fermi-liquid on the right CKC:
At temperature T≫Γ_2 the expression in Eq. (<ref>) reproduces the perturbative result as represented in Eq. (23) of Ref.<cit.>. The TP is thus similar to the formula (24) in Ref.<cit.>:
S = -4π^3ξγ/3e|r_1|T/E_C,1sin(2π N_1)
-526γ/π^2e|r_2|^2ln(E_C,2/T)sin(2π N_2).
The crossover line separating the two contributions in the TP is defined
as follows:
789/2πξln(E_C,2/T)E_C,1/T=|r_1|/|r_2|^2.
If Γ_2≪ T and T lies well below the crossover temperature defined by Eq. (<ref>), the NFL-2CK behavior of the TP is predicted to be pronounced. In the opposite limit, when T lies well above this crossover temperature (while still T≫Γ_2), the FL-1CK regime with weak NFL-2CK corrections is expected.
§.§.§ T≪Γ_2 limit: fully Fermi-liquid regime:
At temperature T≪Γ_2 the expression in Eq. (<ref>) yields a term linear in temperature in the square brackets. The TP thus shows the FL characteristic:
S=-7682π^3ξγ/5523e[|r_1|T/E_C,1sin(2π N_1).
.+1431/3841π^2ξ|r_2|^2ln(E_C,2/Γ_2)T/E_C,2sin(2π N_2)].
In summary, in the situation of weak coupling between single- and two-channel charge Kondo circuits, there exist two energy scales in the temperature range (0,E_C,2), namely Γ_2 (with Γ_2≤ (8γ/π^2)E_C,2|r_2|^2) and the crossover temperature defined by Eq. (<ref>), and one finds more chances to observe the FL picture than the NFL one.
§.§ Weak coupling between two two-channel charge Kondo circuits
Taking into account the zeroth-order correlation functions K_j^(0)(1/2T± it) for both CKCs, we have
G=G_CT^2/24γ^2E_C,1E_C,2F_C(Γ_1/T,Γ_2/T),
with
F_C(p_1,p_2)=∫_-∞^∞dz∫_-∞^∞du F(p_1,p_2,z,u),
F(p_1,p_2,z,u)=p_1p_2u[u^2+4π^2]/sinh(u/2)[cosh(z)+cosh(u/2)]
×1/[(z+u/2)^2+p_1^2][(z-u/2)^2+p_2^2].
The integral in Eq. (<ref>) vanishes when we use K_1,2^(0)(1/2T± it) on both sides: G_T^(0)=0. We therefore need to consider the first-order corrections to the correlation functions. Taking into account, for instance, K_1^(0)(1/2T+it)K_2^(1)(1/2T-it) and K_1^(1)(1/2T+it)K_2^(0)(1/2T-it), we obtain the lowest-order non-zero contribution to the thermoelectric coefficient (considering the model in the vicinity of the intermediate-coupling fixed point) as follows
G_T^(1)=-G_CT^3/24eγπ E_C,1E_C,2
×{|r_1|^2/Γ_1ln(E_C,1/T+Γ_1)sin(2π N_1)F_T,s(Γ_1/T,Γ_2/T).
+.|r_2|^2/Γ_2ln(E_C,2/T+Γ_2)sin(2π N_2)F_T,m(Γ_1/T,Γ_2/T)} ,
where
F_T,s(p_1,p_2)=∫_-∞^∞dz∫_-∞^∞du(z+u/2)zF(p_1,p_2,z,u),
F_T,m(p_1,p_2)=∫_-∞^∞dz∫_-∞^∞du(z-u/2)zF(p_1,p_2,z,u).
Eqs. (<ref>-<ref>) are the central results of this part. By varying parameters such as the temperature, the gate voltages, and/or the reflection amplitudes at the QPCs, one can access four different regimes of thermoelectric transport. The details of the calculations of the electric conductance and the thermoelectric coefficient are presented in the Appendix. We give the formulas for the TP in each of the four regimes below.
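Before turning to the four regimes, we note that the dimensionless double integrals F_C, F_T,s and F_T,m can be evaluated numerically, which provides a convenient cross-check of the limiting values derived in the Appendix. The sketch below is our own illustrative code (assuming SciPy; the improper integrals are handled crudely by truncating the integration domain, which is sufficient because the integrand decays exponentially).

import numpy as np
from scipy.integrate import dblquad

def u_over_sinh_half(u):
    # u / sinh(u/2) is regular at u = 0, where it tends to 2
    return 2.0 if abs(u) < 1e-8 else u / np.sinh(u / 2)

def F_kernel(p1, p2, z, u):
    num = p1 * p2 * u_over_sinh_half(u) * (u**2 + 4 * np.pi**2)
    den = ((np.cosh(z) + np.cosh(u / 2))
           * ((z + u / 2)**2 + p1**2) * ((z - u / 2)**2 + p2**2))
    return num / den

def integrate2d(weight, p1, p2, cut=60.0):
    # dblquad integrates func(u, z) with z as the outer variable
    return dblquad(lambda u, z: weight(z, u) * F_kernel(p1, p2, z, u),
                   -cut, cut, -cut, cut)[0]

F_C   = lambda p1, p2: integrate2d(lambda z, u: 1.0, p1, p2)
F_T_s = lambda p1, p2: integrate2d(lambda z, u: (z + u / 2) * z, p1, p2)
F_T_m = lambda p1, p2: integrate2d(lambda z, u: (z - u / 2) * z, p1, p2)

# Crude check of the large-p asymptotics quoted in the Appendix,
# F_C(p1, p2 >> 1) ~ 64 pi^4 / (5 p1 p2)
p = 50.0
print(F_C(p, p), 64 * np.pi**4 / (5 * p * p))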
§.§.§ T≫(Γ_1, Γ_2), fully non-Fermi-liquid regime:
The TP demonstrates the weak NFL behavior at “high” temperature: T≫(Γ_1, Γ_2) as
S = -3π^2γ/16e[|r_1|^2ln(E_C,1/T)sin(2π N_1).
.+|r_2|^2ln(E_C,2/T)sin(2π N_2)].
The similarity between Eq. (<ref>) and Eq. (28) of Ref. <cit.> implies that the regime T≫(Γ_1, Γ_2) reproduces the perturbative result. The Kondo-resonance width Γ_j vanishes at the Coulomb peaks and increases when the gate voltage N_j moves away from half-integer values. This situation occurs at the centre of the (N_1, N_2) window (if one considers 0≤ N_1,N_2 ≤ 1). Because of the logarithmic temperature dependence but small magnitude of the TP [see Eq. (<ref>)], we refer to it as a weak non-Fermi-liquid picture. The maximum value of the TP, S_max∼ |r_1|^2ln(E_C,1/T)+|r_2|^2ln(E_C,2/T), is reached at N_1=N_2=0.25.
§.§.§ Γ_1≪ T≪Γ_2, weak non-Fermi-liquid on the left and Fermi-liquid on the right CKC:
Let us recall that the Kondo-resonance widths Γ_1, Γ_2 depend on the gate voltages and therefore compete with temperature effects in the vicinity of the Coulomb peaks. At a given temperature, this situation occurs when QD 1 is closer to a Coulomb peak (N_1 is closer to a half-integer value) than QD 2 is. The TP then contains two components, with weak non-Fermi-liquid and Fermi-liquid characteristics, as
S = -256γ/75eπ^2[|r_1|^2ln(E_C,1/T)sin(2π N_1).
.+25π^3/128T/Γ_2|r_2|^2ln(E_C,2/Γ_2)sin(2π N_2)].
The crossover line between two regimes is defined as
E_C,2/T^∗ln(E_C,1/T^∗) = 25π^5/1024γcos^2(π N_2)1/|r_1|^2
×ln(π^2/8γ|r_2|^2cos^2(π N_2)).
The NFL behavior dominates if T≪ T^*, while the FL property is predicted in the opposite limit T≫ T^*.
§.§.§ Γ_2≪ T≪Γ_1, Fermi-liquid on the left and weak non-Fermi-liquid on the right CKC:
This situation is opposite to the case discussed in Section <ref>. The regime is achieved when QD 2 is closer to a Coulomb blockade peak than QD 1 is. The TP is characterized by the FL on the left and a weak NFL effect on the right CKC as
S = -256γ/75eπ^2[25π^3/128T/Γ_1|r_1|^2ln(E_C,1/Γ_1)sin(2π N_1).
.+|r_2|^2ln(E_C,2/T)sin(2π N_2)].
The crossover line between two regimes is defined as
E_C,1/T^∗∗ln(E_C,2/T^∗∗) = 25π^5/1024γcos^2(π N_1)1/|r_2|^2
×ln(π^2/8γ|r_1|^2cos^2(π N_1)).
The NFL behavior dominates if T≪ T^**, while the FL property is predicted in the opposite limit T≫ T^**. If the two CKCs are symmetric, T^**=T^*.
§.§.§ T≪(Γ_1, Γ_2), fully Fermi-liquid regime:
As the temperature approaches zero, the TP of the system behaves in accordance with the nonperturbative FL picture:
S = -3πγ T/7e[|r_1|^2/Γ_1ln(E_C,1/Γ_1)sin(2π N_1).
.+|r_2|^2/Γ_2ln(E_C,2/Γ_2)sin(2π N_2)].
The TP is a linear function of temperature. However, the prefactors become giant when both QDs are in the vicinity of the Coulomb peaks. The system then has a strong FL property.
§.§ Discussion
The investigation of the TP for the weak coupling between two CKCs in both cases, 1CK-2CK and 2CK-2CK, shows the competition between the FL and NFL pictures. However, the parameter windows in which the FL property can be observed are much broader than those giving access to the NFL one. The reason is that the NFL intermediate-coupling fixed points of the MCK model are hyperbolic and therefore unstable.
The results of this work not only cover the perturbatively accessible regimes presented in Ref. <cit.>, but also reveal the rich behavior of the TP in different domains of parameters.
Extending the proposal of the weak coupling between two CKCs <cit.> to the regime of almost transparent QPC in the central area of the DCKC, the very recent experiment <cit.> and theory <cit.> have investigated the strong coupling limit. Let us comment on the connection between the weak and strong coupling regimes of the DCKCs. In Ref. <cit.> we have considered the DCKC weakly connecting 1CK-1CK or 1CK-2CK or 2CK-2CK. The
same realization for the strong coupling of two Kondo
simulators has also been theoretically suggested in <cit.> and experimentally realized recently in <cit.> for 1CK-1CK coupling <cit.>. One of the most exciting theoretical predictions of the two-impurity single channel Kondo effect <cit.> is a possibility to map the model under certain assumptions onto the 2CK Hamiltonian.
Interestingly, Refs. <cit.> showed that at the triple degeneracy point of the DCKC a Z_3 symmetry and the corresponding local parafermion emerge. It is straightforward to extend the idea of <cit.> to MCK-NCK strong coupling (see Fig <ref>). Suppose that there are M≥1 identical QPCs on the left-hand side of the DCKC and N≥1 identical QPCs on its right side. The total degeneracy is M+1+N and the corresponding emergent local symmetry is Z_M+N+1. There are three important (M,N) realizations accessible through existing experimental setups: i) (2,1) or (1,2), connecting 1CK and 2CK, with emergent symmetry Z_4; ii) (2,2) and iii) (3,1) or (1,3), with emergent symmetry Z_5.
The corresponding weak-link setups are characterized by the symmetries: i) U(1)× Z_2; ii) Z_2× Z_2 and iii) U(1)× Z_3. As the weak-coupling regimes of ii) and iii) are clearly distinct, being characterized by both different symmetries and different Lorenz ratios (see Ref. <cit.> for more details), it is interesting to examine regimes ii) and iii) in the strong-coupling limit. In particular, it is important to understand the symmetry of the local parafermion emerging in the strong-link setup. In addition, switching between different intermediate-coupling fixed points results in crossovers between various fractionalized modes, manifesting themselves in distinctly different regimes of the charge and heat transport.
The weak link regime discussed in this manuscript was analysed using a standard approach based on the transport integrals <cit.>.
The validity of this approach is justified by the assumption that both the temperature and the voltage drops occur exactly at the central tunnel barrier. As a result, both the left and the right parts of the DCKC are considered to be at thermal and mechanical equilibrium, characterized by a certain temperature T and chemical potential μ. This approach is clearly invalid for a strong link between the two sides of the Kondo simulator, where both the temperature and the voltage change continuously across the central QPC. The full-fledged linear-response theory of the charge and heat transport across the strong link of the two-site Kondo simulators can be constructed by using Luttinger's pseudo-gravitational approach <cit.> or the thermo-mechanical potential <cit.> method in combination with Kubo equations. The theory beyond linear response also requires the Keldysh formalism <cit.> and represents an interesting and important direction for future investigation.
§ CONCLUSION
In this work, we revisited the thermoelectric transport at the weak link of the DCKC model proposed in Ref. <cit.>. The Abelian bosonization approach is used for both the 1CK and 2CK setups, while the refermionization technique is applied in order to solve the 2CK model nonperturbatively. We identify the different windows of the parameter set where the TP shows either fully FL or NFL characteristics, or the competition between these properties. The nonperturbative results not only cover the perturbative results but are also applicable in the lower-temperature regime T<|r_j|^2E_C,j. We predict that the TP is enhanced in the DCKC in comparison with the single-CKC setup. Indeed, a complex charge Kondo circuit, which shows a diversity of competition between the FL and NFL properties, can be a potential thermoelectric material. Moreover, we propose to use the experimental implementation of Ref. <cit.> for investigating the different parafermion contributions to quantum thermoelectricity when the coupling between the QDs is switched from weak to strong.
§ ACKNOWLEDGEMENT
This research in Hanoi is funded by Vietnam Academy of Science and
Technology (program for Physics development) under grant number KHCBVL.06/23-24. The work of M.N.K is conducted within the framework of the Trieste Institute for Theoretical Quantum Technologies (TQT). M.N.K also acknowledges the support from the Alexander von Humboldt Foundation for the research visit to IFW Dresden.
§ APPENDIX
In this Appendix we present the details of the calculations in the different limits used to obtain the results shown in Subsection <ref>.
1, If p_1→0,p_2→0, we have:
lim_p_1→0p_1/(z+u/2)^2+p_1^2 =πδ(z+u/2),
lim_p_2→0p_2/(z-u/2)^2+p_2^2 =πδ(z-u/2).
As a result
lim_p_2→0F_T,m(p_1→0,p_2)/p_2
=lim_p_2→0∫_-∞^∞duπ u^3[u^2+4π^2]/sinh[u][u^2+p_2^2]
=∫_-∞^∞duπ u[u^2+4π^2]/sinh[u]=3π^5/4,
and, finally
lim_p_1→0F_T,s(p_1,p_2→0)/p_1=3π^5/4 .
We obtain the electric conductance as
G^(0) =π^2G_CT^2/6γ^2E_C,1E_C,2,
and the thermoelectric coefficient as
G_T^(1)=-π^4G_CT^2/32eγ E_C,1E_C,2[|r_1|^2ln(E_C,1/T)sin(2π N_1).
.+|r_2|^2ln(E_C,2/T)sin(2π N_2)].
2, If p_1→0,p_2≫1 we have:
F_C(p_1→0,p_2)=∫_-∞^∞duπ p_2u[u^2+4π^2]/sinh[u][u^2+p_2^2],
F_C(p_1→0,p_2≫1)=π/p_2∫_-∞^∞duu[u^2+4π^2]/sinh[u]=9π^5/4p_2,
and
F_T,m(p_1→0,p_2≫1)=π/p_2∫_-∞^∞duu^3[u^2+4π^2]/sinh[u]=3π^7/2p_2.
The calculation of F_T,s is a bit more involved, as it requires taking a principal value (PV), as follows.
F_T,s(p_1→0,p_2≫1)/p_1=1/p_2PV∫_-∞^∞dz∫_-∞^∞du
×uz[u^2+4π^2]/(z+u/2)sinh(u/2)[cosh(z)+cosh(u/2)]
=8/p_2∫_-∞^∞dp∫_-∞^∞dq{q[q^2+4π^2]/sinh(q)cosh(p/2)cosh(p/2-q).
.-tanh(p/2)q^2[q^2+4π^2]/pcosh(p/2+q)cosh(p/2-q)} =192π^4/25p_2.
The electric conductance G and the thermoelectric coefficient G_T, computed at the first non-zero order in the nonperturbative treatment, are
G^(0)=3G_Cπ^5T^3/32γ^2Γ_2E_C,1E_C,2,
G_T^(1)=-8G_Cπ^3T^3/25eγ E_C,1E_C,2Γ_2[|r_1|^2ln(E_C,1/T)sin(2π N_1).
.+25π^3/128T/Γ_2|r_2|^2ln(E_C,2/Γ_2)sin(2π N_2)].
3, p_1≫1,p_2→0: This limit is opposite to the second limit. The calculation process is the same as the above one.
F_C(p_1≫1,p_2→0)
=π/p_1∫_-∞^∞duu[u^2+4π^2]/sinh[u]=9π^5/4p_1,
F_T,m(p_1≫1,p_2→0)/p_2=1/p_1PV∫_-∞^∞dz∫_-∞^∞du
×uz[u^2+4π^2]/(z-u/2)sinh(u/2)[cosh(z)+cosh(u/2)]=192π^4/25p_1,
F_T,s(p_1≫1,p_2→0) =3π^7/2p_1.
The electric conductance is
G^(0)=3G_Cπ^5T^3/32γ^2E_C,1E_C,2Γ_1,
and the thermoelectric coefficient is
G_T^(1)=-8G_Cπ^3T^3/25eγ E_C,1E_C,2Γ_1
×{25π^3/128T/Γ_1|r_1|^2ln(E_C,1/Γ_1)sin(2π N_1).
+.|r_2|^2ln(E_C,2/T)sin(2π N_2)} .
4, If p_1≫1,p_2≫1, we simply neglect the terms added to p_1^2 and p_2^2 in the denominators of formula (<ref>). We then obtain:
F_C(p_1≫1,p_2≫1)=1/p_1p_2∫_-∞^∞dz∫_-∞^∞du
×u[u^2+4π^2]/sinh[u/2][cosh(z)+cosh(u/2)]=64π^4/5p_1p_2.
F_T,m(p_1≫1,p_2≫1)=1/p_1p_2∫_-∞^∞dz∫_-∞^∞du
×(z-u/2)uz[u^2+4π^2]/sinh[u/2][cosh(z)+cosh(u/2)]=192π^6/35p_1p_2.
F_T,s(p_1≫1,p_2≫1)=1/p_1p_2∫_-∞^∞dz∫_-∞^∞du
×(z+u/2)uz[u^2+4π^2]/sinh[u/2][cosh(z)+cosh(u/2)]=192π^6/35p_1p_2.
The electric conductance and the thermoelectric coefficient in this
limit are
G=8G_Cπ^4T^4/15γ^2E_C,1E_C,2Γ_1Γ_2,
G_T^(1)=-24G_Cπ^5T^5/105eγ E_C,1E_C,2Γ_1Γ_2
×{|r_1|^2/Γ_1ln(E_C,1/T+Γ_1)sin(2π N_1).
+.|r_2|^2/Γ_2ln(E_C,2/T+Γ_2)sin(2π N_2)}.
10
TE_materials G. Snyder and E. Toberer, Nat. Mater. 7, 105 (2008).
Seebeck T. J. Seebeck, Abh. Akad. Wiss. Berlin 1820-21, 289 (1822).
Seebeck1 J. F. Li, W.S. Liu, L.D. Zhao, and M. Zhou, NPG Asia Mater. 2, 152 (2010).
Seebeck2 Y. Du, K. F. Cai, S. Chen, H. Wang, S. Z. Shen, R. Donelson, and T. Lin, Sci. Rep. 5, 6144 (2015).
lowD_TE_materials M. S. Dresselhaus, G. Chen, M. Y. Tang,
R. G. Yang, H. Lee, D. Z. Wang, Z. F. Ren, J. P. Fleurial, P. Gogna,
Adv. Mater. 19, 1043 (2007).
lowD_TE_materials1 G. Chen,M. S. Dresselhaus, G. Dresselhaus, J. P. Fleurial, and T. Caillat, Int. Mater. Rev. 48, 45 (2003).
Blanter Y. M. Blanter and Y. V. Nazarov, Quantum Transport: Introduction to Nanoscience (Cambridge University Press, Cambridge, 2009).
Kisbook K. Kikoin, M. N. Kiselev, and Y. Avishai, Dynamical Symmetry for Nanostructures. Implicit Symmetry in Single-Electron Transport Through Real and Artificial Molecules (Springer, New York, 2012).
staring_93 A. A. M. Staring, L. W. Molenkamp, B. W. Alphenhaar, H. van Houten, O. J. A. Buyk, M. A. A. Mabesoone, C. W. J. Beenakker, and C. T. Foxon, Europhys. Lett. 22, 57 (1993).
Turek_Matveev M. Turek and K. A. Matveev, Phys. Rev. B 65, 115332 (2002).
flensberg K. Flensberg, Phys. Rev. B 48, 11156
(1993).
matveev K. A. Matveev, Phys. Rev. B 51, 1743 (1995).
furusakimatveev A. Furusaki and K. A. Matveev, Phys. Rev.
Lett. 75, 709 (1995).
andreevmatveev A. V. Andreev and K. A. Matveev, Phys. Rev.
Lett. 86, 280 (2001); Phys. Rev. B 66, 045301 (2002).
LeHur1 K. Le Hur, Phys. Rev. B 64, 161302(R) (2001).
LeHur2 K. Le Hur and G. Seelig, Phys. Rev. B 65, 165338 (2002).
Kondo J. Kondo, Prog. Theor. Phys. 32, 37 (1964).
Hewson A. Hewson, The Kondo Problem to Heavy Fermions
(Cambridge University Press, Cambridge, England, 1993).
TW1983 A. M. Tsvelik and P. B. Wiegmann, Adv. in Phys.,
32, 453 (1983).
AFL1983 N. Andrei, K. Furuya, and J. H. Lowenstein, Rev.
Mod. Phys. 55, 331 (1983).
Kondo_review L. Kouwenhoven and L. I. Glazman, Phys. World 14, 33 (2001).
TP_Kondo_exp R. Scheibner, H. Buhmann, D. Reuter, M. N.
Kiselev, and L. W. Molenkamp, Phys. Rev. Lett. 95, 176602
(2005).
thanh2010 T. K. T. Nguyen, M. N. Kiselev, and V. E. Kravtsov,
Phys. Rev. B 82, 113306 (2010).
thanh2015 T. K. T. Nguyen and M. N. Kiselev, Phys. Rev.
B 92, 045125 (2015).
thanh2018 T. K. T. Nguyen, M. N. Kiselev, Phys. Rev. B
97, 085403 (2018).
Anton2022 A. V. Parafilo, T. K. T. Nguyen, and M. N. Kiselev,
Phys. Rev. B 105, L121405 (2022).
Thanh_VN_2 T. K. T. Nguyen and M. N. Kiselev, Commun. Phys. 32, 331 (2022).
Thanh_VN_3 A. V. Parafilo and T. K. T. Nguyen,
Commun. Phys. 33, 1 (2023).
Kis2023 M. N. Kiselev, arXiv: 2304.10872 (2023).
Num1 E. Sela, A. K. Mitchell, and L. Fritz, Phys. Rev. Lett. 106,
147202 (2011).
Num2 A. K. Mitchell, L. A. Landau, L. Fritz, and E. Sela, Phys. Rev. Lett. 116, 157202 (2016).
Num3 L. A. Landau, E. Cornfeld, and E. Sela, Phys. Rev. Lett. 120, 186801 (2018).
Num4 G. A. R. van Dalum, A. K. Mitchell, and L. Fritz, Phys. Rev. B 102, 041111(R) (2020).
Thanh_VN_1 T. K. T. Nguyen and M. N. Kiselev, Commun. Phys. 30, 1 (2020).
thanh2023 T. K. T. Nguyen, A. V. Parafilo, H. Q. Nguyen, and M. N. Kiselev, Phys. Rev. B 107, L201402 (2023).
pierre2 Z. Iftikhar, S. Jezouin, A. Anthore, U. Gennser,
F. D. Parmentier, A. Cavanna and F. Pierre, Nature 526, 233
(2015).
pierre3 Z. Iftikhar, A. Anthore, A. K. Mitchell, F. D.
Parmentier, U. Gennser, A. Ouerghi, A. Cavanna, C. Mora, P. Simon,
and F. Pierre, Science 360, 1315 (2018).
NB1980 Ph. Nozières and A. Blandin, J. Phys. 41, 193 (1980).
Cox1998 D. Cox and A. Zawadowski, Advances in Physics 47, 599 (1998).
AL1993 I. Affleck and A. W. W. Ludwig, Phys.
Rev. B 48, 7297 (1993).
AD1984 N. Andrei and C. Destri, Phys. Rev. Lett. 52,
364 (1984).
FGN_1 M. Fabrizio, A. O. Gogolin, and P. Nozières,
Phys. Rev. Lett. 74, 4503 (1995).
FGN_2 M. Fabrizio, A. O. Gogolin, and P. Nozières,
Phys. Rev. B 51, 16088 (1995).
gogolin A. O. Gogolin, A. A. Nersesyan, and A. M. Tsvelik,
Bosonization Approach to Strongly Correlated Systems (Cambridge University Press, Cambridge, England, 1998).
Toulous_limit V. J. Emery and S. Kivelson, Phys. Rev. B
46, 10812 (1992).
Z3_1 A. B. Zamolodchikov and V. A. Fateev, ZhETF 89,
380 (1985) [Sov. Phys. JETP 62, 215 (1985)].
Z3_2 H. Yi and C. L. Kane, Phys. Rev. B 57, R5579
(1998).
Z3_3 H. Yi, Phys. Rev. B 65, 195101 (2002).
Z3_4 I. Affleck, M. Oshikawa, and H. Saleur, Nucl. Phys.
B 594, 535 (2001).
Z3_5 C. Nayak, S. H. Simon, A. Stern, M. Freedman, and
S. D. Sarma, Rev. Mod. Phys. 80, 1083 (2008).
Z3_6 J. Alicea and P. Fendley, Annu. Rev. Condens. Matter
Phys. 7, 119 (2016).
thanhprl T. K. T. Nguyen and M. N. Kiselev, Phys. Rev. Lett. 125, 026801 (2020).
Gordon2023 W. Pouse, L. Peeters, C. L. Hsueh, U. Gennser,
A. Cavanna, M. A. Kastner, A. K. Mitchell, and D. Goldhaber-Gordon,
Nat. Phys. (2023).
Karki2022 D. B. Karki, E. Boulat, and C. Mora, Phys. Rev. B 105, 245418 (2022).
Z3_DCK D. B. Karki, E. Boulat, W. Pouse, D. Goldhaber-Gordon, A. K. Mitchell, and C. Mora, Phys. Rev. Lett. 130, 146201
(2023).
Aleiner98 I. L. Aleiner and L. I. Glazman, Phys. Rev. B
57, 9608 (1998).
giamarchi T. Giamarchi, Quantum Physics in One
Dimension (Oxford University Press, Oxford, UK, 2003).
misprint There is a missing factor 2π in Eq. (13) of Ref. <cit.>, which consequently requires to put an additional prefactor 2π in all formulas of thermoelectric coefficient G_T and thermopower S thereafter.
com2Historically, the idea of Double-Large-Dot Charge Kondo configuration has been first theoretically proposed in <cit.> for the thermodynamic investigation of the fractional e/2 charge Coulomb Blockade capacitance peaks.
LeHur3 K. Le Hur, Phys. Rev. B 67, 125311 (2003).
Jones_Varma B. A. Jones and C. M. Varma, Phys. Rev. Lett. 58, 843 (1987); B. A. Jones, C. M. Varma, and J. W. Wilkins, ibid. 61, 125 (1988).
Gan_95 J. Gan, Phys. Rev. Lett. 74, 2583 (1995); Phys. Rev. B 51, 8287 (1995).
Logan2012 A. K. Mitchell, E. Sela, and D. E. Logan, Phys. Rev. Lett. 108, 086405 (2012).
lut1 J. M. Luttinger, Phys. Rev. 135, A1505 (1964).
lut2 B. S. Shastry, Rep. Prog. Phys. 72, 016501 (2009).
lut3 F. G. Eich, M. Di Ventra, and G. Vignale
Phys. Rev. Lett. 112, 196401 (2014).
lut4 F. G. Eich, A. Principi, M. Di Ventra, and G. Vignale
Phys. Rev. B 90, 115116 (2014).
|
http://arxiv.org/abs/2306.01529v1
|
20230602132932
|
Constraint-Guided Test Execution Scheduling: An Experience Report at ABB Robotics
|
[
"Arnaud Gotlieb",
"Morten Mossige",
"Helge Spieker"
] |
cs.SE
|
[
"cs.SE"
] |
Constraint-Guided Test Execution Scheduling
Simula Research Laboratory[List of authors is given in alphabetical order], Kristian Augusts gate 23, 0164 Oslo, Norway
{arnaud,helge}@simula.no
ABB Robotics, Bryne, Norway [email protected]
Constraint-Guided Test Execution Scheduling: An Experience Report at ABB Robotics
Arnaud Gotlieb1 Morten Mossige2 Helge Spieker1
July 31, 2023
=================================================================================
Automated test execution scheduling is crucial in modern software development environments, where components are frequently updated with changes that impact their integration with hardware systems.
Building test schedules that focus on the right tests and make optimal use of the available resources, both time and hardware, while respecting numerous requirements on the selection of test cases and their assignment to specific test execution machines, is a complex optimization task.
Manual solutions are time-consuming and often error-prone. Furthermore, when software and hardware components and test scripts are frequently added, removed or updated, static test execution scheduling is no longer feasible and the motivation for automation taking care of dynamic changes grows.
Since 2012, our work has focused on transferring technology based on constraint programming for automating the testing of industrial robotic systems at ABB Robotics. After having successfully transferred constraint satisfaction models dedicated to test case generation, we present the results of a project whose goal is to automate the scheduling of test execution from a large test repository on distinct industrial robots. This paper reports on our experience and lessons learned from successfully transferring constraint-based optimization models for test execution scheduling at ABB Robotics.
Our experience underlines the benefits of a close collaboration between industry and academia for both parties.
§ INTRODUCTION
Continuous integration (CI) has been adopted by many companies around the world in order to ensure better end-user product quality <cit.>. As part of CI, automated testing is crucial to get quicker feedback on detected defects or regressions of a system under test. When a complete industrial system is tested under CI, a challenge arises if it relies on both hardware and software components, because they can hardly be tested in isolation. Additional challenges include the requirement to generate tests with environmental hazards, the combinatorial explosion of the number of potential test cases due to parameter interactions, and the automation of test execution scheduling that ensures proper coverage and diversity of test cases and agents.
This paper reports on our experience in deploying a constraint-guided test execution scheduling method as part of a CI process at ABB Robotics. By co-developing an automated testing process named through an industrial-academic partnership, the authors have explored the transfer of advanced constraint programming[Constraint Programming is a declarative programming framework which uses relations among logical variables and search procedures to find solutions of combinatorial problems <cit.>.] models composed of global constraints and rotational diversity <cit.> in a highly automated industrial testing process <cit.>. Since 2012, multiple models for test case generation <cit.> and selection, test prioritization <cit.> and eventually test execution scheduling <cit.> have been explored, evaluated and transferred. Our experience underlines the benefits of a close collaboration between industry and academia for both parties in the area of automated testing.
§ TEST EXECUTION SCHEDULING AT ABB ROBOTICS
ABB Robotics is an industrial robot supplier and manufacturer operating in more than 50 countries around the world. A key objective of the company is to deliver high-quality products (thus involving an increased focus on testing robots for reliability and performance) for the benefit of its customers. Initially, robot testing was done mostly manually, using visual inspection to check the results of hand-crafted tests. This restricted the possible testing time to the working hours of test engineers (except for long-running tests, which could use nighttime and weekends) and did not use the available robots to their full test capability. To reduce the time-to-market of new products and also improve their quality, the testing process had to be much more automated.
To start with, the test automation process had to be placed within a Continuous Integration (CI) process.
As shown in Fig. <ref>, a typical CI cycle includes software developer commit actions which automatically trigger build, deploy and test activities. The test results are then passed back to the developers to provide them with feedback. Typically, the test activity includes the following five steps:
* Test Case Selection and Generation: Tests are either extracted from an existing repository or automatically generated from specific requirements;
* Test Suite Reduction: Test suites that achieve a given objective (e.g., full requirement coverage) are pruned to eliminate spurious test cases;
* Test Case Prioritization: Tests are ordered to provide a quick feedback by using either pre-determined or dynamically-computed priority values;
* Test Execution Scheduling: Test plans are distributed on different robots, in a specific order according to a pre-computed test schedule;
* Test Execution: Tests are then eventually executed according to the specified schedule, in order to identify defects in the system under test. This activity is clearly the most demanding as it requires launching the system with the test cases selected and prioritized in the previous steps.
It is worth noting that, in CI, controlling the test preparation time (i.e., the first four steps) with respect to the test execution time (i.e., the fifth step) is crucial. Knowing that the overall timeline allocated to test activities has to be bounded, we have to keep as much time as possible for test execution. Of course, an optimized test schedule (computed during test preparation) can lead to better test execution, but it makes no sense to spend too much time on the computation of a schedule if it excessively reduces the time available for test execution. As shown in Fig. <ref>, finding the right trade-off is part of the testing challenges faced at ABB.
In ABB's context, a test case aims at verifying a robotized task, which is performed by a robot under the observation of some sensors.
A test can either fail or succeed; it fails when observations reveal a malfunction, and it succeeds when no malfunction is observed.
A test case is associated with some metadata, consisting of its average duration, previous execution times, results, and targeted robots.
Each test case execution is non-preemptive, that is, it cannot be interrupted by another test or transferred to another robot during execution.
All test cases are independent, without any dependency on the order in which they are executed.
Still, they can be ordered by using their static priority, which is decided by the test engineers, and their dynamic priority, which is based on a combination of their effectiveness to reveal defects in earlier CI cycles and the time since their last execution.
Test cases have furthermore hardware requirements, meaning that they can only be executed on certain robots.
Test cases are executed by test agents, which are software components that capture the various schedules computed for each CI cycle. Each test agent has a limited amount of time available per cycle and a set of compatible test cases, which it can execute.
Computing a test schedule requires varying the assignment of test cases to test agents between cycles to achieve full coverage of all possible combinations of tests and hardware over time.
Fulfilling this objective balances the confidence in the stability of certain features on different hardware, while giving room for executing many test cases and not executing the same tests multiple times during a cycle.
§ AUTOMATED TESTING PROCESS
Here, we present the approach to automate the testing process within the CI environment.
The testing process is sequential with distributed components, orchestrated by a central test controller. Starting with data initialization and acquisition (Sec. <ref>), the process computes the priorities over test cases (Sec. <ref>) and the test schedules (Sec. <ref>). Test execution is performed by distributing test plans to each robot (Sec. <ref>) and eventually test execution reporting takes place (Sec. <ref>).
The central test controller manages the process, acquires and distributes the necessary data from other sources, and provides the interface towards the automated testing process. Other components include a module for test case prioritization, selection, and scheduling, and a module for controlling the test agents executing the test cases.
§.§ Data Initialization and Acquisition
The process is set up with available test cases and agents. Some test cases and agents are filtered out to exclude scripts and robots under maintenance or having incompatible hardware requirements.
For the remaining test cases, historical meta-data is extracted from the central data repository. This data includes the most recent test execution results, their runtime, and previous test agents they were executed on, etc.
§.§ Test Case Prioritization
The prioritization step is initially designed with a simple approach, to ease the setup of automation and definition of assigned priorities during integration by the test engineers.
The process iterates through all executable test cases and assigns each a priority, which is a weighted sum of the time since the last run, the test case duration, and the most recent test results.
The weights and number of considered historical test results are manually chosen during integration, but that process could be replaced by a self-adaptive method in the future.
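As an illustration, a minimal sketch of such a weighted-sum prioritization is given below. This is our own simplified reconstruction from the description above: the attribute names, weights and the scoring of past results are hypothetical, not ABB's production values.

from dataclasses import dataclass, field

@dataclass
class TestCase:
    name: str
    duration_min: float           # average duration in minutes
    hours_since_last_run: float
    last_results: list = field(default_factory=list)  # True = failed, most recent first

def priority(tc, w_age=1.0, w_duration=0.5, w_failures=10.0, history=3):
    # Weighted sum of time since last run, test duration and recent failures
    recent_failures = sum(1 for failed in tc.last_results[:history] if failed)
    return (w_age * tc.hours_since_last_run
            + w_duration * tc.duration_min
            + w_failures * recent_failures)

tests = [
    TestCase("paint_nozzle_check", 12, 30, [True, False]),
    TestCase("axis_calibration", 45, 5, [False, False, False]),
]
for tc in sorted(tests, key=priority, reverse=True):
    print(tc.name, round(priority(tc), 1))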
§.§ Selection and Scheduling
Selection and scheduling focus on taking the test cases with the highest priorities and distributing them to the test agents until all the time available for testing is used. Although test case selection and scheduling are often regarded as two separate tasks in the literature, in practice we closely integrate both steps. Selection means taking, from the set of prioritized test cases, those that are most desirable to execute. Because the execution of test cases is constrained, this selection has to consider which subset of test cases can actually be executed while at the same time maximizing the use of the available resources (as we want to avoid idle times). The selection and scheduling step receives a set of prioritized test cases and a set of available test agents as inputs. During this step, an execution schedule is created, where each test case is assigned to one test agent for execution, while preferring to assign high-priority test cases over low-priority ones. During selection, test cases that are marked as obligatory to run are always included in the final schedule, regardless of their calculated priority.
We now approach this scheduling task by using Constraint Programming (CP), even though, in its initial version, only a simple greedy first-fill algorithm was used. This heuristic algorithm first ordered the test cases by descending priority. Then, successively for each test agent, the test case with the highest priority was assigned to that agent until the maximum time limit was reached. However, we quickly discovered that this too-simplistic approach was not suitable to ensure sufficient diversity in the selection of test cases and agents. We then developed a refined model based on CP.
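For illustration, the following sketch is our own reconstruction of the initial greedy first-fill heuristic from the description above (data structures and compatibility handling are simplified); the CP-based model that replaced it is discussed next.

def greedy_first_fill(test_cases, agents):
    # test_cases: list of dicts with 'name', 'priority', 'duration', 'compatible'
    # agents: dict mapping agent name -> available time budget (same unit as duration)
    pending = sorted(test_cases, key=lambda t: t["priority"], reverse=True)
    schedule = {}
    for agent, budget in agents.items():
        plan, used, left_over = [], 0.0, []
        for tc in pending:
            if agent in tc["compatible"] and used + tc["duration"] <= budget:
                plan.append(tc["name"])          # assign to this agent
                used += tc["duration"]
            else:
                left_over.append(tc)             # try the next agent later
        pending = left_over
        schedule[agent] = plan
    return schedule

agents = {"robot_A": 480, "robot_B": 480}
tests = [
    {"name": "t1", "priority": 9.5, "duration": 300, "compatible": {"robot_A"}},
    {"name": "t2", "priority": 7.0, "duration": 250, "compatible": {"robot_A", "robot_B"}},
    {"name": "t3", "priority": 6.5, "duration": 200, "compatible": {"robot_B"}},
]
print(greedy_first_fill(tests, agents))   # e.g. {'robot_A': ['t1'], 'robot_B': ['t2', 't3']}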
CP is a paradigm in which a problem is not modelled as a sequence of steps to achieve a desired solution, i.e., an algorithm, but relations between variables are described to formulate properties of a desired solution (see <cit.>).
CP and its associated optimization methods are efficient and well-performing techniques for modelling strictly constrained problems, such as planning and scheduling problems <cit.>.
Using CP for scheduling enables precise control over the execution time and the trade-offs made between time looking for a solution and the solution's quality.
We replaced the initial scheduling method with a dedicated constraint optimization model which further optimizes the schedules by ensuring that the assignment between test cases and test agents changes between test cycles. We called this process rotational diversity and used global constraints to develop it. Full details on this constraint model are available in <cit.>.
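To give a flavour of what a constraint model for this task can look like, the sketch below uses Google OR-Tools CP-SAT (assuming the ortools package is installed). It is our own simplified illustration: it only maximizes the total priority of the scheduled test cases under per-agent time budgets and compatibility constraints, and it omits the rotational-diversity objective and global constraints of the actual model described in <cit.>.

from ortools.sat.python import cp_model

def cp_schedule(tests, agents):
    # tests: list of (name, priority, duration, compatible_agents); agents: name -> budget
    model = cp_model.CpModel()
    x = {}  # x[i, a] == 1 iff test i is assigned to agent a
    for i, (_, _, _, compatible) in enumerate(tests):
        for a in compatible:
            x[i, a] = model.NewBoolVar(f"x_{i}_{a}")
        model.Add(sum(x[i, a] for a in compatible) <= 1)  # at most one agent per test
    for a, budget in agents.items():
        model.Add(sum(tests[i][2] * x[i, aa] for (i, aa) in x if aa == a) <= budget)
    # prefer high-priority test cases (CP-SAT needs integer coefficients)
    model.Maximize(sum(int(10 * tests[i][1]) * x[i, a] for (i, a) in x))
    solver = cp_model.CpSolver()
    status = solver.Solve(model)
    assert status in (cp_model.OPTIMAL, cp_model.FEASIBLE)
    return {a: [tests[i][0] for (i, aa) in x if aa == a and solver.Value(x[i, aa])]
            for a in agents}

agents = {"robot_A": 480, "robot_B": 480}
tests = [("t1", 9.5, 300, {"robot_A"}), ("t2", 7.0, 250, {"robot_A", "robot_B"}),
         ("t3", 6.5, 200, {"robot_B"})]
print(cp_schedule(tests, agents))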
§.§ Distribution and Execution
Once the test schedule is created, the test controller transforms it into separate, individual test plans and sends them to the corresponding test agents. An example test plan is shown in Fig. <ref>. Each test agent executes all assigned test cases independently, as there is no interdependency between test cases and robots.
The test agent records all test results and log files from the test cases and returns them to the test controller.
§.§ Reporting
Reporting aims to communicate the test execution results back to the developer for failure analysis. An example of a test report is shown in Fig. <ref>. The report summarizes the results of a test cycle, allows us to navigate into lower levels of the test hierarchy and access specific details of single test executions. This hierarchical structure makes the report accessible to different user groups and is the first step of debugging and failure analysis. Another goal of the reporting step is to gather and visualize information about the testing process itself. A visual report of the scheduling outcome is created as a means for individual run analysis and communication (see Fig. <ref>). It is built on web technologies and enables interactive exploration, including access to test case information and results of recent executions.
Exhaustive reporting and data collection enables better long-term evaluation of the system's behavior as well as impact evaluation onto software development, which is an important aspect for tuning the process in the future.
Besides the reporting of individual test results, the overall behaviour of the testing process is also monitored.
Fig. <ref> shows examples of two such monitoring metrics. The resource utilization monitors how efficiently the available resources are filled by the test case scheduling algorithm, here most plans should show a high utilization of close to 100% to make the best use of the available resources. The distribution of test case priorities shows the variation in relevance of test cases. Here, there is a large block of highly important test cases with high priority but also chunks with low priority as well as average priority, indicating a good overall balance of priorities.
§ EMPIRICAL EVALUATION
After a development phase where the integration of all steps of the automated testing process was realized, we performed a one-month empirical evaluation on an existing subsystem called IPS (Integrated Painting Systems).
Even though an exhaustive quantitative evaluation of the testing process is difficult, as it substantially impacts the working processes, we drew some conclusions by examining the schedules created by the automated process.
For the evaluation, we considered 87 CI cycles.
As stated above, Fig. <ref> reports on the resource utilization and test case priority of the computed test schedules. Each schedule achieves a resource utilization of at least 91 %, with the majority having a utilization of 99 %, meaning that the available time for testing is used extensively.
An overall utilization of 100 % is not achievable for two reasons.
First, the total duration of test case execution is not guaranteed to sum up to the total available time.
Second, during scheduling, the focus is on assigning highly prioritized test cases and then filling in the available time with the most important test cases instead of maximizing the time usage.
Regarding test case priority, Fig. <ref> shows that the test cases are spread among the spectrum of possible priorities, with two noticeable clusters at the lower and upper bound of the spectrum.
Having a similar number of high- and low-priority test cases stems from the fact that high-priority test cases, once they have passed their last execution, tend to receive a low priority during the next cycle. This behavior distinguishes them from test cases that have not failed during the observed period. After not having been executed for a while, their priority grows again and these test cases become more likely to be executed again.
§ LESSONS LEARNED
We report on three lessons learned while developing the test automation process.
Automated test scheduling through CI is crucial to improve robot software/hardware quality. Automated testing through CI allows us to detect hardware/software defects on robots at an early stage and to avoid the propagation of failures to customer sites. It also reveals regression issues when the specification of a new product is not yet finalized. This approach significantly improves the overall product quality;
Incremental co-development is relevant when complex constraint optimization models have to be developed. We co-developed a test execution scheduling component as part of the automated testing process. Starting from a simple version (based on an inefficient greedy scheduling approach), we incrementally developed a complex constraint optimization model based on global constraints and rotational diversity.
This approach was key to fostering the adoption and maintenance of this complex model by people who do not necessarily have the expertise to maintain advanced constraint models;
Industry-academic co-development. The outcomes of this co-development were beneficial for both sides.
On one hand, ABB Robotics benefited from the academic expertise in constraint-based scheduling, which was required to develop test execution scheduling models.
On the other hand, scientists took advantage of the industrial experience of the test engineers in test automation processes to publish advanced research backed by empirical results. Finally, thanks to this co-development, the method was easier to transfer.
§ CONCLUSION
This paper reports on an experience to transfer constraint-based models for automated test execution scheduling at ABB Robotics.
In this work, advanced constraint-based scheduling models using global constraints and rotational diversity were developed and empirically evaluated, and industrialized as part of a complete CI process.
Further work includes refinement in the description of test cases to handle specific globally-shared external equipment.
|
http://arxiv.org/abs/2306.04353v1
|
20230607113546
|
Reversible Numeric Composite Key (RNCK)
|
[
"Nicola Asuni"
] |
cs.DB
|
[
"cs.DB",
"E.2; H.3.1"
] |
Reversible Numeric Composite Key (RNCK)
Nicola Asuni
July 31, 2023
========================================
In database design, Composite Keys are used to uniquely identify records and prevent data duplication. However, they require more memory and storage space than single keys, and can make queries more CPU-intensive. Surrogate Keys are an alternative that can overcome some of these limitations, but they can also introduce new disadvantages.
To address these challenges, a new type of key called a Reversible Numeric Composite Key (RNCK) has been developed. RNCK is a single number that encodes multiple data attributes, and can be decoded back to the original values. This makes it possible to achieve the benefits of both Composite Keys and Surrogate Keys, while overcoming some of their limitations.
RNCK has been shown to improve query performance and reduce memory and storage requirements. It can be used in relational databases, large static datasets, and key-value caching systems. RNCK has been successfully used in production systems for several years.
§ INTRODUCTION
In data modeling, or database design, a Composite Key is a unique identifier made up of two or more attributes (database table columns). For example, a book record might be identified by its ISBN code, title, and author. These attributes cannot be used individually to identify a book. The ISBN code alone is not a unique identifier, because there could be two or more books with the same code. However, no two books will have the same combination of these attributes.
Composite Keys <cit.> generally help maintain data integrity and prevent data duplication. Compared to Surrogate Keys <cit.>, which are artificial keys that are not based on real-world data, Composite Keys are easy to implement because they can be created using existing attributes that are often Natural Keys <cit.>. However, when a Composite Key is referenced in multiple data sets (database tables) as a Foreign Key <cit.>, it uses more memory or storage space, as multiple attributes (columns) are required instead of just possibly one. This leads to a more complex schema. Queries become CPU-intensive, as every search and join requires comparing multiple attributes instead of just one in the case of a single key.
Surrogate Keys can overcome some of the Composite Key limitations at the expense of introducing other disadvantages. Surrogate Keys do not provide any information about the represented data, making it difficult to understand and interpret the data. They must be generated and maintained separately from the data they represent, in what can be an error-prone process. They may require more storage space for the creation of additional natural attribute indexes in order to avoid duplications and full table scans when fulfilling likely queries.
The performance of queries, including memory consumption, is also affected by the type of each attribute.
Comparing integer numerical types is a very well-optimized operation in current computer architectures. Most current CPUs can perform multiple uint64 comparisons at the same time using SIMD (Single Instruction, Multiple Data) instructions. Generally, an x86 CPU with 16 computing cores can perform a theoretical maximum of 32 uint64 comparisons per clock cycle. With a 5.0 GHz CPU, this is a theoretical maximum of 160 billion comparisons per second. The actual number of uint64 comparisons that can be performed per second will depend on other factors, including the specific application and the number of other tasks that the CPU is running.
In contrast with the simplicity and high performance of numerical comparisons, string comparison is a slow operation, especially for large strings. This is because strings are typically stored as arrays of characters and each character must be compared individually. This assumes the best-case scenario where each string has already been normalized to a common canonical form. This is not necessarily the case; more complex and expensive comparisons can be required. Alternatively, string comparisons can be performed more efficiently by using a hash table, but this can introduce some disadvantages such as space complexity, collisions, load factor, and hash function performance <cit.>.
To combine the advantages of Composite Keys (CKs) and Surrogate Keys (SKs) while overcoming some of their limitations, a Reversible Numeric Composite Key (RNCK) is presented here.
RNCK can be used only in certain cases, when the total number and maximum size of the CK attributes are relatively small. RNCK encodes one or more attributes into a number, such that it is possible to directly and efficiently decode the original attributes while preserving some attribute sorting and searching properties. In general, compared to CKs and SKs, the use of RNCK makes it possible to increase query performance while reducing memory and storage requirements.
In addition to relational databases, RNCK can be a very effective index in large static datasets and key-value caching systems.
§ DEFINITION
A Reversible Numeric Composite Key (RNCK) is a single number that uniquely represents a Composite Key (CK) or a single non-numeric Natural Key.
The RNCK number is generated from one or more data attributes using an encoding function. The code that is generated can be directly reversed to the original attributes using a decoding function. The encoding and decoding functions are bijective.
The RNCK format is designed to make encoding and decoding operations fast and inexpensive.
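As a minimal sketch of the idea, consider a hypothetical composite key made of a country code from a small enumeration, a year in 1970-2097, and a sequence number below 2^32 (these fields are ours for illustration and are not part of the formats described later). Each attribute is packed into a fixed binary section, ordered by sorting priority:

    # Minimal RNCK sketch (illustrative; not the reference implementation).
    # Layout, MSB to LSB: 2-bit country enum | 7-bit year offset | 32-bit sequence.
    COUNTRIES = ("DE", "FR", "IT", "US")                  # small enumeration lookup table
    COUNTRY_IDX = {c: i for i, c in enumerate(COUNTRIES)}

    def rnck_encode(country: str, year: int, seq: int) -> int:
        assert 1970 <= year < 1970 + 128 and 0 <= seq < 2**32
        return (COUNTRY_IDX[country] << 39) | ((year - 1970) << 32) | seq

    def rnck_decode(key: int) -> tuple:
        return (COUNTRIES[key >> 39], 1970 + ((key >> 32) & 0x7F), key & 0xFFFFFFFF)

    key = rnck_encode("IT", 2023, 12345)
    assert rnck_decode(key) == ("IT", 2023, 12345)        # fully reversible

Sorting records by this single number is then equivalent to sorting them by (country, year, sequence).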
§ NORMALIZATION
Before encoding, a normalization step may be required to ensure a consistent and unambiguous representation of the CK attributes. For example, Unicode strings should be normalized to a canonical form.
Small in-memory lookup tables can be used by the encoding and decoding functions to efficiently enumerate limited attribute sets.
§ FORMAT
In performance-focused applications, RNCK is typically a 64-bit unsigned integer (uint64), as this is natively supported by most current hardware platforms. Other data types compatible with binary operations can also be used, such as uint32 or uint128, if they are available.
A RNCK follows a binary pattern, where distinct sections of the binary number (groups of bits) represent different attributes or combinations of them. The binary sections are organized from the Most Significant Bit (MSB) to the Least Significant Bit (LSB) in the order of the sorting priority of each attribute. This allows sorting by RNCK to be equivalent to sorting by the attributes in order.
The number of bits required for each section is calculated from the maximum number of distinct possible values of the corresponding attribute. For example, for an attribute with a maximum of 100 distinct values (including the null value), a binary section of at least ⌈ log_2(100) ⌉=7 bit is required. This is because 2^7=128, which is greater than or equal to the maximum number of possible values (100).
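A hedged helper for this calculation (our sketch; it computes ⌈log_2(n)⌉ for n ≥ 2 without floating-point rounding issues):

    def section_bits(distinct_values: int) -> int:
        # ceil(log2(n)) equals the number of bits needed to represent (n - 1).
        return (distinct_values - 1).bit_length()

    assert section_bits(100) == 7    # 2**7 = 128 >= 100
    assert section_bits(128) == 7
    assert section_bits(129) == 8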
§ APPLICABILITY AND LIMITATIONS
RNCK can only be used instead of CK in certain cases, such as when the total number of attributes and the maximum size of each attribute is relatively small.
The use of RNCK is only possible if the underlying data type is large enough to contain the encoding of all CK attributes. Each binary section must be large enough to store all possible values of the corresponding attribute. In some cases, additional flag bits may be required to indicate special cases.
§ PARTIALLY REVERSIBLE ENCODING
To overcome the RNCK capacity limitation, it is sometimes possible to adopt multiple encoding schemas that are indicated by bit flags.
For example, in VariantKey <cit.>, some input variants may exceed the REF+ALT binary section capacity. This is true for only about 0.4% of the records in the reference dataset. In these rare cases, the least significant bit (LSB) is set to 1 and the remaining 30 bits are filled with a hash value that is used as a key for a relatively small lookup table. This alternate encoding is a good compromise because it is rarely used and still preserves some of the RNCK properties, such as the ability to sort and search the variants by chromosome (first binary section) and position (second binary section).
§ PROPERTIES
* Each RNCK code is unique for a given set of CK attributes.
* RNCK can be quickly encoded and decoded.
* Comparing two CK values by RNCK only requires comparing two numbers, which is a very well-optimized operation in current computer architectures.
* A RNCK can be represented as a fixed-length hexadecimal string. This is useful for compatibility with text-based data representation formats or as an interchange format.
* Sorting the fixed-length hexadecimal representation of RNCK in alphabetical order is equivalent to sorting the RNCK numerically.
* Sorting by RNCK is equivalent to sorting by the CK attributes in enumeration order.
* RNCK can be used to replace CK and SK in a database to simplify common searching, merging, and filtering operations.
* All types of database joins between two datasets (inner, left, right, and full) can be easily performed using RNCK as a single index.
* RNCK can reduce data storage and memory usage, and improve performance.
* RNCK can be used with existing key-value systems, including in-memory caches, where the key is RNCK.
* RNCK can be used with columnar data formats (e.g. Apache Arrow <cit.>, Apache Parquet <cit.>) to perform very fast binary searches (see the sketch after this list).
* In some cases, RNCK can be used to speed up CK searches in a given range. See the VariantKey Overlapping Regions <cit.> for more information.
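A short self-contained sketch of the sorting and binary-search properties listed above (the two-field key used here, an 8-bit category and a 16-bit item number, is hypothetical and not one of the formats described below):

    from bisect import bisect_left

    def encode(category: int, item: int) -> int:   # toy RNCK: 8-bit | 16-bit
        return (category << 16) | item

    records = [(2, 700), (1, 5), (2, 3), (1, 999), (3, 1)]
    keys = sorted(encode(c, i) for c, i in records)

    # Sorting the numeric keys is equivalent to sorting the attribute tuples.
    assert [(k >> 16, k & 0xFFFF) for k in keys] == sorted(records)

    # A single binary search on the numeric key replaces a multi-column lookup.
    target = encode(2, 3)
    assert keys[bisect_left(keys, target)] == target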
§ EXAMPLES
Practical implementations of Reversible Numeric Composite Key (RNCK) in multiple programming languages have been successfully used in production systems for some time.
§.§ VariantKey
VariantKey is a RNCK for Human Genetic Variants <cit.>.
A reference implementation of VariantKey in multiple programming languages can be found at <cit.>: https://github.com/tecnickcom/variantkey
The VariantKey is composed of 3 sections arranged in 64 bit (bit 0 is the most significant bit):

    bit  0 -  4   CHROM    (5 bit)
    bit  5 - 32   POS      (28 bit)
    bit 33 - 63   REF+ALT  (31 bit)
Encoding example:

                        | CHROM | POS                          | REF | ALT |
    Raw variant         | chr19 | 29238770                     | TC  | TG  |
    Normalized variant  | 19    | 29238771                     | C   | G   |
    VariantKey bin      | 10011 | 0001101111100010010111110011 | 0001 0001 01 10 0000000000000000000 |
    VariantKey hex      | 98DF12F988B00000     |
    VariantKey dec      | 11015544076520914944 |
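A small decoding sketch for the example above (ours, not the reference implementation; it relies only on the 5/28/31-bit section sizes stated earlier and treats the REF+ALT section as an opaque integer):

    VK = 0x98DF12F988B00000               # VariantKey from the example above

    chrom  = VK >> 59                     # 5 most significant bits
    pos    = (VK >> 31) & ((1 << 28) - 1) # next 28 bits
    refalt = VK & ((1 << 31) - 1)         # 31 least significant bits

    assert chrom == 19
    assert pos == 29238771
    assert refalt & 1 == 0                # LSB clear: reversibly encoded, not a hashed overflow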
§.§ NumKey
NumKey is a RNCK for Short Codes or E.164 LVN.
A reference implementation of NumKey in multiple programming languages can be found at <cit.>: https://github.com/tecnickcom/numkey
The NumKey is composed of 3 sections arranged in 64 bit (bit 0 is the most significant bit):

    bit  0 -  9   COUNTRY  (10 bit, two 5-bit characters)
    bit 10 - 59   NUMBER   (50 bit)
    bit 60 - 63   LENGTH   (4 bit)
Encoding example:

                 | COUNTRY    | NUMBER  | NUM |
                 | [ISO 3166] | [E.164] | LEN |
    Number       | IT         | 123456  | 6   |
    NumKey bin   | 0100110100 00000000000000000000000000000000011110001001000000 0110 |
    NumKey hex   | 4D000000001E2406    |
    NumKey dec   | 5548434740922426374 |
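An analogous decoding sketch for this example (ours, not the reference implementation; the 10/50/4-bit split follows the layout above, and the assumption that each country character is stored as letter - 'A' + 1 in 5 bits is ours, chosen because it reproduces the example values):

    NK = 0x4D000000001E2406               # NumKey from the example above

    c1 = (NK >> 59) & 0x1F                # first 5-bit country character (assumed letter - 'A' + 1)
    c2 = (NK >> 54) & 0x1F                # second 5-bit country character
    country = chr(c1 + ord("A") - 1) + chr(c2 + ord("A") - 1)
    number  = (NK >> 4) & ((1 << 50) - 1)
    length  = NK & 0xF

    assert (country, number, length) == ("IT", 123456, 6)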
§ CONCLUSIONS
Reversible Numeric Composite Key (RNCK) is a new type of data key that combines the advantages of both Composite Keys and Surrogate Keys. It overcomes some of their limitations by being reversible, meaning that the original values can be decoded directly from the RNCK, while preserving some attribute sorting and searching properties.
RNCK can only be used in certain cases, such as when the total number and maximum size of key attributes is relatively small. In these cases, RNCK can be applied very effectively to relational databases, static datasets, and key-value caching systems.
Adopting RNCK can help reduce memory and storage requirements, while increasing query performance. RNCK has already been successfully used in production systems for several years.
| http://arxiv.org/abs/2306.02554v3 | 20230605030259 | The Voronoi Summation Formula for $\mathrm{GL}_n$ and the Godement-Jacquet Kernels | ["Dihua Jiang", "Zhaolin Li"] | math.NT | ["math.NT"] |
The Voronoi Summation Formula for GL_n and the Godement-Jacquet Kernels

Dihua Jiang and Zhaolin Li

July 31, 2023

School of Mathematics, University of Minnesota, 206 Church St. S.E., Minneapolis, MN 55455, USA.
[email protected], [email protected]
2010 Mathematics Subject Classification: Primary 11F66, 22E50, 43A32; Secondary 11F70, 22E53, 44A20.
The research of this paper is supported in part by the NSF Grant DMS-2200890.

Abstract. In this paper, we first give a new proof of the Voronoi summation formula for GL_n over a number field (<cit.>) by means of the π-Poisson summation formula on GL_1 (<cit.>) for any irreducible cuspidal automorphic representation π of GL_n; the duality between the two sides of the Voronoi formula is realized by the π-Fourier transform. We then introduce the notion of the Godement-Jacquet kernels H_π,s and their dual kernels K_π,s for any irreducible cuspidal automorphic representation π of GL_n and show in Theorems <ref> and <ref> that H_π,s and K_π,1-s are related by the nonlinear π_∞-Fourier transform if and only if s∈ℂ is a zero of L_f(s,π_f), the finite part of the standard automorphic L-function L(s,π). These results are the (GL_n,π)-versions of <cit.>, which treats the Tate kernel, that is, the case n=1 with π the trivial character.
§ INTRODUCTION
The Godement-Jacquet theory (<cit.>) of the standard L-functions L(s,π) of GL_n was reformulated in <cit.> as an extension of the Tate thesis that uses harmonic analysis on GL_1 to establish the Hecke theory for L(s,π) when π is an irreducible cuspidal automorphic representation of GL_n. This reformulation has several
nice applications:
* The local theory of this reformulation, as developed in <cit.>, proves that the (nonlinear) Fourier transform responsible for the local functional
equation is given by a convolution operator with an explicitly defined kernel function k_π_ν,ψ_ν on GL_1, which will be related to a Bessel function
in Section <ref> of this paper; and that all the Langlands gamma factors take the form of I. Gelfand, M. Graev, and I. Piatetski-Shapiro in <cit.> and of A. Weil in <cit.>.
* The global theory of this reformulation, as developed in <cit.>, gives the adelic formulation of A. Connes' theorem (<cit.>) and the complete version of C. Soulé's theorem (<cit.>), which provides a spectral interpretation of the zeros of L(s,π).
* In Section <ref> of this paper, the local and global theory of this reformulation in <cit.> provides a new (Poisson summation formula) proof of the Voronoi formula for any irreducible cuspidal automorphic representation π of GL_n, which was previously proved by S. Miller and W. Schmid in <cit.> and by A. Ichino and N. Templier in <cit.> using the Rankin-Selberg convolution of H. Jacquet, I. Piatetski-Shapiro and J. Shalika in <cit.>.
* In Section <ref> of this paper, the local and global theory of this reformulation in <cit.> defines the Godement-Jacquet kernels for L(s,π) and proves in Theorems <ref> and <ref> the (GL_n,π)-versions of
Clozel's theorem (<cit.>) for any irreducible cuspidal automorphic representation π of GL_n, which was proved in <cit.> for the Tate kernels
associated with the Dedekind zeta function ζ_k(s) of any number field k.
§.§ Voronoi summation formula
The classical Voronoi summation formula and its recent extension to the _n-version have been one of the most powerful tools in number theory and relevant areas in analysis.
We refer to an enlightening survey paper by S. Miller and W. Schmid (<cit.>) for a detailed account of the current state of the art of the Voronoi summation formula and its applications to important problems in number theory.
The Voronoi summation formula for _n was first studied by S. Miller and W. Schmid in <cit.> for n=3 and in <cit.> for general n. They use two approaches. One is based on classical harmonic analysis that has been developed in their earlier paper (<cit.>), and the other is based on the adelic version of the Rankin-Selberg convolutions for _n×_1, which was developed by H. Jacquet, I. Piatetski-Shapiro and J. Shalika in <cit.> and by J. Cogdell and Piatetski-Shapiro in <cit.>
(and also by Jacquet in <cit.> for the Archimedean local theory). The classical approach to the Voronoi formula for _n has also been discussed in <cit.> and <cit.>. A complete treatment of the adelic approach to the Voronoi formula for _n over a general number field was given by A. Ichino and N. Templier in <cit.>.
We recall their general Voronoi formula for GL_n below.
Let k be a number field and the ring of adeles of k.
For each
irreducible cuspidal automorphic representation π of _n(), the Voronoi summation formula is an identity of two summations. One side
of the identity is given by certain data associated with π and the other side is given by certain corresponding data associated with π, the contragredient of π.
Let ψ=⊗_νψ_ν be a non-trivial additive character on /k.
At each local place ν of k, to a smooth compactly supported function w_ν(x)∈_c^∞(k_ν^×) is associated a dual function w_ν(x) such that the following functional equation
∫_k_ν^×w_ν(y)χ_ν(y)^-1|y|_ν^s-n-1/2^× y=γ(1-s,π_ν×χ_ν,ψ_ν)∫_k_ν^×w_ν(y)χ_ν(y)|y|_ν^1-s-n-1/2^× y
holds for all s∈ and all unitary characters χ_ν of k_ν^×.
Any irreducible cuspidal automorphic representation π of GL_n(𝔸) is generic, i.e. it has a non-zero Whittaker-Fourier coefficient. If we write π=⊗_ν∈|k|π_ν, where |k| denotes the set of all local places of k, then at any local place ν the local component π_ν is an irreducible admissible and generic representation of GL_n(k_ν).
Let (π_ν,ψ_ν) be the local Whittaker model of π_ν, and W_ν(g) be any Whittaker function on _n(k_ν) that belong to (π_ν,ψ_ν)
(see Section <ref> for the details).
Let S be a finite set of |k| including all Archimedean places and the local places ν where π_ν or ψ_ν is ramified.
As usual, we write
=_S×^S
where _S=∏_ν∈ Sk_ν, which is naturally embedded as a subring of , and ^S the subring of adeles with trivial component above S.
At ν∉ S, we take the unramified Whittaker vector ^∘ W_ν of π_ν, which is so normalized that
^∘ W_ν(_n)=1. Denote by ^∘ W^S:=∏_ν∉ S^∘ W_ν, which is the normalized unramified Whittaker function of
π^S=⊗_ν∉ Sπ_ν. Similarly, we define ^∘W^S=∏_ν∉ S^∘W_ν to be the (normalized) unramified Whittaker function of π^S=⊗_ν∉ Sπ_ν. We recall that the functions ^∘ W^S and ^∘W^S are related by the following
^∘W^S(g)=^∘ W^S(w_n^tg^-1)
for all g∈_n(^S), where w_n is the longest Weyl element of _n=_n as defined in (<ref>).
The following is the Voronoi formula proved in <cit.>. The unexplained notation will be defined in Sections <ref> and <ref>.
For ζ∈^S, let R=R_ζ be the set of places ν such that |ζ_ν|>1. At each ν∈ S let w_ν∈_c^∞(k_ν^×). Then:
∑_α∈ k^×ψ(αζ)·^∘ W^S( [ α ; _n-1 ])w_S(α)=∑_α∈ k^×_R(α,ζ,^∘W_R)·^∘W^R∪ S( [ α ; _n-1 ])w_S(α),
where w_S(α):=∏_ν∈ Sw_ν(α) and the same for w_S(α), and _R(α,ζ,^∘W_R) is
a finite Euler product of the local Kloosterman integrals:
_R(α,ζ,^∘W_R):=∏_ν∈ R_ν(α,ζ_ν,^∘W_ν).
For the place ν, the local Kloosterman integral _ν(α,ζ_ν,^∘W_ν) is defined by
_ν(α,ζ_ν,^∘W_ν)
:=|ζ_ν|_ν^n-2∫_U^-_τ(F_ν)ψ_ν(u_n-2,n-1)^∘W_ν(τ u) u,
where
U^-_τ={[ _n-2 * 0; 0 1 0; 0 0 1 ]} and τ=[ 0 1 0; _n-2 0 0; 0 0 1 ][ _n-2 0 0; 0 -αζ_ν^-1 0; 0 0 -ζ_ν ],
as given in <cit.>.
The proof of Theorem <ref> in <cit.> is based on the local and global theory of the Rankin-Selberg convolution for _n×_1 (<cit.>).
It is important to mention that Theorem <ref> and its proof have been extended by A. Corbett to cover an even more general situation with more applications in
analytic number theory (<cit.>).
From the historical development of the Voronoi summation formula, one expects that there should be a proof of the Voronoi formula via a certain kind of Poisson summation formula.
In other words, the two sides of the Voronoi formula should be related by a certain kind of Fourier transform and the identity should be deduced from the corresponding Poisson
summation formula.
In the current proof of Theorem <ref>, such important ingredients from the harmonic analysis were missing, although
there were discussions in <cit.> and <cit.> on the local Bessel transform with the kernels deduced from the local functional equation in the local theory of
the Rankin-Selberg convolution in <cit.> and <cit.>, and the identity was deduced from explicit computations from the global zeta integrals of the Rankin-Selberg convolution (<cit.> and <cit.>). Over the Archimedean local fields, Z. Qi has developed in <cit.> a theory of fundamental Bessel functions of high rank and
formulated those Bessel transforms in the framework of general Hankel transforms that are integral transforms with Bessel functions as the kernel functions.
In order to carry out such an expectation, we use the _1-harmonic analysis: π-Schwartz space, π-Fourier transform, and the associated π-Poisson summation formula
for any irreducible cuspidal automorphic representation π of _n(), as developed in <cit.>, to give a new proof of the Voronoi formula for _n.
We note that the π-Poisson summation formula on _1 in <cit.> (Theorem <ref>) relies heavily on the work of R. Godement and H. Jacquet (<cit.>). Hence our proof of Theorem <ref> is in principle based on the local and global theory of the Godement-Jacquet integrals for the standard L-functions of _n×_1.
Our proof indicates that the Voronoi formula in Theorem <ref> is a special case of the π-Poisson summation formula on _1 in
<cit.> (Theorem <ref>) with a particular choice of inputting functions. We refer to Remark <ref> for consideration of the extended
Voronoi formula (<cit.>) as a special case of the π-Poisson summation formula on _1 in <cit.> (Theorem <ref>).
In this sense, the π-Poisson summation formula on _1 for any irreducible cuspidal automorphic representation π of _n() established in <cit.> can be regarded as the most general (or universal) identity of the Voronoi type.
The most technical part of our proof is to specialize the local theory of harmonic analysis on _1 associated to any irreducible smooth representation of _n(k_ν),
as developed in <cit.>, to the situation of the Voronoi formula as indicated in Theorem <ref>.
For any irreducible smooth representation π_ν of _n(k_ν), which is of Casselman-Wallach type if ν is an infinite local place of k,
we define the π_ν-Bessel function _π_ν,ψ_ν(x) on k_ν^× (Definitions <ref>, <ref> and <ref>) and obtain a series of
results on the relations between the π_ν-Bessel functions _π_ν,ψ_ν(x), the π_ν-Fourier transforms _π_ν,ψ_ν and the π_ν-kernel functions
k_π_ν,ψ_ν(x) as introduced and studied in <cit.> (see (<ref>) and (<ref>) for details), and on new formulas for the dual functions w_ν(x) of w_ν(x)∈^∞_c(k_ν^×). We summarize
those results as the following theorem.
For any local place ν of the number field k, let π_ν be an irreducible smooth representation π_ν of _n(k_ν), which is of Casselman-Wallach type if ν is infinite. For any w_ν(x)∈^∞_c(k_ν^×), w_ν(x) is the dual function of w_ν(x) as (<ref>) or in Theorem <ref>. Then the following hold.
(1) The π_ν-Fourier transform _π_ν,ψ_ν realizes the duality between w_ν(x) and w_ν(x), up to normalization,
_π_ν,ψ_ν(w_ν(·)|·|_ν^1-n/2)(x)=w_ν(x)|x|_ν^1-n/2
for any x∈ k_ν^×.
(2) The dual function w(x) of w_ν(x) enjoys the following formula:
w_ν(x)=|x|_ν^n/2-1(k_π_ν,ψ_ν(·)*(w_ν^∨(·)|·|_ν^n/2-1))(x),
for any x∈ k_ν^×, where k_π_ν,ψ_ν(x) is the π_ν-kernel function of π_ν as in (<ref>) and w_ν^∨(x)=w_ν(x^-1).
(3) As distributions on k_ν^×, the π_ν-kernel function k_π_ν,ψ_ν as in (<ref>) and the π_ν-Bessel function are related by the following identity:
k_π_ν,ψ_ν(x)=_π_ν,ψ_ν(x)|x|_ν^1/2
for any x∈ k_ν^×.
(4) The dual function w_ν(x) of w_ν(x) enjoys the following formula:
w_ν(x)=|x|_ν^n-1/2(_π_ν,ψ_ν(·)*(w_ν^∨(·)|·|_ν^n-3/2))(x),
for all x∈ k_ν^×.
The proof of Theorem <ref> is given in Sections <ref> and <ref>. More precisely, Part (1) of Theorem <ref> is Proposition <ref>.
Part (2) is Corollary <ref>. Part (3) is a combination of Propositions <ref>, <ref>, and <ref>. Part (4) is an easy consequence of Parts (2) and (3), which is Corollary <ref>.
It is important to point out that the π_ν-Bessel functions _π_ν,ψ_ν(x) in the p-adic case is defined by means of the Whittaker model of π_ν following the general framework of
E. Baruch in <cit.>. Hence we have to assume in the p-adic case that π_ν is generic in the definition of the π_ν-Bessel functions _π_ν,ψ_ν(x). However, in the real or
complex case, we follow the general theory of Z. Qi in <cit.> on Bessel functions of high rank, which works for general irreducible smooth representations of _n of Casselman-Wallach type. Hence
in the real or complex case, the definition of the π_ν-Bessel functions _π_ν,ψ_ν(x) does not require that π_ν is generic. Since the π_ν-kernel function k_π_ν,ψ_ν as in (<ref>) is defined based on the Godement-Jacquet theory, which does not require that π_ν is generic, the uniform result in Part (3) of Theorem <ref> suggests that
one may define the π_ν-kernel function k_π_ν,ψ_ν to be the π_ν-Bessel functions _π_ν,ψ_ν(x) in the p-adic case when π_ν is not generic. Finally, let us mention that the π_ν-Bessel functions _π_ν,ψ_ν(x) in the real case as given in Definition <ref> is more general than the one defined in <cit.>, and refer to Remark <ref> for details.
§.§ Godement-Jacquet kernels and Fourier transform
Write |k|=|k|_∞∪|k|_f, where |k|_∞ denotes the subset of |k| consisting of all Archimedean local places of k, and |k|_f denotes the subset of |k| consisting
of all finite local places of k.
Write 𝔸_∞=∏_ν∈|k|_∞ k_ν. For x=(x_ν)∈𝔸_∞, set
|x|_∞:=∏_ν∈|k|_∞|x_ν|_ν. Let 𝒪=𝒪_k be the ring of algebraic integers of k. L. Clozel defines in <cit.> the Tate kernel
H_s(x):=X^{s-1}∑_{𝔞⊂𝒪, N(𝔞)≤ X} N(𝔞)^{-s}-κ/(1-s)
for x∈𝔸_∞^× with X=|x|_∞, where 𝔞 runs over the nonzero integral ideals 𝔞⊂𝒪, and κ=Res_{s=1}ζ_k(s). Here ζ_k(s)=∑_{𝔞⊂𝒪} N(𝔞)^{-s} is the Dedekind zeta function of k with
N(𝔞):=|N_{k/ℚ}(𝔞)|,
the absolute norm of 𝔞; and
the dual kernel
K_s(x):=D^{-1/2} X^{s-1}∑_{𝔞⊂𝒟^{-1}, N(𝔞)≤ X} N(𝔞)^{-s}-κ· D^{1/2}/(1-s),
where 𝒟=𝒟_k is the different of k and 𝒟^{-1} is the inverse different, and D=N(𝒟) is the absolute value of the discriminant.
Theorem 1.1 of <cit.> expresses the relation between the tempered distributions H_s(x) and K_s(x) on 𝔸_∞^× in terms of the condition ζ_k(s)=0 with σ=Re(s)∈(0,1); it can be stated more precisely as follows.
Assume that σ=Re(s)∈(0,1). Then ζ_k(s)=0 if and only if
ℱ_∞(H_s)=-K_{1-s},
where ℱ_∞ is the usual Fourier transform over 𝔸_∞ with a suitably normalized measure.
Let π be an irreducible cuspidal automorphic representation of _n() and write
π=π_∞⊗π_f, where π_f:=⊗_p<∞π_p, and write the standard L-function of π as
L(s,π)=L_∞(s,π_∞)· L_f(s,π_f)
for (s) sufficiently positive.
As usual, L(s,π) is called the complete L-function associated with π, and L_f(s,π_f) is called the finite part of the L-function associated with π.
The local and global theory of R. Godement and H. Jacquet in <cit.> introduces the global zeta integrals for L(s,π) and proves that
L(s,π) has analytic continuation to an entire function in s∈ and satisfies the functional equation
L(s,π)=ϵ(s,π)· L(1-s,π).
Following the reformulation as developed in <cit.>, for an irreducible cuspidal automorphic representation π of _n(), there exists a π-Schwartz space
_π(^×) as defined in (<ref>), which defines the _1-zeta integral
(s,ϕ)=∫_^×ϕ(x)|x|_^s-1/2^× x
for any ϕ∈_π(^×).
By <cit.> the zeta integral (s,ϕ) converges absolutely for (s)>n+1/2, admits analytic continuation to an entire function in s∈, and satisfies the functional equation
(s,ϕ)=(1-s,_π,ψ(ϕ)).
where _π,ψ is the π-Fourier transform as defined in (<ref>). From the global functional equation in (<ref>), we introduce the notion of the Godement-Jacquet kernels for L(s,π) in Definition <ref>, which can be briefly explained as follows.
Write x∈^× as x=x_∞· x_f with x_∞∈_∞^× and x_f∈_f^×.
Set
^>1:={x∈^× |x|_>1}.
For x∈^>1,
we have that |x|=|x_∞|_·|x_f|_>1 and |x_f|_>|x_∞|_^-1.
Write
_π(^×)=_π_∞(_∞^×)⊗_π_f(_f^×).
For ϕ=ϕ_∞⊗ϕ_f∈_π(^×) with ϕ_∞∈_π_∞(_∞^×) and ϕ_f∈_π_f(_f^×), we write
∫_^>1ϕ(x)|x|_^s-1/2^×x
=∫__∞^×ϕ_∞(x_∞)|x_∞|_^s-1/2^×x_∞∫__f^×^>|x_∞|^-1ϕ_f(x_f)|x_f|_^s-1/2^×x_f,
for any s∈, where the inner integral is taken over the domain { x_f∈_f^× |x_f|_> |x_∞|_^-1}.
Proposition <ref> shows that the integral converges absolutely for any s∈ and is holomorphic in s∈.
By the Fubini theorem and the support in _f^× of ϕ_f, which is a fractional ideal of k (Proposition <ref>) , the inner integral
∫__f^×^>|x_∞|_^-1ϕ_f(x_f)|x_f|_^s-1/2^×x_f
converges absolutely for any s∈ and any x_∞∈_∞^×. The Godement-Jacquet kernel for L(s,π) is defined by
H_π,s(x_∞,ϕ_f):=
|x_∞|_^s-1/2∫__f^×^>|x_∞|_^-1ϕ_f(x_f)|x_f|_^s-1/2^× x_f,
for x_∞∈_∞^× and for all s∈. L. Clozel defines in <cit.> the dual kernel for the case of _1 with π the trivial character.
We define here the dual kernel of the Godement-Jacquet kernel H_π,s(x_∞,ϕ_f) for L(s,π) to be
K_π,s(x_∞,ϕ_f):=
|x_∞|_^s-1/2∫__f^×^>|x_∞|_^-1_π_f,ψ_f(ϕ_f)(x_f)|x_f|_^s-1/2^× x_f,
for x_∞∈_∞^× and for all s∈. Proposition <ref> shows that both kernel functions H_π,s(x_∞,ϕ_f) and K_π,s(x_∞,ϕ_f) on _∞^× can be extended uniquely to tempered distributions on _∞ for any ϕ_f∈_π_f(_f^×) and for any s∈, by using the work of
S. Miller and W. Schmid in <cit.>.
We are able to match the kernels H_π,s(x_∞,ϕ_f) and K_π,s(x_∞,ϕ_f) with the Euler product expression or Dirichlet series expression of the finite part L-function L_f(s,π_f) by specifically choosing the π_f-Schwartz functions ϕ_f∈_π_f(_f^×), and prove in Section <ref> the π-versions of the Clozel theorem (Theorem <ref>). We refer to Theorems <ref> and <ref> for the details.
§.§ Organization of the paper
We recall the π-Poisson summation formula on _1 developed by Z. Luo and the first named author of this paper in <cit.> in Section <ref>. Sections <ref> and <ref> are to review briefly the local π-Schwartz spaces and local π-Fourier operators developed in <cit.> and <cit.>. Based on their work as well as the work of <cit.>, we recall the formulation of the π-Poission summation formula on _1 in <cit.> in Section <ref>.
Sections <ref> and <ref> are devoted to understand the duality between the function w_ν(x) and the function w_ν(x) by means of the harmonic analysis on _1 as developed in
<cit.> and <cit.>, and to prove our main local results (Theorem <ref>).
By comparing the Godement-Jacquet theory with the _n×_1 Rankin-Selberg convolution, we are able to express the dual function w_ν(x) of w_ν(x) in terms of the π_ν-Fourier transform _π_ν,ψ_ν up to certain normalization (Proposition <ref>), based on Proposition <ref> that identifies the π_ν-Schwartz space _π_ν(k_ν^×), as introduced in <cit.> and recalled in (<ref>), with the π_ν-Whittaker-Schwartz space _π_ν(k_ν^×) as defined in (<ref>). As a consequence, we obtain a formula
that express the dual function w_ν(x) of w_ν(x) as a convolution of the π_ν-kernel function k_π_ν,ψ_ν(x) with w_ν(x), up to certain normalization (Corollary <ref>).
In Section <ref>, we introduce the notion of π_ν-Bessel functions _π_ν,ψ_ν(x) (Definitions <ref>, <ref> and <ref>) and prove the precise relation between the π_ν-Bessel functions
and π_ν-kernel functions as defined in (<ref>) (Propositions <ref>, <ref>, and <ref>). In the p-adic case, the π_ν-Bessel function _π_ν,ψ_ν(x) on
k_ν^× is introduced following the work of E. Baruch in <cit.>, which is recalled in Section <ref>. In the real or complex case, we introduce the π_ν-Bessel function _π_ν,ψ_ν(x) on k_ν^× by following the general theory of Bessel functions of high rank by Z. Qi in <cit.>. It should be mentioned that the π_ν-Bessel function _π_ν,ψ_ν(x) on ^× in the real case is more general than the one considered in <cit.> (Remark <ref>). As expected, when n=2 our results recover the previous known
results as discussed by J. Cogdell in <cit.> and by D. Soudry in <cit.>.
With all these ingredients, we are able to give a new proof of the Voronoi formula for _n (Theorem <ref>) as proved in <cit.> in Section <ref>. In fact, the proof of the Voronoi formula in <cit.> is based on the Rankin-Selberg convolution for _n×_1 in <cit.>, <cit.> and <cit.>. And our proof is based on the Godement-Jacquet theory in <cit.>
and its reformulation in <cit.>. The main idea is that the Voronoi summation formula for _n as in Theorem <ref> is a special case of the π-Poission summation formula
on _1 as in Theorem <ref> (<cit.>), after the long computations carried out in Section <ref> of this paper and in <cit.>. Those computations enable us to
express the summands on the dual side (the right-hand side) of the Voronoi formula in Theorem <ref> as the global π-Fourier transform of the summands on the given side (the left-hand side),
which is Proposition <ref>.
In Section <ref>, in order to define the Godement-Jacquet kernels H_π,s(x,ϕ_f) and their dual kernels K_π,s(x,ϕ_f) (Definition <ref>) for any irreducible cuspidal automorphic
representation π of _n(), we develop further properties (Propositions <ref> and <ref>, and Corollary <ref>) of the global zeta integrals (s,ϕ), as defined in (<ref>), by using the π-Fourier transform _π,ψ and the associated π-Poisson summation formula as developed in <cit.>.
In Proposition <ref>, we show that both kernel functions H_π,s(x_∞,ϕ_f) and K_π,s(x_∞,ϕ_f) on _∞^× can be extended uniquely to tempered distributions on _∞ for any ϕ_f∈_π_f(_f^×) and for any s∈. In Section <ref>, guided by Theorem <ref>, we prove in Proposition <ref> that if s∈ is a zero of L_f(s,π_f), then the kernel H_π,s(x_∞,ϕ_f) is equal to the negative of
π_∞-Fourier transform of the dual kernel K_π,1-s(x_∞,ϕ_f). For any ϕ_∞∈_π_∞(_∞^×), take ϕ^⋆=ϕ_∞⊗ϕ_f^⋆, where ϕ_f^⋆:=⊗_νϕ_ν with ϕ_ν as given in Proposition <ref>. Theorem <ref> proves
the π-version of Theorem <ref> for the Euler product expression of L_f(s,π_f). With the help of Lemma <ref>, we obtain the Dirichlet series expression of the kernels in Propositions <ref> and <ref>. Finally, Theorem <ref> establishes the π-version of Theorem <ref> for the Dirichlet series expression of L_f(s,π_f).
This paper was finalized while both authors were visiting the Institute for Advanced Study in Mathematics, Zhejiang University. We would like to thank Professors Jianshu Li and Binyong Sun for the invitation, and the Institute for providing a wonderful research environment and warm hospitality.
§ Π-POISSON SUMMATION FORMULA
We recall from <cit.> the π-Schwartz spaces and the π-Fourier operators both for the local and global cases and the π-Poisson summation formula for any irreducible cuspidal automorphic representation π of _n(), where is the ring of adeles of a number field k.
§.§ π-Schwartz functions
Let |k| be the set of all local places of k.
For any local place ν, we denote by F=k_ν, the local field of k at ν.
If F is non-Archimedean, we denote by =_F the ring of integers and by =_F the maximal ideal of .
Let _n=_n be the general linear group defined over F. Fix the following maximal compact subgroups K of _n(F):
K={[ _n(_F), F ,; (n), F=,; (n), F=. ].
Let _n(F) be the space of all n× n matrices over F and (_n(F)) be the space of Schwartz functions on _n(F). When F is Archimedean, it is the space of usual Schwartz functions
on the affine space _n(F), and when F is p-adic, it consists of all locally constant, compactly supported functions on _n(F).
Let |·|_F be the normalized absolute value on the local field F, which is the modular function of the multiplication of F^× on F with respect to the self-dual additive Haar measure ^+x on F. As a reformulation of the local Godement-Jacquet theory in <cit.>, the (standard) Schwartz space on _n(F) is defined to be
_(_n(F)):={ξ∈^∞(_n(F)) | g|_F^-n/2·ξ(g)∈(_n(F))},
where ^∞(_n(F)) denotes the space of all smooth functions on _n(F).
By <cit.>, the Schwartz space _(_n(F)) is a subspace of L^2(_n(F), g), which is the space of square-integrable functions on _n(F).
Consider the determinant map
=_F_n(F)=_n(F)→ F.
When restricted to F^×, we obtain that
=_F_n(F)=_n(F)→ F^×
and the fibers of the determinant map are of the form:
_n(F)_x:=
{g∈_n(F) g=x}.
When x=1, the fiber is the kernel of the map, i.e. ()=_n(F). In general, each fiber _n(F)_x is an _n(F)-torsor.
Let ^+g be the self-dual Haar measure on _n(F) with respect to the standard Fourier transform defined by (<ref>) below. On _n(F), we fix
the Haar measure
g = | g|_F^-n·^+g
Let _1 g be the induced Haar measure g from _n(F) to _n(F). It follows that the Haar measure _1 g induces an _n(F)-invariant measure _x g on each fiber _n(F)_x.
Let Π_F(_n) be the set of equivalence classes of irreducible smooth representations of _n(F) when F is non-Archimedean;
and of irreducible Casselman-Wallach representations of _n(F) when F is Archimedean. For π∈Π_F(_n), we denote by (π) the space of all matrix coefficients of π.
Write ξ=| g|_F^n/2· f(g)∈_(_n(F)) with some f∈(_n(F))
as in (<ref>). For φ_π∈(π), as in <cit.>,
we define
ϕ_ξ,_π(x) := ∫__n(F)_xξ(g)_π(g)_x g
=
|x|_F^n/2∫__n(F)_x
f(g)_π(g)_x g.
By <cit.>, the function ϕ_ξ,_π(x) is absolutely convergent for all x∈ F^× and is smooth over F^×.
As in <cit.>, for any π∈Π_F(_n), the space of π-Schwartz functions is defined as
_π(F^×) = {ϕ=ϕ_ξ,_π∈^∞(F^×) ξ∈_(_n(F)),_π∈(π)}.
By <cit.>, we have
_c^∞(F^×)⊂_π(F^×)
⊂^∞(F^×).
§.§ π-Fourier transform
Let ψ=ψ_F be a fixed non-trivial additive character of F.
The (standard) Fourier transform _ψ on (_n(F)) is defined as follows,
_ψ(f)(x) = ∫__n(F)ψ((xy))f(y)^+y.
It is well-known that the Fourier transform _ψ extends to a unitary operator on the space L^2(_n(F),^+x) and satisfies the following identity:
_ψ∘_ψ^-1 =.
Following the reformulation of the local Godement-Jacquet theory in <cit.>, the Fourier transform _ψ on (_n(F)) yields
a (nonlinear) Fourier transform _ on _(_n(F)), which is a convolution operator with the distribution kernel:
Φ_(g):=ψ( g)·| g|_F^n/2.
More precisely, the Fourier transform _ is defined to be
_(ξ)(g):=(Φ_*ξ^∨)(g)
for any ξ∈_(_n(F)), where ξ^∨(g):=ξ(g^-1). From <cit.>, a relation between
the (nonlinear) Fourier operator _ and the (classical or linear) Fourier transform _ψ is given by
_(ξ)(g)=(Φ_*ξ^∨)(g)
=
| g|_F^n/2·_ψ(| g|_F^-n/2ξ)(g).
From the proof of <cit.>, it is easy to obtain that
(Φ_*ξ^∨)(g)
=
| g|_F^n/2(ψ((·))*(|(·)|_F^n/2ξ)^∨)(g)
for any ξ∈_(_n(F)).
As in <cit.>, the π-Fourier transform _π,ψ is defined through the following diagram:
_(_n(F))⊗(π)[d][rrr]^(_,(·)^∨) _(_n(F))⊗(π)[d]
_π(F^×) [rrr]^_π,ψ _π(F^×)
More precisely, for ϕ=ϕ_ξ,_π∈_π(F^×) with a ξ∈_(_n(F)) and
a _π∈(π), the π-Fourier transform _π,ψ is defined by
_π,ψ(ϕ)=_π,ψ(ϕ_ξ,_π):=ϕ__(ξ),_π^∨,
where _π^∨(g)=_π(g^-1)∈(π). It was verified in <cit.> that the descending π-Fourier transform _π,ψ is well defined.
From <cit.>, the π-Fourier transform _π,ψ can also be represented as a convolution operator by some kernel function k_π,ψ,
which is explicitly given as follows.
We fix a φ_π∈(π) with φ_π(_n)=1. We also
choose a sequence of test functions {_ℓ}_ℓ=1^∞⊂^∞_c(_n(F)), such that for any h∈_c^∞(_n(F)),
lim_ℓ→∞∫__n(F)_ℓ(g)h(g) g= h(_n).
In other words, the sequence {_ℓ}_ℓ=1^∞ tends to the delta mass supported at the identity _n as ℓ→∞.
The π-kernel function k_π,ψ(x) is defined as
k_π,ψ(x)
:=
∫^_ g=xΦ_(g)_π(g)_x g=
|x|^n/2_F∫^_ g=xψ((g))
_π(g)_x g
where Φ_ is the kernel function as defined in (<ref>) and the integral is regularized as follows:
∫^_ g=xΦ_(g)_π(g)_x g
:=
lim_ℓ→∞∫_ g=x(Φ_*_ℓ^∨)(g)
_π(g)
_xg.
It is shown in <cit.> that k_π,ψ is a smooth function on F^× and is independent of the choice of the matrix coefficient φ_π and the chosen sequence {_ℓ}_ℓ=1^∞ that tends to the delta mass supported at _n.
By <cit.>, we have that for any ϕ∈_c^∞(F^×)
_π,ψ(ϕ)(x)=(k_π,ψ*ϕ^∨)(x).
Following <cit.>, one may call the π-Fourier transform _π,ψ a generalized Hankel transform or the π-Hankel transform.
§.§ π-Poisson summation formula on _1
Recall that |k| is the set of all local places of the number field k. Let |k|_∞ be the subset of |k| consisting of all Archimedean local places of k. We may write
|k|=|k|_∞∪|k|_f,
where |k|_f is the set of non-Archimedean local places of k.
Let Π_(_n) be the set of equivalence classes of irreducible admissible representations
of _n(). We write π=⊗_ν∈|k|π_ν and assume that
π_ν∈Π_k_ν(_n) and at almost all finite local places ν the local representations π_ν are unramified. This means that when ν∈ |k|_f, π_ν is an irreducible admissible representation of _n(k_ν), and when ν∈|k|_∞, π_ν is of Casselman-Wallach type as a representation of _n(k_ν).
Let (_n)⊂Π_(_n) be the subset consisting of equivalence classes of irreducible admissible automorphic representations
of _n(), and _(_n) be the subset of cuspidal members in (_n). We refer to <cit.> or <cit.> for the notation and definition of automorphic representations.
Take any π=⊗_ν∈|k|π_ν∈Π_(_n). For each local place ν∈ |k|, the π_ν-Schwartz space _π_ν(k_ν^×) is defined as in (<ref>). Recall from <cit.> that the basic function _π_ν∈_π_ν(k_ν^×) is defined when the local component π_ν of π is unramified. Then the π-Schwartz space on ^× is defined to be
_π(^×):=⊗_ν∈|k|_π_ν(k_ν^×),
which is the restricted tensor product of the local π_ν-Schwartz space _π_ν(k_ν^×) with respect to the family of the basic functions _π_ν for all the local places ν at which
π_ν are unramified. The factorizable vectors ϕ=⊗_νϕ_ν in _π(^×) can be written as
ϕ(x)=∏_ν∈|k|ϕ_ν(x_ν), x=(x_ν)_ν.
Here at almost all finite local places ν, ϕ_ν(x_ν)=_π_ν(x_ν). According to the normalization (<cit.>), we have that _π_ν(x_ν)=1 when x_ν∈_ν^×, the unit group of
the ring _ν of integers at ν. Hence for any given x∈^×, the product in (<ref>) is a finite product.
For any factorizable vectors ϕ=⊗_νϕ_ν in _π(^×), we define the
π-Fourier transform (or operator):
_π,ψ(ϕ):=⊗_ν∈|k|_π_ν,ψ_ν(ϕ_ν).
Here at each ν∈|k|, _π_ν,ψ_ν is the local π_ν-Fourier transform as defined in
(<ref>) and (<ref>), which takes the π_ν-Schwartz space
_π_ν(k_ν^×) to the π_ν-Schwartz space _π_ν(k_ν^×),
and has the property that
_π_ν,ψ(_π_ν)=_π_ν
when the data are unramified at ν (see <cit.>). Hence the Fourier transform _π,ψ as defined in
(<ref>) maps the π-Schwartz space _π(^×) to the π-Schwartz space _π(^×), where π∈Π_(_n) is the
contragredient of π. The π-Poisson summation formula (<cit.>) can be stated as below.
For any π∈_(_n), the π-theta function
Θ_π(x,ϕ):=∑_α∈ k^×ϕ(α x)
converges absolutely and locally uniformly for any x∈^× and any ϕ∈_π(^×).
Let π∈_(_n) be the contragredient of π. Then the following identity
Θ_π(x,ϕ)
=
Θ_π(x^-1,_π,ψ(ϕ)),
holds as functions in x∈^×, where _π,ψ is the π-Fourier transform as defined in
(<ref>) that takes _π(^×) to _π(^×).
§ LOCAL HARMONIC ANALYSIS
In this section, we take F=k_ν to be a local field of characteristic zero and fix a non-trivial additive character ψ=ψ_F of F. Since the representations
π∈Π_F(_n) considered in this section are the local components of irreducible cuspidal automorphic representations of _n(), we may only consider
generic π∈Π_F(_n) without loss of generality.
Let B_n=T_nN_n be the Borel subgroup of GL_n consisting of all upper-triangular matrices of GL_n, where T_n is the maximal torus consisting of all
diagonal matrices of GL_n, and N_n is the unipotent radical of B_n, which consists of the matrices n=(n_i,j) with n_i,j=0 if 1≤ j<i≤ n and
n_i,i=1 for i=1,2,…,n. Without loss of generality, we may take a generic character as
ψ(n)=ψ_N_n(n)=ψ_F(n_1,2+n_2,3+⋯+n_n-1,n).
Let ℓ_ψ be a non-zero member in _N_n(F)(π,ψ), which is one-dimensional if π∈Π_F(_n) is generic. For any v∈ V_π, define the Whittaker function by
W_v(g):=ℓ_ψ(π(g)v).
Let (π,ψ) be the Whittaker model of π, which consists of the Whittaker functions W_v(g) as v runs through the space V_π of π. Let V_π^∞ be
the subspace of V_π consisting of all smooth vectors of V_π. We define the π-Whittaker-Schwartz space on F^× to be
_π,ψ(F^×):={ω(x):=|x|^1-n/2· W_v( [ x ; _n-1 ]) v∈ V_π^∞},
where |·|=|·|_F is the normalized absolute value on F.
For any π∈Π_F(_n), which is generic, the π-Schwartz space and the π-Whittaker-Schwartz space coincide with each other:
_π(F^×)=_π,ψ(F^×).
We first show that
_π,ψ(F^×)⊂_π(F^×).
For any unitary character χ of F^× and W∈(π,ψ), the local Rankin-Selberg integral for _n×_1
Ψ(s,W,χ):=∫_F^× W( [ x ; _n-1 ])χ(x)|x|^s-n-1/2 ^×x
=∫_F^×ω(x)χ(x)|x|^s-1/2 ^×x,
where ω(x)∈_π,ψ(F^×) as in Proposition <ref>, is absolutely convergent when (s) is sufficiently positive and the fractional ideal generated by all such integrals is [q^-s,q^s]L(s,π×χ) by <cit.> for the non-Archimedean case and a holomorphic multiple of L(s,π×χ), bounded at infinity in vertical strips due to <cit.> for Archimedean case.
According to <cit.>, there is some ϕ∈_π(F^×) such that
Ψ(s,W,χ)=(s,ϕ,χ):=∫_F^×ϕ(x)χ(x)|x|^s-1/2^×x
when (s) is sufficiently positive. In particular, fix a s_0∈ sufficiently positive such that both functions ϕ(·)|·|^s_0-1/2 and ω(·)|·|^s_0-1/2 belong to L^1(F^×), the space of L^1-functions on F^×.
It follows that
∫_F^×(ϕ(x)|x|^s_0-1/2-ω(x)|x|^s_0-1/2)χ(x)^× x=0
for all unitary character χ of F^×. From the general theory about absolutely continuous measures on local compact abelian groups (See <cit.> for instance), we must have that
ϕ(x)|x|^s_0-1/2-ω(x)|x|^s_0-1/2 =0
for a.e. x∈ F^×, which implies
ω(x)=ϕ(x)
for all x∈ F^× since both functions are smooth. Hence we obtain that _π,ψ(F^×)⊂_π(F^×).
Again, by <cit.> and the local theory of the Rankin-Selberg convolution of _n×_1 as in <cit.> for the non-Archimedean case and in
<cit.> for Archimedean case, we can repeat the above discussion to prove that
_π(F^×)⊂_π,ψ(F^×).
Hence we get that
_π(F^×)=_π,ψ(F^×).
From Proposition <ref>, the following assertion is clear, since the π-Schwartz space _π(F^×) is independent of the choice of the character ψ.
The space of π-Whittaker-Schwartz functions _π,ψ(F^×) defined in (<ref>) is independent of the choice of the character ψ.
By Corollary <ref>, we may denote by _π(F^×) the π-Whittaker-Schwartz space on F^× as defined in (<ref>).
After identifying the π-Schwartz space _π(F^×) with the π-Whittaker-Schwartz space _π(F^×), we are going to understand
the π-Fourier transform
_π,ψ_π(F^×) →_π(F^×)
in terms of the structure of Whittaker models.
For ϕ∈_π(F^×), we may write as in (<ref>) that
ϕ(x)=ω(x)=W( [ x ; _n-1 ])|x|^1-n/2
for some W∈(π,ψ). Then the π-Fourier transform can be expressed by the following formula:
_π,ψ(ϕ)(x)=_π,ψ(ω)(x)=|x|^1-n/2∫_F^n-2(π(w_n,1)W)[ x ; y _n-2 ; 1 ] y,
where W(g):=W(w_0 ^tg^-1) for any g∈_n(F) is a Whittaker function in (π,ψ^-1) and
w_n,1=[ 1 ; w_n-1 ].
Here we denote by w_m the longest Weyl element of GL_m, which is defined inductively by
w_m=[ 0 1; w_m-1 0 ], with w_2=[ 0 1; 1 0 ].
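For concreteness, a small worked instance of this induction (our computation, written in explicit LaTeX; it is not part of the original text) gives
\[
  w_3=\begin{pmatrix} 0 & 0 & 1\\ 0 & 1 & 0\\ 1 & 0 & 0 \end{pmatrix},
\]
and, more generally, the induction produces the m× m antidiagonal permutation matrix w_m=(δ_i,m+1-j).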
From the functional equation for the local zeta integrals (s,ϕ,χ) as proved in <cit.>, we have that
Ψ(s,W,χ)=(s,ϕ,χ)=(1-s,_π,ψ(ϕ),χ^-1)γ(s,π×χ,ψ)^-1.
On the other hand, from the functional equation for the local zeta integrals Ψ(s,W,χ) as proved in <cit.> for the non-Archimedean case and in <cit.> for the Archimedean case, we have that
Ψ(s,W,χ)=γ(s,π×χ,ψ)^-1∫_F^×∫_F^n-2(π(w_n,1)W)[ x ; y _n-2 ; 1 ] y χ^-1(x)|x|^3-n/2-s^×x.
From the absolute convergence of the local zeta integrals (s,ϕ,χ) and Ψ(s,W,χ), we may choose and fix a s_0∈ with (s_0) sufficiently negative,
such that both functions
_π,ψ(ϕ)(·)|·|^1/2-s_0
and
∫_F^n-2(π(w_n,1)W)[ · ; y _n-2 ; 1 ] y·|·|^3-n/2-s_0
belong to L^1(F^×). It follows that
∫_F^×( _π,ψ(ϕ)(x)|x|^1/2-s_0- ∫_F^n-2(π(w_n,1)W)[ x ; y _n-2 ; 1 ] y|x|^3-n/2-s_0) χ^-1(x)^×x=0
for any unitary character χ. Now we use the same argument as in the proof of Proposition <ref> to deduce that
_π,ψ(ϕ)(x)=|x|^1-n/2∫_F^n-2(π(w_n,1)W)[ x ; y _n-2 ; 1 ] y
for any x∈ F^×, as they are smooth in x.
In particular, in the case n=2, we have a much simpler formula.
When n=2, the action of the longest Weyl group element w_2 of _2 on the Kirillov model of π is given by the
(non-linear) Fourier transform _π,ψ:
_π,ψ(ϕ)=π(w_2)(ϕ)·ω_π^-1, for ϕ∈_π(F^×),
where the π-Schwartz space _π(F^×) and the π-Whittaker-Schwartz space _π(F^×) can be identified with the Kirillov model of π by Proposition <ref> and ω_π is the central character of π.
According to Proposition <ref>, let ϕ(x)=W( [ x ; 1 ]),
we have
_π,ψ(ϕ)(x)=W(w_2 [ x^-1 ; 1 ])=W( [ x^-1 ; x^-1 ][ x ; 1 ]w_2)=ω_π(x^-1)(π(w_0)ϕ)(x).
According to <cit.>, for any w(x)∈_c^∞(F^×), there is a unique smooth function w(x) on F^× of rapid decay at infinity and with at most polynomial growth at zero such that the local functional equation (<ref>) holds as meromorphic functions in s∈. The map w(x)↦w(x) is
called the Bessel transform in <cit.>. Some more discussions and explicit formulas related to this map were given in <cit.> based on the local functional equation of the Rankin-Selberg convolution for _n×_1 from <cit.> and <cit.>. Over the Archimedean local fields, the map w(x)↦w(x) has been
studied in <cit.> in the framework of Hankel transforms with the Bessel functions of high rank as the kernel functions.
The following result says that the map w(x)↦w(x) is given by the π-Fourier transform up to certain normalization.
The dual function w(x) of w(x)∈_c^∞(F^×) as defined by (<ref>) can be expressed in terms of π-Fourier transforms:
_π,ψ(w(·)|·|^1-n/2)=w(·)|·|^1-n/2.
Since w(x)∈_c^∞(F^×), we have w(x)|x|^1-n/2∈_c^∞(F^×) as well.
By <cit.> and as in the proof of Proposition <ref>, the right-hand side of (<ref>) can be written as
γ(1-s,π×χ,ψ)∫_F^×w(y)χ(y)|y|^1-s-n-1/2^× y=γ(1-s,π×χ,ψ)(1-s,w(·)|·|^1-n/2,χ).
By the local functional equation in <cit.>, we have
γ(1-s,π×χ,ψ)(1-s,w(·)|·|^1-n/2,χ)=(s, _π,ψ(w(·)|·|^1-n/2),χ^-1)
It follows that the left-hand side of (<ref>) can be written as
∫_F^×w(y)χ^-1(y)|y|^s-n-1/2^× y
=
∫_F^×_π,ψ(w(·)|·|^1-n/2)(y)χ^-1(y)|y|^s-1/2^× y
as meromorphic functions in s∈. Since the integrals on both sides of the above equation converge absolutely for (s) sufficiently negative, we choose one of such s_0∈ and fix it such that the two smooth functions
w(y)|y|^s_0-n-1/2 and _π,ψ(w(·)|·|^1-n/2)(y)|y|^s_0-1/2
belong to L^1(F^×). Again, by the general theory as in <cit.>, we obtain that
w(y)|y|^s_0-n-1/2=_π,ψ(w(·)|·|^1-n/2)(y)|y|^s_0-1/2,
which implies that
_π,ψ(w(·)|·|^1-n/2)(x)=w(x)|x|^1-n/2.
as functions on F^× that are smooth, of rapid decay at infinity, and with at most polynomial growth at zero.
Combining Proposition <ref> with the formula in (<ref>), we obtain a formula for w(x) for any w∈^∞_c(F^×).
For any π∈Π_F(_n), the dual function w(x) associated with any w∈^∞_c(F^×) is given by the following formula:
w(x)=|x|_F^n/2-1(k_π,ψ(·)*(w^∨(·)|·|_F^n/2-1))(x),
where k_π,ψ(x) is the π-kernel function associated with π as in (<ref>) and w^∨(x)=w(x^-1).
§ Π-BESSEL FUNCTIONS
The π-Fourier transform _π,ψ can be expressed as a convolution operator with the π-kernel function k_π,ψ as in (<ref>) using the
structures of the π-Schwartz space _π(F^×) and the π-Schwartz space _π(F^×). When we consider
the π-Fourier transform _π,ψ as a transformation from the π-Whittaker-Schwartz space _π(F^×) to the π-Whittaker-Schwartz space
_π(F^×), we are able to show that the π-Fourier transform _π,ψ can be expressed as a convolution operator with certain Bessel functions as
the kernel functions. We do this for the Archimedean and non-Archimedean cases separately.
§.§ π-Bessel functions: p-adic case
Let us first consider the case that F is non-Archimedean. In this case, a basic theory of Bessel functions was developed by E. Baruch in <cit.>, from which we recall some
important definitions and results on Bessel functions in order to understand the π-Fourier transform.
Let Φ={α_i,j=e_i-e_j| 1≤ i< j≤ n } be the roots of _n with respect to the F-split maximal torus T_n, Φ^+={α_i,j| i<j } be the set of positive roots with respect to B_n and Φ^-={α_i,j| i>j } be the corresponding set of negative roots. Let Δ={α_i,i+1| 1≤ i≤ n-1 } be the set of simple roots.
Let be the Weyl group of _n. For every w∈, denote
S(w)={α∈Δ| w(α)<0 } and S^∘(w)=S(ww_n),
where w_n is the longest Weyl element of _n as in Proposition <ref>. We also write
S^-(w)={α∈Φ^+| w(α)<0 } and S^+(w)={α∈Φ| w(α)>0}.
Let N_w^- (N_w^+ resp.) be the unipotent subgroup associated to S^-(w) (S^+(w) resp.). Let
T_w={t∈ T_n|ψ(u)=ψ ( w(t)uw(t)^-1 ) , ∀ u∈ N_w^-}.
For every λ∈ X(T_n)⊗_, where X(T_n) is the character group of T_n, define
|λ|(t):=|λ(t)|_F, ∀ t∈ T_n.
Recall from Section <ref> that K=_n() is the maximal open compact subgroup of _n. With the Iwasawa decomposition _n=N_nT_nK, for any g=utk, we set
|λ|(g):=|λ|(t).
It is easy to check that this is well defined.
Let π∈Π_F(_n) be generic and (π,ψ) be the space of Whittaker functions.
Following <cit.>, we denote by ^∘(π,ψ) the set of functions W∈(π,ψ) such that for every w∈ and every α∈ S^∘(w), there exist positive constants D_α<E_α such that if g∈ B_nwB_n then W(g)≠ 0 implies that
D_α<|α|(g)<E_α.
For a positive integer m, we denote K_m the congruence subgroup given by
K_m=_n+M_n(^m).
Write
d=diag(1, ϖ^2, ϖ^4, …, ϖ^{2n-2}),
where ϖ is a fixed uniformizer of F. Let N_n(m)=N_n∩(d^mK_md^-m). For any W∈(π,ψ), denote
W_m(g)=∫_N_n(m)W(gn)ψ^-1(n) n.
According to <cit.>, W_m∈^∘(π,ψ) for all sufficiently large m. Due to <cit.>, for m large enough, the integral
1/vol(N_n(m)) ∫_N_w^-W_m( g
n)ψ^-1(n) n
converges and is independent of m for g∈ N_nT_wwN_w^-. Moreover, by the uniqueness of Whittaker functionals, it follows that there exists a function, which we denote by j_π,ψ,w(g) such that
1/vol(N_m) ∫_N_w^-W_m( g
n)ψ^-1(n) n=j_π,ψ,w(g)W(_n)
for g∈ N_nT_wwN_w^-. This function j_π,ψ,w(g) was called the Bessel function of π attached to the Weyl group element w in <cit.>.
Moreover, if W∈^∘(π,ψ), then the integral
∫_N_w^- W(gn)ψ^-1(n) n
converges absolutely for g∈ N_nT_wwN_w^- and the Bessel function j_π,ψ,w(g) has the following integral representation:
j_π,ψ,w(g)· W(_n)=∫_N_w^- W(gn)ψ^-1(n) n
according to <cit.>.
Let F be a non-Archimedean local field. Define
^∘_π,ψ(F^×):={ω(x)=|x|^1-n/2· W( [ x ; _n-1 ]) W∈^∘(π,ψ) }.
Then this space can be identified with the space _c^∞(F^×):
^∘_π,ψ(F^×)=_c^∞(F^×).
In particular, the space ^∘_π,ψ(F^×) is independent of the choice of the character ψ.
We first prove that
{ W( [ · ; _n-1 ]) : W∈^∘(π,ψ) }=_c^∞(F^×).
Take w_*=[ _n-1; 1 ]∈. Then we have that
α_1,2=e_1-e_2∈ S(w_*w_n). It follows that there are positive constants D and E such that
W( [ x ; _n-1 ])≠ 0 implies that
D<|α_1,2|([ x ; _n-1 ])=|x|<E,
which implies that
{ W( [ · ; _n-1 ]) : W∈^∘(π,ψ) }⊂_c^∞(F^×).
On the other hand, take W∈(π,ψ)≠ 0, for any positive integer m, we have
W_m(_n)=∫_N_mW(n)ψ^-1(n) n=Vol(N_m)≠0,
and since W_m∈^∘(π,ψ) for m sufficiently large, we get
{ W( [ · ; _n-1 ]) W∈^∘(π,ψ) }≠ 0.
According to <cit.>, ^∘(π,ψ) is invariant under right translations by B_n, in particular, for
b^'=[ b ; _n-2 ]∈ B,
and W∈^∘(π,ψ),
where
b=[ t n; 1 ]
we have
(bW)([ x ; _n-1 ])=ψ(nx)W([ tx ; _n-1 ]).
According to <cit.>, _c^∞(F^×) is an irreducible representation under the above action. Hence we obtain that
{ W( [ · ; _n-1 ]) W∈^∘(π,ψ) }=_c^∞(F^×).
Note that f↦ f|·|^1-n/2 is a bijection from _c^∞(F^×) to itself. Therefore we obtain that
^∘_π,ψ(F^×)=_c^∞(F^×).
In order to understand the π-Fourier transform _π,ψ and the associated π-kernel function k_π,ψ as in (<ref>) in terms of
the π-Whittaker-Schwartz space _π(F^×) to π-Whittaker-Schwartz space _π(F^×),
we define the π-Bessel function of π on F^×, which is related to the one attached to the particular Weyl element w_*=[ _n-1; 1 ]∈, up to normalization.
Let F be a non-Archimedean local field of characteristic zero. For any π∈Π_F(_n), which is generic, the associated π-Bessel function _π,ψ(x)
on F^× is defined by
_π,ψ(x)=|x|_F^1-n/2· j_π,ψ,w_*( [ _n-1; x^-1 ])
where w_*=[ _n-1; 1 ]∈ is a Weyl group element of _n
For any π∈Π_F(_n), which is generic,
as functions on F^×, the π-kernel function k_π,ψ as in (<ref>) and the π-Bessel function as defined in (<ref>) are related by the following identity:
k_π,ψ(x)=_π,ψ(x)|x|^1/2
for any x∈ F^×.
For any ϕ∈_c^∞(F^×)⊂_π(F^×), we know from Proposition <ref> that there is some W∈^∘(π,ψ) such that
ϕ(x)=W([ x ; _n-1 ])|x|^1-n/2
for any x∈ F^×.
According to Proposition <ref>, we have
_π,ψ(ϕ)(x)
=|x|^1-n/2∫_F^n-2(π(w_n,1)W)([ x ; y _n-2 ; 1 ]) y
=|x|^1-n/2∫_F^n-2 W( w_*
[ x^-1 ; _n-1 ][ 1 0 y_1 ⋯ y_n-2; 0 1 ⋯ 0 0; 90⋯ 90⋯ 135⋯ 90⋯ 90⋯; 0 0 ⋯ 1 0; 0 0 ⋯ 0 1 ]) y
as w_*=w_nw_n,1.
According to <cit.>, the function
(z,y_1,y_2,⋯,y_n-2)↦ W( w_*
[ x^-1 ; _n-1 ][ 1 z y_1 ⋯ y_n-2; 0 1 ⋯ 0 0; 90⋯ 90⋯ 135⋯ 90⋯ 90⋯; 0 0 ⋯ 1 0; 0 0 ⋯ 0 1 ])
is compactly supported once we fix x. Hence the function
f(z,x):=∫_F^n-2 W( w_*
[ x^-1 ; _n-1 ][ 1 z y_1 ⋯ y_n-2; 0 1 ⋯ 0 0; 90⋯ 90⋯ 135⋯ 90⋯ 90⋯; 0 0 ⋯ 1 0; 0 0 ⋯ 0 1 ]) y.
belongs to the space _c^∞(F), as a function in z with x fixed, and its Fourier transform f along z at 1
f(1,x)=∫_Ff(z,x)ψ^-1(z) z
exists. For the Weyl group element w_*, it is easy to check that
N_w_*^-={[ 1 z y_1 ⋯ y_n-2; 0 1 ⋯ 0 0; 90⋯ 90⋯ 135⋯ 90⋯ 90⋯; 0 0 ⋯ 1 0; 0 0 ⋯ 0 1 ]| z,y_1,⋯,y_n-2∈ F},
from which we deduce the following formula for f(1,x):
f(1,x)=∫_N_w_*^-W( w_*[ x^-1 ; _n-1 ]n)ψ^-1(n) n
where ψ(n)=ψ(z). By (<ref>), we obtain that
f(1,x)=j_π,ψ,w_*(w_*[ x^-1 ; _n-1 ])· W(_n)=j_π,ψ,w_*([ _n-1; x^-1 ])· W(_n).
From the definition of the Bessel function j_π,ψ(x) in (<ref>), we obtain that
f(1,x)=_π,ψ(x)|x|^n-1/2W(_n)
as functions in x∈ F^×. Now we calculate for a fixed x∈ F^×, the Fourier transform f(t,x) with t∈ F^×,
f(t,x) =∫_Ff(z,x)ψ^-1(tz) z
=∫_N_w_*^-
W( w_*
[ (xt)^-1 ; _n-1 ]n
[ t ; _n-1 ])ψ^-1(z) n·|t|^1-n
=|t|^1-n_π,ψ(xt)|xt|^n-1/2π( [ t ; _n-1 ])W(_n).
According to <cit.>, we can apply the Fourier inversion formula to obtain
f(0,x) =∫_Ff(t,x) t
=∫_F _π,ψ(xt)|xt|^n-1/2W( [ t ; _n-1 ]) |t|^1-n t
=∫_F^×_π,ψ(xt)|xt|^n-1/2ϕ(t)|t|^1-n/2^×t.
Hence we obtain from the above calculation that
_π,ψ(ϕ)(x)=|x|^1-n/2f(0,x)=∫_F^×_π,ψ(xt)|tx|^1/2ϕ(t)^×t.
On the other hand, we know from <cit.> that
_π,ψ(ϕ)(x)=∫_F k_π,ψ(xt)ϕ(t)^×t
for any ϕ∈_c^∞(F^×). Therefore, as distributions on F^×, we obtain
k_π,ψ(x)=_π,ψ(x)|x|^1/2
for any x∈ F^×. Since both functions are smooth, the identity holds as functions in x∈ F^×.
Recall that in the _2 case, D. Soudry defined in <cit.> the Bessel function J_π(x) on F^× by the following equation
∫_F W( [ 0 x; -1 0 ][ 1 y; 0 1 ])ψ^-1(y) dy =J_π(x)W(_2)
for all W∈(π,ψ), where the integral converges in the sense that it stabilizes for large compacts as in <cit.>.
By an elementary computation, we see the relation between these two Bessel functions is
J_π(x)=ω_π(x)_π,ψ(-x)|x|^1/2
In <cit.>, Soudry computes the Mellin transform of the product of two Bessel functions instead of showing the gamma factor is the Mellin transform of J_π.
In fact, we have
∫_F^×^pvJ_π(y)χ^-1(y)|y|^-s^×y=ω_π(-1)χ(-1)γ(1/2,π×ω_π^-1χ_s,ψ).
<cit.> tells us
∫_F^×^pvk_π,ψ(y)χ^-1(y)|y|^-s^×y=γ(1/2,π×χ_s,ψ).
Taking into account their relations, we can obtain what we want.
We refer to <cit.> for further discussion of the _2-Bessel functions and related topics.
§.§ π-Bessel functions: complex case
If F=, let us first recall from <cit.> the classification of irreducible admissible representations of _n=_n(). For z∈, let [z]=z/√(zz) and |z|_=zz, where z is the complex conjugate of z. For any l∈ and t∈, let σ=σ(l,t) be the representation of _1() given by
z↦[z]^l|z|_^t,
which we write [·]^l⊗|·|_^t. For each j with 1≤ j≤ n, let σ_j be the representation [·]^l_j⊗|·|_^t_j of _1(). Then (σ_1,⋯,σ_n) defines a one-dimensional representation of the diagonal maximal torus T_n of _n, which can be extended trivially to a one-dimensional representation of the upper triangular Borel subgroup B_n. We set
(σ_1,⋯,σ_n)=ind^_n_B_n(σ_1,⋯,σ_n),
which is the unitary induction as in <cit.>. According to <cit.>, we have
The irreducible admissible representations of _n=_n() can be classified as follows,
(1) if the parameters t_j of (σ_1,⋯,σ_n) satisfies
t_1≥ t_2≥⋯≥ t_n,
then (σ_1,⋯,σ_n) has a unique irreducible quotient (σ_1,⋯,σ_n),
(2) the representations (σ_1,⋯,σ_n) exhaust the irreducible admissible representations of G_n, up to infinitesimal equivalence,
(3) two such representations (σ_1,⋯,σ_n) and (σ_1^',⋯,σ_n^') are infinitesimally equivalent if and only if there exists a permutation j of {1,⋯,n} such that σ_i^'=σ_j(i) for 1≤ i≤ n.
According to <cit.>, the associated local factors can be expressed as follows.
Let π=(σ_1,⋯,σ_n) be an irreducible admissible representation of _n=_n() with σ_j=[·]^l_j⊗|·|_^t_j, where l_j∈ and t_j∈ for every 1≤ j≤ n. The local L-factor and local ϵ-factor associated with π are given by
L(s,π)=∏_j=1^n 2(2π)^-(s+t_i+|l_j|/2)Γ(s+t+|l_j|/2)
and
ϵ(s,π,ψ)=∏_j=1^ni^|l_j|
For any m∈, the local γ-factor associated with π is given by
γ(1-s,π×[·]^m,ψ) =ϵ(1-s,π×[·]^m,ψ)L(s,π×[·]^-m)/L(1-s,π×[·]^m)
=∏_j=1^n i^|l_j+m|· (2π)^1-2(s-t_j)·Γ(s-t_j+|l_j+m|/2)/Γ(1-s+t_j+|l_j+m|/2).
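As a quick sanity check (this specialization is ours and is not part of the original text), taking n=1, t_1=l_1=0, and m=0, the product above collapses to a single factor:
\[
  \gamma(1-s,\pi\times[\cdot]^{0},\psi)=(2\pi)^{1-2s}\,\frac{\Gamma(s)}{\Gamma(1-s)}.
\]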
Using notations in <cit.>, we have that
γ(1-s,π×[·]^m,ψ)=G_(𝐭,𝐥+m𝐞^𝐧)(s),
where 𝐭=(t_1,⋯,t_n)∈^n, 𝐥=(l_1,⋯,l_n)∈^n, and 𝐞^𝐧=(1,⋯,1).
In <cit.>, Z. Qi defines a Bessel kernel function j_𝐭, for any (,)∈^n×^n by the following Mellin-Barnes type integral,
j_,(x)=1/2π i∫__(,) G_(,)(s)x^-2s s,
where
G_,(s):=∏_j=1^ni^|l_j|(2π)^1-2(s-t_j)Γ(s-t_j+|l_j|/2)/Γ(1-(s-t_j)+|l_j|/2)
and _(,) is any contour such that
* 2·_, is upward directed from σ-∞ to σ+∞, where σ<1+1/n( (∑_j=1^n t_j ) -1 ),
* all the set t_j-|l_j|- lie on the left side of 2·_(,), and
* if s∈2·_(,) and | s| large enough, then s=σ.
For more details, we refer to <cit.>.
Then <cit.> defines
J_,(z)=1/2π∑_m∈j_(,+m𝐞^𝐧)(|z|_^1/2)[z]^m,
and <cit.> secures the absolute convergence of this series. The following is the analogy in the complex case of Proposition <ref>.
For any π∈Π_(_n), which is parameterized by π=π(𝐭,𝐥) as in Theorem <ref>,
as distributions on ^×, the following identity
k_π,ψ(z)=J_(𝐭,𝐥)(z)|z|_^1/2
holds for any z∈^×.
According to <cit.>, for any ϕ∈_c^∞(^×), there is a unique function Υ(z)|z|_^1/2∈𝒮_sis^(-,-)(^×), which is contained in the space (^×) as defined in <cit.>, such that
(1-s,ϕ|·|_^1/2,[·]^m)γ(1-s,π×[·]^m,ψ)=(s,Υ|·|_^1/2,[·]^-m).
It follows that Υ|·|_^1/2=_π,ψ(ϕ |·|_^1/2) according to <cit.>.
From <cit.>, we have that
Υ(z)=∫_^×ϕ(y)J_(𝐭,𝐥)(zy) y.
On the other hand, we have that
Υ(z)=∫_^×ϕ(y)k_π,ψ(yz)|yz|_^-1/2 y
due to <cit.>. The π-kernel function k_π,ψ is a smooth function on ^× according to <cit.>, while
the function J_, is real analytic on ^× due to <cit.>. Since ϕ∈_c^∞(^×) is arbitrary,
we thus deduce that
k_π,ψ(z)=J_(𝐭,𝐥)(z)|z|_^1/2
for any z∈^×, as functions on ^×.
As in Definition <ref>, we introduce the π-Bessel function on ^×.
For any π∈Π_(_n), which is generic, the π-Bessel function _π,ψ(x) on ^× is given as
_π,ψ(x)=J_(𝐭,𝐥)(x)
for any x∈^×, where π=π(𝐭,𝐥) is given by the classification in Theorem <ref>, and J_(𝐭,𝐥)(x) is
given in (<ref>) and was originally defined in <cit.>.
§.§ π-Bessel functions: real case
If F=, we recall from <cit.> the classification of irreducible admissible representations of _n=_n(). For any l≥ 1, let D_l^+ be the discrete series of _2(), that is, the representation space consists of analytic functions f in the upper half-plane with
‖f‖^2:=∬ |f(z)|^2y^l-1 x y
finite, and the action of g=[ a b; c d ] is given by
D_l^+(g)f(z):=(bz+d)^-(l+1)f(az+c/bz+d).
Let _2^±() be the subgroup of elements g in _2() with | g|=1 and
D_l:=ind__2()^_2^±()(D_l^+)
be the induced representation of _2^±(), where we still use the unitary induction as in <cit.>. For each pair (l,t)∈_≥ 1×, let σ=σ(l,t) be the representation of _2() obtained by tensoring the above representation of _2^±() with the quasi-character g↦| g|^t, that is
σ=D_l⊗|(·)|^t,
where |·|=|·|_.
For a pair (δ,t)∈ℤ/2ℤ×, let σ=σ(δ,t) be the representation of _1()=^× given by
σ=^δ⊗|·|^t.
For any partition of n into 1's and 2's, say (n_1,⋯,n_r) with each n_j equal to 1 or 2 and with ∑_j=1^r n_j=n, we associate the block diagonal subgroup
M=_n_1()×⋯×_n_r().
For each 1≤ j≤ r, let σ_j be the representation of _n_j() of the form σ(l_j,t_j) or σ(δ_j,t_j) as defined above. We extend the tensor product of these representations to the corresponding block upper triangular subgroup Q by making it the identity on the block strictly upper triangular subgroup. We set
(σ_1,⋯,σ_r):=ind_Q^_n(σ_1,⋯,σ_r).
The irreducible admissible representations of _n=_n() can be classified as follows.
(1) if the parameters n_j^-1t_j of (σ_1,⋯,σ_r) satisfy
n_1^-1 t_1≥ n_2^-1 t_2≥⋯≥ n_r^-1 t_r,
then (σ_1,⋯,σ_r) has a unique irreducible quotient (σ_1,⋯,σ_r),
(2) the representations (σ_1,⋯,σ_r) exhaust the irreducible admissible representations of _n, up to infinitesimal equivalence,
(3) two such representations (σ_1,⋯,σ_r) and (σ_1^',⋯,σ_r^') are infinitesimally equivalent if and only if r^'=r and there exists a permutation j(i) such that σ_i^'=σ_j(i) for each 1≤ i≤ r.
According to <cit.> again, the local factors can be expressed as follows:
For a representation σ of _1() or _2() as defined above, denote
L(s,σ) = {[ π^-s+t+δ/2Γ(s+t+δ/2) n=1, σ=^δ⊗|·|^t,; 2(2π)^-(s+t+l/2)Γ(s+t+l/2) n=2, σ=D_l⊗|(·)|^t, ].
then for π=(σ_1,⋯,σ_r), we have
L(s,π)=∏_j=1^r L(s,σ_j).
Similarly denote
ϵ(s,σ,ψ) = {[ i^δ n=1, σ=^δ⊗|·|^t,; i^l+1 n=2, σ=D_l⊗|(·)|^t, ].
then the ϵ-factor of π=(σ_1,⋯,σ_r) is given by
ϵ(s,π,ψ)=∏_j=1^lϵ(s,σ_j,ψ).
Finally, the local γ-factor associated with π=(σ_1,⋯,σ_r) is given by
γ(s,π×^δ,ψ)=ϵ(s,π×^δ,ψ)L(1-s,π×^δ)/L(s,π×^δ),
where π is the contragredient of π.
For any ϕ(x)∈_c^∞(^×), according to <cit.>, there is some function Υ such that Υ|·|^1/2=_π,ψ(ϕ|·|^1/2) such that
(s,Υ|·|^1/2,^δ)=γ(1-s,π×δ,ψ)·(1-s,ϕ|·|^1/2,^δ).
Due to <cit.>, for s=σ_0 large enough, we have
Υ(x)
=
1/2∑_δ∈/2( 1/2π i∫_σ_0-i∞^σ_0+i∞γ(1-s,π×^δ,ψ)·(1-s,ϕ|·|^1/2,^δ)|x|^-s s)( x)^δ
=1/2∑_δ∈/2( 1/2π i∫_σ_0-i∞^σ_0+i∞γ(1-s,π×^δ,ψ)·∫_^×ϕ(y)|y|^-s y |x|^-s s)( x)^δ.
We choose a contour with the following three properties:
(1) is upward directed from σ_0^'-i∞ to σ_0^'+i∞, where σ_0^' is small enough, say
σ_0^'< 1/2+( (∑_j=1^r n_jt_j)-1 )/n,
(2) The sets t_j-δ_j- for n_j=1 and t_j-l_j/2- for n_j=2, 1≤ j≤ r all lie on the left side of , and
(3) if s∈, then for | s| large enough, s=σ_0^'.
Then for t=| s| large enough, we have that for fixed x∈^× and ϕ∈_c^∞(^×),
∫_^×ϕ(y)|y|^-s y |x|^-s≤ C
for some constant C for all s with σ_0^'≤ s≤σ_0, and the constant C only depends on x, φ, σ_0, and σ_0^', and is independent of t=| s|. It follows that
∫_σ_0^'+it^σ_0+itγ(1-s,π×^δ,ψ) ∫_^×ϕ(y)|y|^-s y|x|^-s s≤ C^' t^-1
for some other constant C^' according to <cit.> and Property (1) of the contour . So as t→∞, the above integral goes zero, and we are able to change the integral from (σ_0-i∞,σ_0+i∞) to according to the Cauchy residue theorem and Property (2) of the contour , that is
∑_δ∈/2( 1/2π i∫_σ_0-i∞^σ_0+i∞γ(1-s,π×^δ,ψ)·∫_^×ϕ(y)|y|^-s y |x|^-s s)( x)^δ
=∑_δ∈/2( 1/2π i∫_γ(1-s,π×^δ,ψ)·∫_^×ϕ(y)|y|^-s y |x|^-s s)( x)^δ.
According to Property (1) of the contour and <cit.> again, we have that
∫_∫_^× |γ(1-s,π×^δ,ψ)ϕ(y)|·|xy|^-s y s<∞.
Hence we can change the order of integration using Fubini's theorem to obtain that
Υ(x)
=
∫_^×ϕ(y)(1/2∑_δ∈/21/2π i∫_γ(1-s,π×^δ,ψ)|xy|^-s(x)^δ s ) y.
As in Definitions <ref> and <ref>, we define the π-Bessel function _π,ψ(x) on ^× as follows
For any π∈Π_(_n), which is generic, the π-Beesel function _π,ψ(x) on ^× is given as
_π,ψ(± x)=1/2∑_δ∈/21/2π i∫_γ(1-s,π×^δ,ψ)|x|^-s(±)^δ s, x>0.
The integral in Definition <ref> is absolutely convergent to a smooth function in x because of Property (1) of the contour and <cit.>.
Moreover we prove the following proposition, which is the analogy in the real case of Propositions <ref> and <ref>.
For any π∈Π_(_n), which is generic, the π-kernel function k_π,ψ(x) and the π-Bessel function _π,ψ(x) are related by the following
identity as functions on ^×, i.e.
k_π,ψ(x)=_π,ψ(x)|x|_^1/2,
for any x∈^×.
As in the proof of Proposition <ref>, let us compare the integral
Υ(x)=∫_^×ϕ(y)_π,ψ(xy) y
with the integral
Υ(x)=∫_^×ϕ(y)k_π,ψ(xy)|xy|_^-1/2 y,
for any ϕ∈^∞_c(^×).
It is clear that k_π,ψ(x)=_π,ψ(x)|x|_^1/2 because of Definition <ref> and the smoothness of both k_π,ψ (<cit.>) and _π,ψ as functions on ^×.
In the special case that π=π(,δ)=(σ_1,⋯,σ_n) as Theorem <ref> with all n_j=1 for 1≤ j≤ n, the π-Bessel function _π,ψ in Definition <ref> is exactly the Bessel function J_(,δ) defined in <cit.>, where =(t_1,⋯,t_n)∈^n and δ=(δ_1,⋯,δ_n)∈(/2)^n.
§.§ π-Bessel functions and dual functions
From Definitions <ref>, <ref> and <ref>, for a given π∈Π_F(_n), we define the (normalized) π-Bessel function _π,ψ(x) on F^× for every local field F of characteristic zero. In Propositions <ref>, <ref> and <ref>, we obtain the relation between the π-kernel function
k_π,ψ(x) and the π-Bessel function _π,ψ(x). As a record, we state the corresponding formula for the dual function w(x) of w(x)∈^∞_c(F^×) following Corollary <ref>
For any π∈Π_F(_n), the dual function w(x) associated with any w∈^∞_c(F^×) is given by the following formula:
w(x)=|x|_F^n-1/2(_π,ψ(·)*(w^∨(·)|·|_F^n-3/2))(x),
for all x∈ F^×, where w^∨(x)=w(x^-1).
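Unwinding the multiplicative convolution makes the formula more explicit. Assuming the convention (f*g)(x)=∫_F^×f(xy^-1)g(y)^×y (the convention under which the π-Fourier transform is written as the convolution integral _π,ψ(ϕ)(x)=∫_F^×k_π,ψ(xy)ϕ(y)^×y later in this paper), the change of variable y↦ y^-1 expresses the dual function of w as
|x|_F^(n-1)/2∫_F^×_π,ψ(xy)w(y)|y|_F^(3-n)/2^×y, x∈ F^×,
with the integral understood in the same regularized sense as the convolution above; this is only a formal rewriting under the stated convention, not an additional statement.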
§ A NEW PROOF OF THE VORONOI SUMMATION FORMULA
In this section, we give a new proof of the Voronoi summation formula based on the π-Poisson summation formula (<cit.>), which was recalled in Theorem <ref>.
Let k be a number field, the notations are all as in Section <ref>.
At any local place ν of k, for any w_ν(x)∈_c^∞(k_ν^×) and ζ_ν∈ k_ν^×, the function
ϕ_ν(x)=ψ_ν(xζ_ν)· w_ν(x)·|x|_ν^1-n/2
belongs to the space _π_ν(k_ν^×). If ν<∞ and π_ν is unramified, let ^∘W_ν be the normalized unramified Whittaker function
associated with π_ν, then the function
φ_ν(x)=ψ_ν(xζ_ν)·^∘W_ν( [ x ; _n-1 ])·|x|_ν^1-n/2
belongs to the space _π_ν(k_ν^×).
The first claim is trivial because for any w_ν(x)∈_c^∞(k_ν^×), we have that
ϕ_ν(x)=ψ_ν(xζ_ν)w_ν(x)|x|_ν^1-n/2∈_c^∞(k_ν^×)⊂_π_ν(k_ν^×).
As for the second claim, since ν<∞, we observe that for any given ζ_ν∈ k_ν^×, ψ_ν(xζ_ν)=1 if |x|_ν is small enough.
Hence we have that
φ_ν(x)=ψ_ν(xζ_ν)·^∘W_ν( [ x ; _n-1 ])·|x|_ν^1-n/2
shares the same asymptotic behavior as |x|→ 0 with the function
^∘ W_ν( [ x ; _n-1 ])|x|_ν^1-n/2,
which belongs to the space _π_ν(k_ν^×) by Proposition <ref>. Hence we must have the function
φ_ν(x)=ψ_ν(xζ_ν)·^∘W_ν( [ x ; _n-1 ])·|x|_ν^1-n/2
belonging to the space _π_ν(k_ν^×).
Let ν be a finite place such that both π_ν and ψ_ν are unramified, the π_ν-basic function _π_ν(x)∈_π_ν(k_ν^×)
as defined in <cit.> enjoys the following formula:
_π_ν(x)=^∘W_ν( [ x ; _n-1 ])|x|_ν^1-n/2.
By <cit.>, the Mellin transform of the π_ν-basic function _π_ν(x) equals L(s,π×χ), and the same happens to the function
^∘W_ν( [ x ; _n-1 ])|x|_ν^1-n/2
by the Rankin-Selberg convolution for _n×_1 in <cit.>. Hence the two functions are equal by the Mellin inversion, following the same argument in the proof of
Proposition <ref>.
For the finite places where ψ_ν and π_ν are unramified, ψ_ν(x)_π_ν(x)=_π_ν(x).
According to <cit.>, the π_ν-basic function _π_ν is supported in _ν∖{0}. The assertion follows clearly.
Now we are ready to prove Theorem <ref> by using Theorem <ref>. Recall that S is the finite set of local places of k that contains all the Archimedean places and
those local places ν where either π_ν or ψ_ν is ramified. For any ζ∈^S, we take
w(·)
:=
^∘W^S( [ · ; _n-1 ])∏_ν∈ Sw_ν(·)
=
^∘W^S( [ · ; _n-1 ])w_S(·)
and
ϕ(·):=ψ^S(·ζ)w(·)|·|_^1-n/2.
Then the function ϕ(x) belongs to the space _π(^×) according to Lemmas <ref>, <ref> and <ref>.
It is clear that the function ϕ(x) is factorizable: ϕ(x)=∏_νϕ_ν(x_ν).
In order to use Theorem <ref> in the proof, we calculate the local π_ν-Fourier transform of ϕ_ν at each place ν. Let R=R_ζ be as in Theorem <ref>.
For the unramified places ν∉ R∪ S, by Lemma <ref>, we obtain that
ϕ_ν(x_ν)=ψ_ν(x_νζ_ν)·^∘W_ν([ x_ν ; _n-1 ])|x_ν|_ν^1-n/2=ψ_ν(x_νζ_ν)_π_ν(x_ν).
By <cit.> (or Lemma <ref>), if _π_ν(x_ν)≠ 0, then x_ν∈_ν∖{0}.
Since |ζ_ν|_ν≤ 1 when ν∉ R, we obtain that ψ_ν(x_νζ_ν)=1 if _π_ν(x_ν)≠ 0. Hence we deduce that
ϕ_ν(x_ν)=_π_ν(x_ν).
Applying the π_ν-Fourier transform to the both sides, we obtain that
_π_ν,ψ_ν(ϕ_ν)(x_ν)
= _π_ν,ψ_ν(_π_ν)(x_ν)
=_π_ν(x_ν)
=^∘W_ν([ x_ν ; _n-1 ])|x_ν|_ν^1-n/2
according to <cit.>. Note that _π_ν the basic function in the π_ν-Schwartz space _π_ν(k_ν^×) and ^∘W_ν∈(π_ν,ψ^-1_ν), the Whittaker model of π_ν.
At the local places ν∈ S, the function ϕ_ν(x_ν) takes the following form
ϕ_ν(x_ν)=w_ν(x_ν)|x_ν|_ν^1-n/2.
By Proposition <ref>, we obtain that
_π_ν,ψ_ν(ϕ_ν)(x_ν)
= _π_ν,ψ_ν(w_ν(·)|·|^1-n/2)(x_ν)
=w_ν(x_ν)|x_ν|_ν^1-n/2.
Finally, at the local places ν∈ R, since R is disjoint from S, the function ϕ_ν takes the following form
ϕ_ν(x_ν)=ψ_ν(x_νζ_ν)·^∘W_ν([ x_ν ; _n-1 ])|x_ν|_ν^1-n/2
with |ζ_ν|_ν>1.
Recall from Section <ref> that α_i,i+1 be the simple root for the root system Φ with respect to (_n,B_n,T_n). The one-parameter subgroups
associated with α_1,2 and α_2,1 are given by
χ_α_1,2(u):=[ 1 u ; 1 ; _n-2 ] and χ_α_2,1(u):=[ 1 ; u 1 ; _n-2 ].
Then the function ϕ_ν can be written as
ϕ_ν(x_ν)=^∘W_ν([ x_ν ; _n-1 ]χ_α_1,2(ζ_ν))|x_ν|_ν^1-n/2
=W_ζ_ν([ x_ν ; _n-1 ])|x_ν|_ν^1-n/2
where W_ζ_ν(g):=^∘W_ν(gχ_α_1,2(ζ_ν)).
It is clear that W_ζ_ν∈(π_ν,ψ_ν).
By Proposition <ref>, the π_ν-Fourier transform of ϕ_ν is given by
_π_ν,ψ_ν(ϕ_ν)(x_ν)
=|x_ν|_ν^1-n/2∫_k_ν^n-2(π_ν(w_n,1)W_ζ_ν)
([ x_ν ; y _n-2 ; 1 ]) y.
Since
π_ν(w_n,1)W_ζ_ν([ x ; y _n-2 ; 1 ])
= W_ζ_ν([ x ; y _n-2 ; 1 ]w_n,1)
=
^∘ W_ν(w_n[ x ; y _n-2 ; 1 ]^-tw_n,1^-tχ_α_1,2(ζ_ν))
where W(g)=W(w_ng^-t) with g^-t:=^tg^-1, we obtain that
π_ν(w_n,1)W_ζ_ν([ x ; y _n-2 ; 1 ])
=
^∘ W_ν([ x ; y _n-2 ; 1 ]w_n,1χ_α_2,1(-ζ_ν)).
Hence the π_ν-Fourier transform of ϕ_ν can be written as
_π_ν,ψ_ν(ϕ_ν)(x_ν)
=
|x_ν|_ν^1-n/2∫_k_ν^n-2^∘ W_ν([ x_ν ; y _n-2 ; 1 ]w_n,1χ_α_2,1(-ζ_ν)) y
=
|x_ν|_ν^1-n/2∫_k_ν^n-2^∘ W_v (
[ x_ν ; _n-2 ; 1 ][ 1 ; y _n-2 ; 1 ] w_n,1χ_α_2,1(-ζ_ν) ) y.
By the explicit computation of the last integral in <cit.>, we obtain that
_π_ν,ψ_ν(ϕ_ν)(x_ν)
=
|x_ν|_ν^1-n/2_ν(x_ν,ζ_ν,^∘W_ν).
Thus, by (<ref>), (<ref>), and (<ref>), we obtain a formula for the π-Fourier transform of ϕ, which is the product of the local π_ν-Fourier transform of ϕ_ν at all local places ν.
Let ϕ∈_π(^×) be the function as defined in (<ref>). The π-Fourier transform of ϕ can be explicitly written as
_π,ψ(ϕ)(x)
=
∏_ν_π_ν,ψ_ν(ϕ_ν)(x_ν)
=
|x|_^1-n/2_R(x,ζ,^∘ W_R) ^∘ W^S∪ R([ x; _n-1 ])w_S(x),
where the Kloosterman integral is given by
_R(x,ζ,^∘ W_R)=∏_ν∈ R_ν(x_ν,ζ_ν,^∘W_ν).
Finally we write the summation on the one side as
∑_α∈ k^×ϕ(α)
=
∑_α∈ k^×ψ^S(αζ) ^∘W^S( [ α ; _n-1 ])w_S(α)
and that on the other side as
∑_α∈ k^×_π,ψ(ϕ)(α)
=
∑_α∈ k^×_R(α,ζ,^∘ W_R) ^∘ W^S∪ R([ α; _n-1 ])w_S(α),
because |α|_=1 for every α∈ k^×.
By the π-Poisson summation formula in Theorem <ref>, which is
∑_α∈ k^×ϕ(α)=∑_α∈ k^×_π,ψ(ϕ)(α),
we deduce the Voronoi formula in Theorem <ref>:
∑_α∈ k^×ψ^S(αζ) ^∘W^S( [ α ; _n-1 ])w_S(α)
=
∑_α∈ k^×_R(α,ζ,^∘ W_R) ^∘ W^S∪ R([ α; _n-1 ])w_S(α).
This completes our new proof of the Voronoi summation formula for _n (Theorem <ref>).
In <cit.>, A. Corbett extends the Voronoi formula in Theorem <ref> to a more general situation by allowing the local component ϕ_ν at
ν∈ R to be more general functions in _π_ν(k_ν^×). More precisely, if one takes
ϕ_ν(x)=ψ_ν(x ζ_ν)· w_ν(x)· |x|_ν^1-n/2
for ν∈ S and
ϕ_ν(x)=ψ_ν(xζ_ν)· W_ν( [ x ; _n-1 ]ξ)·|x|_ν^1-n/2
for ν∉ S, where w_ν∈_c^∞(F^×), W_ν∈(π_ν,ψ_ν) and S, ζ, ξ are as in <cit.>,
then according to Lemmas <ref>, <ref> and <ref>, the function ϕ:=⊗_νϕ_ν∈_π(^×).
It is not hard to figure out that the proof of Proposition <ref> works for such special choices of functions ϕ as well. In particular, we obtain from
Proposition <ref> that at each local place ν, the Fourier transform _π_ν,ψ_ν(ϕ_ν) is equal to
the function ℌ_ν(x;ζ_ν,ξ_ν)|x|^1-n/2 in <cit.>. The extended Voronoi formula for _n proved by Corbett
in <cit.> by using the Rankin-Selberg convolution for _n×_1, can be deduced by the same argument as in our proof of Theorem <ref> from the
π-Poisson summation formula in <cit.>. Hence the extended Voronoi formula for _n in <cit.> is also a special case of
the π-Poisson summation formula in <cit.>. We omit further details.
§ ON THE GODEMENT-JACQUET KERNELS
For any π∈_(_n), the goal of this section is to define the Godement-Jacquet kernels for L_f(s,π) and their dual kernels, and to
prove the π-versions of <cit.>, which can be viewed as the case of n=1 and is recalled in Theorem <ref>.
§.§ Godement-Jacquet kernel and its dual
We recall from <cit.> the global
zeta integral for the standard L-function L(s,π) as stated in (<ref>) is
(s,ϕ)=∫_^×ϕ(x)|x|_^s-1/2^× x
for any ϕ∈_π(^×).
By <cit.> the zeta integral (s,ϕ) converges absolutely for (s)>n+1/2, admits analytic continuation to an entire function in s∈, and satisfies the functional equation
(s,ϕ)=(1-s,_π,ψ(ϕ)).
where _π,ψ is the π-Fourier transform as defined in (<ref>). As explained in <cit.>, this is a reformulation of the Godement-Jacquet theory for the
standard L-functions L(s,π).
Consider the fibration through the idele norm map |·|_:
1→^1→^×→^×_+→1
where ^×_+={x∈^× x>0} and ^1={x∈^× |x|_=1}.
One can have a suitable Haar measure ^× on ^1 that is compatible with the Haar measures ^× x
on ^× and the Haar measure ^× t on ^×_+=^×/^1. Write ^×=_∞^××_f^×, where _∞^×=∏_ν∈|k|_∞k_ν^×, and _f^× is the subset of ^× consisting of elements (x_ν)∈^× with x_ν=1 for all ν∈|k|_∞.
When (s)>n+1/2, the absolutely convergent zeta integral (s,ϕ) as in (<ref>) can be written as
∫_^×ϕ(x)|x|_^s-1/2^×x
=∫_1^∞∫_^1ϕ(t)t^s-1/2^×^×t+∫_0^1∫_^1ϕ(t)t^s-1/2^×^×t.
The first integral on the right-hand side of (<ref>):
∫_1^∞∫_^1ϕ(t)t^s-1/2^×^×t
converges absolutely at any s∈ and is holomorphic as a function in s∈, for any ϕ∈_π(^×).
Let ϕ_f=⊗_νϕ_ν∈_π_f(_f^×) be a factorizable π-Schwartz function. Let S=S(π,ψ,ϕ_f) be a finite subset S of |k|=|k|_∞∪|k|_f (the set of all local places of k) that contains |k|_∞ and such that for any ν∉ S both π_ν and ψ_ν are unramified and ϕ_ν=_π_ν, the basic function in _π_ν(k_ν^×) as in <cit.>. Write
S_f=S∩|k|_f={ν_1,ν_2,⋯,ν_κ}.
According to <cit.>, there is a positive real number s_π, which depends only on the given π∈_(_n),
such that for any real number a_0>s_π, the limit
lim_|x|_ν→ 0ϕ_ν(x)|x|_ν^a_0=0
holds for every ϕ_ν∈_π_ν(k_ν^×) and for every ν∈|k|. From the definition of the π_ν-Schwartz space _π_ν(k_ν^×) in (<ref>) and <cit.>, we know that ϕ_ν(x)=0 when |x|_ν is large enough for all ν<∞. Hence for every ν∈ S_f, there is a constant C_ν>0 such that
|ϕ_ν(x)|≤ C_ν|x|_ν^-a_0.
By <cit.>, there is a positive real number b_π>s_π>0, which also depends only on the given π, such that for any b_0>b_π, we have that
|_π_ν(x)|≤|x|_ν^-b_0
holds for every ν∉ S.
It is clear that for any constant c>b_π and constant C_1 with max_ν∈ S_f{C_ν,1}≤ C_1, we must have that the inequality:
|ϕ_f(x_f)|≤ C_1|x_f|_^-c
holds for every x_f∈^×_f.
We first estimate the inner integral, which can be written as
∫_^1 |ϕ(t)|^×=∫_^1/k^×∑_γ∈ k^×|ϕ(γ t)|^×.
Fix a ν_0∈|k|_∞ and a section _+^×→ k_ν_0^×↪^× of the norm map ^×→^×_+ and view
t∈_+^× as the ν_0-component of ^×.
Define
ℓ:^1∩( _∞^××_f^×)→^r, ↦ (⋯,log ||_ν,⋯)_ν∈ |k|_∞-{ν_0}
where _f^×=∏_ν<∞_ν^×, and r:=r_1+r_2-1 with r_1 being the number of real places and r_2 the number of complex ones.
Let {ϵ_i}_1≤ i≤ r be a basis for the group of units in the ring of integers in k modulo the group of roots of unity in k, and set
P={∑_i=1^r x_iℓ(ϵ_i) 0≤ x_i< 1, ∀ 1≤ i≤ r } and
E_0={∈ℓ^-1(P) 0≤_ν_0<2π/ħ_k}
where ħ_k is the class number of k. We choose representatives ^(1),⋯,^(ħ_k) of idele classes, and define E:=∪_i=1^ħ_k E_0^(i). Then E is a fundamental domain of k^×\^1 according to <cit.>, which is compact. Hence we can write (<ref>) as
∫_^1 |ϕ(t)|^×=∫_E ∑_γ∈ k^×|ϕ(γ t)|^×.
Without loss of generality, we may take ϕ=ϕ_∞⊗ϕ_f∈_π(^×)=_π_∞(_∞^×)⊗_π_f(_f^×).
Write t=(α_∞,α_f)∈^×=_∞^××_f^×. By (<ref>), we have that
|ϕ(γ t)|
=|ϕ_∞(γα_∞)·ϕ_f(γα_f)|≤ C_1|ϕ_∞(γα_∞)|·|γα_f|_f^-c
for any constant c>b_π. Since ∈^1, we must have that
|γα_f|_f^-c=|γα_∞|_∞^c·|γ(α_∞,α_f)|_^-c
=
|γα_∞|_∞^c·|γ t|_^-c
=
|γα_∞|_∞^c·|t|_^-c
=
|γα_∞|_∞^c· t^-c.
Hence we obtain
|ϕ(γ t)|≤ C_1|ϕ_∞(γα_∞)|·|γα_∞|_∞^c· t^-c.
Since belongs to a compact set E, the Archimedean part of belongs to a compact subset of _∞^×. Hence there is a constant C_2 such that
∑_γ∈ k^×|ϕ(γ t)|≤ C_2· t^-c·∑_γ∈ k^× |ϕ_∞(γ t)|·|γ t|_∞^c.
For ϕ_∞∈_π_∞(_∞^×), we know from <cit.> that
ϕ_∞(x)|x|_∞^c
for any constant c is of rapid decay as |x|_∞→∞. From the choice of the fundamental domain E, we must have that α_f∈_f^×. Due to <cit.>, there are integers e_1,⋯,e_κ such that for γ∈ k^×, if ϕ(γ t)≠ 0, then γ∈:=_1^e_1⋯_κ^e_κ. According to <cit.>, the image of in ^×_∞ is a lattice, and there is a constant C_3 such that the (partial) theta series
∑_γ∈ k^× |ϕ_∞(γ t)|·|γ t|_∞^c≤ C_3.
Thus we obtain that
∑_γ∈ k^×|ϕ(γ t)|≤ C_2C_3 t^-c,
and there is a constant C_4 such that
∫_^1|ϕ(t)|≤ C_4 t^-c.
It follows that the integral
∫_1^∞∫_^1ϕ(t)t^s-1/2^×t
converges absolutely as long as (s)<c+1/2 for any c>b_π.
Since c is arbitrarily large with c>b_π, we obtain that the integral
∫_1^∞∫_^1ϕ(t)t^s-1/2^×^× t
converges absolutely for any s∈ and hence is holomorphic as a function in s∈.
Since a general element in _π(^×) is a finite linear combination of the factorizable functions, it is clear that the above statement for the integrals
hold for general ϕ∈_π(^×).
From the above proof, we also obtain
For any t∈^×_+, the inner integral
∫_^1ϕ(t)^×
always converges absolutely for any ϕ∈_π(^×).
By using the π-Poisson summation formula (Theorem <ref>), we obtain
For any t∈^×_+, the following identity
∫_^1ϕ(t)t^s^×=∫_^1_π,ψ(ϕ)(t^-1)t^s^×
holds for any ϕ∈_π(^×).
From (<ref>), the integral
∫_^1ϕ(t)t^s^×
converges absolutely for any t∈^×_+ and for any ϕ∈_π(^×).
We write
∫_^1ϕ(t)t^s^× =∑_α∈ k^×∫_α Eϕ(t)t^s^×
=∑_α∈ k^×∫_E ϕ(α t)t^s^×
=∫_E ( ∑_α∈ k^×ϕ(α t) )t^s^×
where E is the fundamental domain of k^× in ^1 as above, which is compact. By the π-Poisson summation formula in Theorem <ref>:
∑_α∈ k^×ϕ(α t) = ∑_α∈ k^×_π,ψ(ϕ)(α/t),
we obtain that
∫_^1ϕ(t)t^s^×
=∫_E (∑_α∈ k^×_π,ψ(ϕ)(α/t))t^s^×
=∫_^1_π,ψ(ϕ)(t^-1)t^s^×,
where all changes of the order of integrations are verified due to the absolute convergence.
Applying Proposition <ref> to the second integral on the right-hand side of (<ref>), we obtain that for (s)>n+1/2,
∫_0^1∫_^1ϕ(t)t^s-1/2^×^×t
=
∫_0^1∫_^1_π,ψ(ϕ)(t^-1)t^s-1/2^×^×t
=
∫_1^∞∫_^1_π,ψ(ϕ)(t)t^1/2-s^×^×t.
By Proposition <ref> and (<ref>), we obtain the following
The second integral in the right-hand side of (<ref>)
∫_0^1∫_^1ϕ(t)t^s-1/2^×^×t
converges absolutely for (s)>n+1/2 and has analytic continuation to an entire function in s∈. Moreover, the following identity
∫_0^1∫_^1ϕ(t)t^s-1/2^×^×t
=
∫_1^∞∫_^1_π,ψ(ϕ)(t)t^1/2-s^×^×t
holds by analytic continuation for s∈, where the integral on the right-hand side converges absolutely for all s∈.
Set ^>1:={x∈^× |x|_>1}. By combining (<ref>) with (<ref>), we obtain that when (s)>n+1/2
∫_^×ϕ(x)|x|_^s-1/2^×x
=∫_^>1ϕ(x)|x|_^s-1/2^×x+∫_^>1_π,ψ(ϕ)(x)|x|_^1/2-s^×x,
which holds for all s∈ by analytic continuation.
From the proof of Proposition <ref>, both integrals on the right-hand side converge absolutely when s∈ belongs to the vertical strip
1/2-c<(s)<1/2+c for any constant c with c>max{b_π,b_π}.
Hence they converge absolutely at any s∈.
We are going to calculate the following integral in another way:
∫_^>1ϕ(x)|x|_^s-1/2^×x.
For x=(x_ν)∈^×, we write x=x_∞· x_f with x_∞∈_∞^× and x_f∈_f^×. For x∈^>1,
we have that |x|=|x_∞|_·|x_f|_>1 and |x_f|_>|x_∞|_^-1.
For ϕ=ϕ_∞⊗ϕ_f∈_π(^×)=_π_∞(_∞^×)⊗_π_f(_f^×), we write
∫_^>1ϕ(x)|x|_^s-1/2^×x
=∫__∞^×ϕ_∞(x_∞)|x_∞|_^s-1/2^×x_∞∫__f^×^>|x_∞|^-1ϕ_f(x_f)|x_f|_^s-1/2^×x_f,
for any s∈, where the inner integral is taken over { x_f∈_f^× |x_f|_> |x_∞|_^-1}. By the Fubini theorem, we know
(from the proof of Proposition <ref>) that the inner integral
∫__f^×^>|x_∞|_^-1ϕ_f(x_f)|x_f|_^s-1/2^×x_f
converges absolutely for any s∈ and any x_∞∈_∞^×.
[Godement-Jacquet Kernels]
For any π=π_∞⊗π_f∈_(_n), take any ϕ_f∈_π_f(^×), the Godement-Jacquet kernels associated with π are defined to be
H_π,s(x_∞,ϕ_f):=
|x_∞|_^s-1/2∫__f^×^>|x_∞|_^-1ϕ_f(x_f)|x_f|_^s-1/2^× x_f,
for x_∞∈_∞^× and for all s∈.
From (<ref>), we obtain that
∫_^>1ϕ(x)|x|_^s-1/2^×x
=∫__∞^×ϕ_∞(x_∞)H_π,s(x_∞,ϕ_f)^× x_∞.
In the spirit of <cit.>, to each π∈_(_n), we define the dual kernel of the Godement-Jacquet kernel H_π,s(x_∞,ϕ_f) associated with π to be
K_π,s(x_∞,ϕ_f):=
|x_∞|_^s-1/2∫__f^×^>|x_∞|_^-1_π_f,ψ_f(ϕ_f)(x_f)|x_f|_^s-1/2^× x_f,
for x_∞∈_∞^× and for all s∈.
With a suitable choice of the functions ϕ_f, the kernel functions H_π,s(x_∞,ϕ_f) and K_π,s(x_∞,ϕ_f) may have simple expressions. We refer to
Proposition <ref> for details. We establish the distribution property for H_π,s(x_∞,ϕ_f) and K_π,s(x_∞,ϕ_f).
Set _∞:={x_∞∈_∞ |x_∞|_=0} and write
_∞=_∞^×∪_∞.
For any ϕ_f∈_π_f(_f^×) and for any s∈, the Godement-Jacquet kernel function H_π,s(x_∞,ϕ_f) and its dual kernel function K_π,s(x_∞,ϕ_f) on _∞^× enjoy the following properties.
* Both H_π,s(x_∞,ϕ_f) and K_π,s(x_∞,ϕ_f) vanish of infinity order at _∞.
* Both H_π,s(x_∞,ϕ_f) and K_π,s(x_∞,ϕ_f) have unique canonical extension across _∞ to the whole space _∞.
* Both H_π,s(x_∞,ϕ_f) and K_π,s(x_∞,ϕ_f) are tempered distributions on _∞.
By definition, we have that K_π,s(x_∞,ϕ_f)=H_π,s(x_∞,_π_f,ψ_f(ϕ_f)).
It is enough to show that Properties (1), (2), and (3) hold for the kernel function H_π,s(x_∞,ϕ_f).
We prove (1) and (2) by using the work of S. Miller and W. Schmid in <cit.> (in particular <cit.>). Then we prove (3)
by showing that H_π,s(x_∞,ϕ_f) is of polynomial growth as the Euclidean norm of x_∞ tends to ∞ (<cit.>).
Without loss of generality, we may assume that ϕ_f=⊗_νϕ_ν∈_π_f(_f^×) is factorizable.
Let T⊂|k|_f be a finite set such that for ν∉ T, both ψ_ν and π_ν are unramified and ϕ_ν(x)=_k_ν(x), the basic function in
_π_ν(k_ν^×).
According to <cit.>, there are integers {e_ν}_ν∈ T such that the support of ϕ_f is contained in
(∏_ν∈ T(_ν^e_ν∖{0})×∏_ν∉ T(_ν∖{0}))⋂_f^×.
According to (<ref>), for any c>b_π, there is a constant C_1 such that
|ϕ_f(x_f)|≤ C_1|x_f|_^-c
for any x_f∈_f^×. Write
_f^×=_α=(α_ν)(∏_ν∈|k|_fϖ_ν^α_ν_ν^×),
where ϖ_ν is the local uniformizer in k_ν and α runs over the algebraic direct sum ⊕_ν∈|k|_f. Then for x_f in the α=(α_ν) component, we have |x_f|_=∏_ν∈|k|_fq_ν^-α_ν, and the inequality |x_f|_>|x_∞|_^-1 defining the range of integration is equivalent to the inequality ∏_ν∈|k|_fq_ν^α_ν<|x_∞|_.
We may write a fractional ideal in k as ∏_ν_ν^α_ν. We set
=_T:=∏_ν∈ T_ν^e_ν,
which is the fractional ideal depending on the support of ϕ_f.
According to the normalization of our Haar measure on _f^×, we have that
∫__f^×^>|x_∞|_^-1|ϕ_f(x_f)|x_f|_^s-1/2|^×x
≤ C_1·∫__f^×^>|x_∞|_^-1|x_f|_^-c+(s)-1/2^×x_f
≤ C_1·∑_⊂, ()<|x_∞|_()^c+1/2-(s),
where the last summation runs over all fractional ideals of k that are contained in with absolute norm less than or equal to |x_∞|_.
Write =^-1 and obtain that
∑_⊂, ()<|x_∞|_()^c+1/2-(s)
=∑_⊂, ()<|x_∞|_/()()^c+1/2-(s)·()^c+1/2-(s)
Let a(n) be the number of ideals ⊂ with ()=n. According to the Wiener-Ikehara theorem (<cit.>), there is a constant C^' such that
∑_n≤ xa(n)≤ C^'x
for all x≥ 0, and in particular a(n)≤ C^'n. We obtain that
∑_⊂, ()<|x_∞|_/()()^c+1/2-(s)·()^c+1/2-(s) = ()^c+1/2-(s)∑_n≤|x_∞|_/()a(n)n^c+1/2-(s)
≤()^c+1/2-(s)C^'∑_n≤|x_∞|_/()n^c+3/2-(s).
For a fixed s∈, and any fixed c>max{b_π,(s)-3/2}, we have that
∑_n≤|x_∞|_/()n^c+3/2-(s)≤∫_1^|x_∞|_/()+1x^c+3/2-(s) x
≤( |x_∞|_/()+1 ) ^c+5/2-(s)/c+5/2-(s).
When |x_∞|_≥(), we deduce that
( |x_∞|_/() +1 ) ^c+5/2-(s)/c+5/2-(s)≤ 2^c+5/2-(s)()^(s)-c-5/2|x_∞|_^c+5/2-(s)/c+5/2-(s).
Hence we obtain that
∫__f^×^>|x_∞|_^-1|ϕ_f(x_f)|x_f|_^s-1/2|^×x_f
≤2^c+5/2-(s)C_1C^'/(c+5/2-(s))()^2|x_∞|_^c+5/2-(s)
=C|x_∞|_^c+5/2-(s),
for any c>max{b_π,(s)-3/2} and |x_∞|_>(), where
C=2^c+5/2-(s)C_1C^'/(c+5/2-(s))()^2
is a constant depending on k, ϕ_f, s, c, and is independent of |x_∞|_. Moreover, from the above calculation, we obtain that
∫__f^×^>|x_∞|_^-1|ϕ_f(x_f)|x_f|_^s-1/2|^×x_f=0
if |x_∞|_≤(). From (<ref>), it is clear that
the kernel function H_π,s(x_∞,ϕ_f) vanishes in a neighborhood of any point x_∞∈_∞.
By <cit.>, H_π,s(x_∞,ϕ_f) vanishes of infinity order at _∞ and has a unique canonical extension across _∞ to the whole _∞, which we still denote by H_π,s(x_∞,ϕ_f). We establish Properties (1) and (2).
For Property (3), because of the estimate in (<ref>), the kernel H_π,s(x_∞,ϕ_f) is of polynomial growth as the Euclidean norm of x_∞ tends to ∞. Hence H_π,s(x_∞,ϕ_f) is tempered as a distribution on _∞ according to <cit.>.
§.§ π_∞-Fourier transform
From (<ref>), we obtain for ϕ=ϕ_∞⊗ϕ_f∈_π(^×) that
∫_^>1_π,ψ(ϕ)(x)|x|^1/2-s^×x
=
∫__∞^×_π_∞,ψ_∞(ϕ_∞)(x_∞)K_π,1-s(x_∞,ϕ_f)^× x_∞,
which converges absolutely for all s∈.
The following is the duality relation of the Godement-Jacquet kernels H_π,s(x_∞,ϕ_f) and K_π,s(x,ϕ_f) via the π_∞-Fourier transform when s∈
is such that L_f(s,π_f)=0, which is part of <cit.> for π∈_(_n).
For any π∈_(_n), take ϕ=ϕ_∞⊗ϕ_f∈_π(^×). Then the Godement-Jacquet kernel H_π,s(x,ϕ_f) associated with π
and its dual kernel K_π,s(x,ϕ_f) enjoy the following identity:
H_π,s(x,ϕ_f)
=-
_π_∞,ψ_∞(K_π,1-s(·,ϕ_f))(x)
=
-∫__∞^×k_π_∞,ψ_∞(x y)K_π,1-s(y,ϕ_f)^×y.
as distributions on _∞^× if s is a zero of L_f(s,π_f), where k_π_∞,ψ_∞ is the π_∞-kernel function as given in (<ref>)
that gives the π_∞-Fourier transform as a convolution integral operator as in (<ref>).
For (s)>n+1/2, we have
(s,ϕ)=∫_^×ϕ(x)|x|_^s-1/2^×x=∏_ν∈|k|_ν(s,ϕ_ν).
By the reformulation of the Godement-Jacquet local theory in <cit.>, we obtain that
∫_^×ϕ(x)|x|_^s-1/2^×x
=
_∞(s,ϕ_∞)· L_f(s,π_f)·∏_ν∈|k|^*_ν(s,ϕ_ν)
where at almost all finite local places ν with ϕ_ν equal to the basic function _π_ν, we have that ^*(s,ϕ_ν)=1, for the remaining finite local
places ν, where
^*_ν(s,ϕ_ν):=_ν(s,ϕ_ν)/L(s,π_ν)
is holomorphic in s∈, and
_∞(s,ϕ_∞):=∏_ν∈|k|_∞_ν(s,ϕ_ν),
which is holomorphic in s∈ if ϕ_∞∈_c^∞(_∞^×).
Hence we obtain that ∏_ν∈|k|^*_ν(s,ϕ_ν) is a finite product of holomorphic functions. From (<ref>), we have
Z(s,ϕ)
=∫_^>1ϕ(x)|x|_^s-1/2^×x+∫_^>1_π,ψ(ϕ)(x)|x|_^1/2-s^×x
for all s∈ by analytic continuation.
Hence we obtain that if s∈ is such that L_f(s,π_f)=0, then we must have that
∫_^>1ϕ(x)|x|_^s-1/2^×x=-∫_^>1_π,ψ(ϕ)(x)|x|_^1/2-s^×x.
Note that by Proposition <ref> both integrals converges absolutely for any s∈.
From (<ref>), we have that
∫_^>1_π,ψ(ϕ)(x)|x|^1/2-s^×x
=
∫__∞^×_π_∞,ψ_∞(ϕ_∞)(x_∞)K_π,1-s(x_∞,ϕ_f)^× x_∞,
which is absolutely convergent according to (<ref>). By <cit.>, which is recalled in (<ref>), there is a π_∞-kernel function k_π_∞,ψ_∞, such that
for any ϕ_∞∈_c^∞(_∞^×)
_π_∞,ψ_∞(ϕ_∞)(x_∞)=(k_π_∞,ψ_∞*ϕ_∞^∨)(x_∞)
=∫__∞^×k_π_∞,ψ_∞(x_∞ y_∞)ϕ_∞(y_∞)^×y_∞.
Since _π_∞,ψ_∞(ϕ_∞)∈_π_∞(^×_∞), by using Fubini's theorem and Proposition <ref> again,
we obtain that
∫_^>1_π,ψ(ϕ)(x)|x|^1/2-s^×x
=∫__∞^×∫__∞^×k_π_∞,ψ_∞(x_∞ y_∞)ϕ_∞(y_∞)^×y_∞
K_π,1-s(x_∞,ϕ_f)^× x_∞
=∫__∞^×ϕ_∞(y_∞)
∫__∞^×k_π_∞,ψ_∞(x_∞ y_∞)K_π,1-s(x_∞,ϕ_f)^×x_∞^× y_∞.
By definition as in (<ref>), we write the π_∞-Fourier transform of the dual kernel K_π,1-s(x_∞,ϕ_f), viewed as a distribution on _∞^×,
to be
_π_∞,ψ_∞(K_π,1-s(·,ϕ_f))(y_∞)
=
∫__∞^×k_π_∞,ψ_∞(x_∞ y_∞)K_π,1-s(x_∞,ϕ_f)^×x_∞.
Hence we obtain that
∫_^>1_π,ψ(ϕ)(x)|x|^1/2-s^×x
=
∫__∞^×ϕ_∞(y_∞)_π_∞,ψ_∞(K_π,1-s(·,ϕ_f))(y_∞)^× y_∞.
By combining (<ref>) with (<ref>), we obtain the following identity as distributions on _∞^×
∫__∞^×ϕ_∞(y_∞)_π_∞,ψ_∞(K_π,1-s(·,ϕ_f))(y_∞)^× y_∞
=-
∫__∞^×ϕ_∞(x_∞)H_π,s(x_∞,ϕ_f)^× x_∞
for all ϕ_∞∈_c^∞(_∞^×). Therefore, as distributions on _∞^×, we have that
_π_∞,ψ_∞(K_π,1-s(·,ϕ_f))(x_∞)=-H_π,s(x_∞,ϕ_f).
For any π=π_∞⊗π_f∈_(_n), we write
L_f(s,π_f)=∏_ν<∞L(s,π_ν)
when (s) is sufficiently positive.
The following is an easy consequence of <cit.> and general theory
of Mellin transforms.
For any ν∈|k|_f, there is a function ϕ_ν∈_π_ν(k_ν^×) such that
∫_k_ν^×ϕ_ν(x)|x|_ν^s-1/2 ^×x=L(s,π_ν)
holds as functions in s∈ by meromorphic continuation.
For any ϕ_∞∈_π_∞(_∞^×), take ϕ^⋆=ϕ_∞⊗ϕ_f^⋆, where ϕ_f^⋆:=⊗_νϕ_ν with ϕ_ν as given in Proposition <ref> and ϕ_ν=_ν, the basic function, for almost all ν. It is clear that such a function ϕ^⋆ belongs to the π-Schwartz space _π(^×). As in (<ref>), the zeta integral
(s,ϕ^⋆)=∫_^×ϕ^⋆(x)|x|_^s-1/2^×x
converges absolutely when (s)>n+1/2 and can be written as
(s,ϕ^⋆)
=(s,ϕ_∞)·(s,ϕ^⋆_f)
=(s,ϕ_∞)· L_f(s,π_f),
where
(s,ϕ^⋆_f)=∏_ν∈|k|_f(s,ϕ_ν)=∏_ν∈|k|_fL(s,π_ν)
when (s)>n+1/2. We set
H_π,s(x) :=H_π,s(x,ϕ^⋆_f)
K_π,s(x) :=K_π,s(x,ϕ^⋆_f),
and call H_π,s(x) the Godement-Jacquet kernel associated with the Euler product L_f(s,π_f), and K_π,s(x) its dual kernel.
For any π∈_(_n), take ϕ^⋆=ϕ_∞⊗ϕ^⋆_f∈_π(^×) with ϕ_f^⋆:=⊗_ν∈|k|_fϕ_ν where ϕ_ν is as given in Proposition <ref>. Then the Godement-Jacquet kernel H_π,s(x) associated with the Euler product L_f(s,π)
and its dual kernel K_π,s(x) enjoy the following identity:
H_π,s(x)
=-
_π_∞,ψ_∞(K_π,1-s)(x)
=
-∫__∞^×k_π_∞,ψ_∞(x y)K_π,1-s(y)^×y.
as distributions on _∞^× if and only if s is a zero of L_f(s,π_f).
By Proposition <ref>, we only need to consider that if (<ref>) holds, then s∈ is such that L_f(s,π_f)=0.
By the choice of ϕ^⋆=ϕ_∞⊗ϕ^⋆_f, we have from Proposition <ref> that
∫_^×ϕ(x)|x|^s-1/2^×x
=
_∞(s,ϕ_∞)· L_f(s,π_f).
From the proof of Proposition <ref>, we deduce that if (<ref>) holds, then we must have that
_∞(s,ϕ_∞)· L_f(s,π_f)=0
for any ϕ_∞∈_c^∞(_∞^×). It is clear that one is able to choose a particular test function ϕ_∞ such that _∞(s,ϕ_∞)≠ 0.
Hence (<ref>) implies that L_f(s,π_f)=0.
§.§ Clozel's theorem for π
In <cit.>, Clozel defines the Tate kernel and its dual kernel associated with the Dirichlet series expression of the Dedekind zeta function ζ_k(s)
of the ground number field k and prove Theorem 1.1 of <cit.> by two methods, one is an approach from the Tate functional equation and the other is more
classical approach from analytic number theory. For a general π∈_(_n), we define in Definition <ref> and (<ref>) the Godement-Jacquet kernels
H_π,s(x,ϕ_f) and their dual kernels K_π,s(x,ϕ_f) via the π_f-Fourier transform _π_f,ψ_f associated with the global functional equation
in the reformulation of the Godement-Jacquet theory. By using the testing functions for the local zeta integrals and the local L-factors L(s,π_ν) at all finite local places
ν∈|k|_f (Proposition <ref>), we obtain the π-version of <cit.> when the kernel functions are related to the L-function L(s,π) with the
Euler product expression (Theorem <ref>). In order to obtain the π-version of <cit.> when the kernel functions are related to the L-function L(s,π) with its Dirichlet series expression, we are going to refine the structure of the testing functions in Proposition <ref> by using the construction in <cit.>.
For ν∈|k|_f, assume that (π_ν,V_π_ν)∈Π_k_ν(_n) is generic. Then there exists a function ϕ_ν∈_π_ν(k_ν^×) such that
_ν(s,ϕ_ν):=∫_k_ν^×ϕ_ν(x)|x|_ν^s-1/2^×x=L(s,π_ν),
the support of ϕ_ν is contained in _ν∖{0}, and ϕ_ν is invariant under the action of _ν^×.
If n=1, then π_ν is a quasi-character of k_ν^×. If π_ν is unramified, it is well-known that one takes ϕ_ν(x)=|x|_ν^1/21__ν(x) with 1__ν the characteristic function of _ν, and has the following identity
_ν(s,ϕ_ν)=∫_k_ν^×ϕ_ν(x)|x|_ν^s-1/2^× x=1/1-π_ν(ϖ_ν)q^-s=L(s,π_ν),
which holds for all s∈ by meromorphic continuation, where ϖ_ν is the uniformizer of k_ν. It is clear that in this case ϕ_ν is supported on k_ν^×∩_ν=_ν∖{0} and is invariant under _ν^×.
If π_ν is ramified, then we know that L(s,π_ν)=1. We can take
ϕ_ν(x)=
1, if
x∈_ν^×,
0, otherwise.
Then according to our normalization of the Haar measure, we obtain by an easy computation that
_ν(s,ϕ_ν)=∫_k_ν^×ϕ_ν(x)|x|_ν^s-1/2^× x=1=L(s,π_ν).
It is clear that in this case, ϕ_ν is supported in _ν∖{0} and is invariant under the action of _ν^×.
In the following, we assume that n≥ 2.
For each non-negative integer m, we define the congruence subgroup K_0(_ν^m) as in <cit.> to be
K_0(_ν^m):={x=(x_ij)∈_n(_ν) x_n,1,⋯,x_n,n-1∈_ν^m }.
According to the classification of irreducible generic representations and <cit.>, there is a minimal positive integer c(π_ν) for which the vector space
V_π_ν^K_0(_ν^c(π_ν)):={v∈ V_π_ν π_ν(x)v=ω_π_ν(x_n,n)v, ∀ x∈ K_0(_ν^c(π_ν)) }
is non-trivial and in fact of dimension one. Choose v^∘∈ V_π_ν^K_0(_ν^c(π_ν)) and v^∘∈ V_π_ν^K_0(_ν^c(π_ν)), respectively, such that the matrix coefficient
φ_π_ν(g):=⟨π_ν(g)v^∘,v^∘⟩
has value 1 at _n. Since n≥ 2, we may take a Schwartz-Bruhat function f_ν∈(_n(k_ν)) of the form:
f_ν(x)=ω_π_ν^-1(x_n,n)/vol(K_0(_ν^c(π_ν))) if
x∈_n(_ν) with x_n,1,⋯,x_n,n-1∈_ν^c(π_ν) and x_n,n∈_ν^×,
0 otherwise.
Then by <cit.>, when s is sufficiently positive, one has that
∫__n(k_ν)f_ν(g)φ_π_ν(g)| g|^s+n-1/2 g=L(s,π_ν).
According to <cit.>, the fiber integration as defined in (<ref>) yields that
ϕ_π_ν(x)=|x|_ν^n/2∫__n(k_ν)_xf_ν(g)φ_π_ν(g)_xg,
where _n(k_ν)_x is the fiber at x of the determinant map as in (<ref>),
is well defined and when (s) is sufficiently positive, we have that
∫_k_ν^×ϕ_π_ν(x)|x|_ν^s-1/2^×x=L(s,π_ν).
It remains to verify the invariance property for this function ϕ_ν. If x∉_ν, we must have that
_n(k_ν)_x∩_n(_ν)=∅.
Thus for any g∈_n(k_ν)_x with x∉_ν, we must have that f_ν(g)=0. By the fiber integration in (<ref>), we have that ϕ_π_ν(x)=0 when
x∉_ν. Moreover, for any u∈_ν^×, we have
ϕ_π_ν(xu)
=|xu|_ν^n/2∫__n(k_ν)_xuf_ν(g)φ_π_ν(g)_xg
=|x|_ν^n/2∫__n(k_ν)_xf_ν(hu^*)φ_π_ν(hu^*)_xh,
where
u^*:=diag(u,1,1,⋯,1).
Since u^*∈ K_0(_ν^c(π_ν)), one can see at once that f_ν(hu^*)=f_ν(h) and φ_π_ν(hu^*)=φ_π_ν(h). Therefore we obtain that ϕ_π_ν(xu)=ϕ_π_ν(x) for any u∈_ν^×.
For any π∈_(_n), the Godement-Jacquet kernel H_π,s(x) enjoys the following expression:
H_π,s(x)=|x|^s-1/2∑_n≤ |x|a_n n^-s,
as a function in x∈_∞^× for all s∈.
Write
^×_f=_α=(α_ν)( ∏_νϖ_ν^α_ν_ν^×),
where ϖ_ν is the local uniformizer in k_ν and α runs over the algebraic direct sum ⊕_ν∈|k|_f. Consider the integral
∫__f^×^≥| x_∞|_^-1ϕ^⋆_f(x_f)|x_f|_^s-1/2^×x_f,
where ϕ_f^⋆=⊗_νϕ_ν and for ramified places we take ϕ_ν as given in Lemma <ref> since each local component π_ν of π is irreducible and generic when n≥ 2.
We know ϕ_ν is supported on the ring _ν of ν-integers according to Lemma <ref> and <cit.>. It follows that we may assume α_ν≥ 0 for all ν<∞.
If x_f belongs to the α=(α_ν)-component of (<ref>), then |x_f|=∏_νq_ν^-α_ν, where q_ν is the cardinality of
the residue field of k_ν. The range |x_f|_≥ |x_∞|_^-1 of the integral in (<ref>) is equivalent to the condition that
∏_νq_ν^α_ν≤ |x_∞|_. Since the integrand in (<ref>) is invariant under ∏_ν_ν^×, we obtain that
ϕ^⋆_f(x_f)|x_f|_^s-1/2
=
ϕ^⋆_f((ϖ_ν^α_ν))(∏_νq_ν^α_ν)^1/2(∏_νq_ν^-α_ν)^s
with (ϖ_ν^α_ν)∈_f^×. We may write any fractional ideal in k, in a unique way, as =∏_ν_ν^e_ν with
(e_ν)∈⊕_ν∈|k|_f, and regard the function ϕ_f^⋆ as
ϕ_f^⋆ =∏_ν_ν^e_ν↦ϕ_f^⋆((ϖ_ν^e_ν))=∏_ν∈|k|_fϕ_ν(ϖ_ν^e_ν)
Then the function ϕ_f^⋆ is supported on the set of integral ideals and (<ref>) can be written as
ϕ^⋆_f(x_f)|x_f|_^s-1/2
=
ϕ_f^⋆()·()^1/2·()^-s,
for any fractional ideal =∏_ν∈|k|_f_ν^α_ν.
According to the normalization of the Haar measure, the integral (<ref>) is equal to
∑_() ≤ |x_∞|_ϕ_f^⋆()·()^1/2·()^-s
=
∑_n≤ |x_∞|_(∑_()=nϕ^⋆_f())n^1/2n^-s.
where the summation runs over all the integral ideals of k.
On the other hand, for the particularly given Schwartz function ϕ^⋆_f∈_π_f(_f^×), we have that
L_f(s,π_f)
=∑_n=1^∞a_nn^-s=∫_^×_fϕ^⋆_f(x_f)|x_f|_^s-1/2^×x_f
=lim_|x_∞|→∞∫__f^×^≥| x_∞|_^-1ϕ^⋆_f(x_f)|x_f|_^s-1/2^×x_f
=lim_|x_∞|_→∞∑_() ≤ |x_∞|_ϕ_f^⋆()·()^1/2·()^-s
=∑_n=1^∞(∑_()=nϕ^⋆_f())n^1/2n^-s
for (s) is sufficiently positive.
By using the uniqueness of the coefficients of the Dirichlet series (see <cit.>), we obtain that
a_n=
∑_()=n(ϕ^*_f())n^1/2,
and hence
∫__f^×^≥| x_∞|_^-1ϕ^⋆_f(x_f)|x_f|_^s-1/2^×x_f
=∑_n≤ |x_∞|_a_n n^-s.
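As a minimal sanity check of this expression (a specialization not needed for the argument: take k to be the field of rational numbers, n=1 and π the trivial character, so that L_f(s,π_f) is the Riemann zeta function ζ(s)=∑_n≥ 1n^-s), the choice ϕ_ν(x)=|x|_ν^1/21__ν(x) at every finite place gives ϕ_f^⋆ the value n^-1/2 on the unique integral ideal of norm n, hence a_n=n^-1/2· n^1/2=1 for every n≥ 1, and the kernel reduces to
H_π,s(x)=|x|^s-1/2∑_n≤ |x|n^-s,
built from the partial sums of ζ(s); this matches the n=1 (Tate kernel) situation recalled at the beginning of this section.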
In order to define the dual kernel, we consider the local functional equation in the reformulation of the local Godement-Jacquet theory in <cit.>:
(1-s,_π_ν,ψ_ν(ϕ_ν))=γ(s,π_ν,ψ_ν)·(s,ϕ_ν).
By Proposition <ref> and <cit.>, we have that
γ(s,π_ν,ψ_ν)· L(s,π_ν)=ϵ(s,π_ν,ψ_ν)· L(1-s,π_ν).
Hence for (s) sufficiently negative, we obtain that
_f(1-s,_π_f,ψ_f(ϕ_f))
=
∫_^×_f_π_f,ψ_f(ϕ_f)(x_f)|x_f|_^1/2-s^× x_f
=(∏_ν<∞ϵ(s,π_ν,ψ_ν))· L_f(1-s,π_f).
If we write
(∏_ν<∞ϵ(1-s,π_ν,ψ_ν))L(s,π)=∑_n=1^∞a_n^*n^-s,
with a_n^*∈, then following the argument as in the proof of Proposition <ref>, we can obtain that
the dual kernel of the Godement-Jacquet kernel H_π,s(x,ϕ^⋆_f) with the particularly chosen ϕ^⋆_f can be written as
K_π,s(x)=K_π,s(x,ϕ_f^⋆)=|x|^s-1/2∑_n≤ |x|a_n^*n^-s.
With ϕ_f^⋆ as in Proposition <ref>, we have
K_π,s(x)=K_π,s(x,ϕ_f^⋆)=|x|^s-1/2∑_n≤ |x|a_n^*n^-s,
where {a_n^*} is defined via (<ref>).
We first claim that each local component of _π_f,ψ_f(ϕ_f^⋆) is invariant under _ν^×. In fact, if n=1, the claim is clear because the classical Fourier transform of an _ν^×-invariant function is still _ν^×-invariant by changing variables. If n≥ 2, at ramified places, since ϕ_ν is as given by the fiber integration of f_ν and φ_ν as in Lemma <ref>, we know from <cit.> that
_π_ν,ψ_ν(ϕ_ν)(x)=|x|_ν^n/2∫__n(k_ν)_x_ψ_ν(f_ν)(g)φ_π_ν(g^-1) g,
where _ψ_ν is the classical Fourier transform given by (<ref>). If we write that
u^*:=diag(u,1,⋯,1)
for any u∈_ν^×, then we already know φ_π_ν((gu^*)^-1)=φ_π_ν(g^-1). Since
_ψ_ν(f_ν)(gu^*)=∫_M_n(k_ν)ψ_ν((xu^*y))f_ν(y)^+y=∫_M_n(k_ν)ψ_ν((xy))f_ν((u^*)^-1y)^+y,
and by the definition of f_ν, we see that f_ν((u^*)^-1y)=f_ν(y), we know _π_ν,ψ_ν(ϕ_ν) is invariant under _ν^×. At the remaining unramified places where ϕ_ν=_π_ν, we know _π_ν,ψ_ν(_π_ν)=_π_ν and by <cit.> we know _π_ν,ψ_ν(ϕ_ν) is invariant under _ν^×. Let S_f be as in Proposition <ref> for ϕ_f^⋆. Then there are integers a_1,⋯,a_κ such that the support of _π_f,ψ_f is contained in
(∏_ν∈ S_f(_ν^a_ν∖{0})×∏_ν∉ S_f(_ν∖{0}))∩_f^×.
Write
_f^×=_α=(α_ν)(∏_ν∈|k|_fϖ_ν^α_ν_ν^×).
It is clear that _π_f,ψ_f(ϕ_f^⋆) is constant on each α-component and supported on the α=(α_ν)_ν-component with α_ν≥ a_ν for ν∈ S_f and α_ν≥0 for ν∉ S_f. We may write any fractional ideal in k in a unique way as =∏_ν_ν^e_ν and regard the function _π_f,ψ_f(ϕ_f^⋆) as a function on the set of fractional ideals sending to
∏_ν∈|k|_f_π_ν,ψ_ν(ϕ_ν)(ϖ_ν^e_ν).
Then we obtain that
_π_f,ψ_f(ϕ^⋆_f)(x_f)|x_f|_^s-1/2
=
_π_f,ψ_f(ϕ_f^⋆)()·()^1/2·()^-s
for x in the α=(α_ν)_ν-component, where =∏_ν∈|k|_f_ν^α_ν. Write =∏_ν∈|k|_f^a_ν, where for ν∈ S_f a_ν's are defined from (<ref>) and for ν∉ S_f we define a_ν=0. Then by the same argument, we obtain that
a_n^*=∑_⊂,()=n_π_f,ψ_f(ϕ_f^⋆)()()^1/2,
where the summation runs over all fractional ideal of k that are contained in with norm n
and
K_π,s(x)=|x|^s-1/2∑_n≤ |x|a_n^*n^-s.
Therefore we obtain a π-version of <cit.> when the kernel functions H_π,s and K_π,s are given in terms of the L-function L_f(s,π_f) with its Dirichlet series expression.
For any π∈_(_n), if the Godement-Jacquet kernel H_π,s and its dual K_π,s are defined as in Proposition <ref> and in (<ref>), respectively,
then
H_π,s(x)
=-
_π_∞,ψ_∞(K_π,1-s)(x)
=
-∫__∞^×k_π_∞,ψ_∞(x y)K_π,1-s(y)^×y.
as distributions on _∞^× if and only if s is a zero of L_f(s,π_f). Any unexplained notation is the same as in Theorem <ref>.
§.§ Some questions
Clozel's Theorem 1.1 carries the condition that (s)∈(0,1); what is the analogue of this condition in our case?
Using the test functions at ν<∞, we defined ϕ_f^⋆ as well as the kernel H_π,s and its dual kernel K_π,s.
How can one show that these kernels recover the kernels defined by Clozel in <cit.>?
How can one compute the Mellin transforms of the kernel functions H_π,s and K_π,s, and verify Conjecture E of <cit.> for n≤ 3?
The kernels should be holomorphic as functions in s∈. As for the smoothness in s, let {a_n}_n=1^∞ be any sequence of nonzero numbers such that a_n→ 0 as n→∞. Consider now
1/a_n(H_π,s+a_n(x_∞,ϕ_f) -H_π,s(x_∞,ϕ_f))=∫__f^×^>|x_∞|^-1_ϕ_f(x_f)|x_f|_^s+a_n-1/2-|x_f|^s-1/2/a_n^×x_f
=∫__f^×^>|x_∞|^-1_( s(x_f,a_n)-1/2)ϕ_f(x_f)ln|x_f|_· |x_f|_^s(x_f,a_n)-3/2^×x_f
for some s(x_f,a_n)∈[s,s+a_n] depending on both x_f and a_n. Taking note that |ln x|≤max{x,1/x} when x>0, then we have
∫__f^×^>|x_∞|_^-1|(s(x_f,a_n)-1/2)ϕ_f(x_f)ln|x_f|_· |x_f|_^s(x_f,a_n)-3/2|^×x_f
≤max{ |s-1/2|,|s+a_n-1/2| }∫__f^×^>|x_∞|_^-1|ϕ_f(x_f) (|x_f|_^min{s,s+a_n}-5/2 +|x_f|_^max{s,s+a_n }-1/2) |
^×x_f
<∞
according to the above estimation (<ref>). And since
lim_n→∞ϕ_f(x_f)|x_f|_^s+a_n-1/2-|x_f|_^s-1/2/a_n=(s-1/2)ϕ_f(x_f)ln |x_f|_· |x_f|_^s-3/2
pointwise for any x_f∈_f^×, hence by the dominated convergence theorem we see
lim_n→∞1/a_n(H_π,s+a_n(x_∞,ϕ_f)-H_π,s(x_∞,ϕ_f))=∫__f^×^>|x_∞|_^-1(s-1/2)ϕ_f(x_f)|x_f|_^s-3/2^×x_f
Therefore H_π,s(x_∞,ϕ_f) is differentiable and by an induction argument we see H_π,s(x_∞,ϕ_f) is smooth with respect to s∈.
An Approach to Solving the Abstraction and Reasoning Corpus (ARC) Challenge
Tan John Chong Min
============================
We utilise the power of Large Language Models (LLMs), in particular GPT4, which can be prompt engineered into performing an arbitrary task. Here, we give the model some human priors via text, along with some typical procedures for solving the ARC tasks, and ask it to generate i) a broad description of the input-output relation, ii) detailed steps of the input-output mapping, and iii) the test output, obtained by applying the detailed steps to the test input. The current GPT3.5/GPT4 prompt solves 2 out of 4 tested small ARC challenges (those with grids of 8x8 and below). With tweaks to the prompt to make it more specific for the use case, it can solve more. We posit that when scaled to a multi-agent system with usage of past memory and equipped with an image interpretation tool via Visual Question Answering, we may actually be able to solve the majority of the ARC challenge.
§ BACKGROUND
The ARC Challenge is a very interesting challenge, as it requires something counter to mainstream deep learning – learning from very few samples. Deep learning typically needs tens of thousands of samples to do well; for instance, learning to classify digits (MNIST) <cit.> requires around 50,000 training samples. Humans, in comparison, can learn how to identify different animals from just one or two observations. For instance, my 3-year-old kid can identify a giraffe in real life the first time he sees one, even though the only other time he was exposed to a giraffe was through a cartoon flash card. Modern AI systems are not well endowed with such capabilities, which means they will need to be trained extensively before being deployed in the real world, and after deployment they will be limited in their ability to adapt and learn as the environment changes.
In contrast, traditional rule-based systems (e.g. GOFAI) can “learn” quite fast, as any new situation can be interpreted without a learning phase, provided that the situation is already covered by the rules given to the system. Such rule-based systems could be symbolic systems or expert systems which already have the domain knowledge fed to them by human experts. However, the history of GOFAI has shown that it is difficult to engineer these rules by hand, and oftentimes even humans struggle to come up with the rules, as they may not be able to express them in words.
As you can see, there are shortcomings with the above two approaches, and a new kind of approach, one that learns fast and generalises to new situations, is needed to even have a chance at solving the ARC Challenge.
§ NEXT TOKEN PREDICTION FOR SELF-SUPERVISED LEARNING
There is a lot of structure in the world. These structures can be hard to represent via verbal rules, yet children can learn how physics work and how to interact with the world just by observation and action. Personally, I believe that simply observing is not enough – one has to perform actions in order to learn how one’s actions can affect the world. However, for tasks like learning language, the next action to take is simply to predict the next token and can be done without interaction with the world. Large Language Models (LLMs) such as GPT2 <cit.>, GPT 3.5 <cit.> and GPT4 <cit.> have utilised an extensive amount of self-supervised learning via next-token prediction in order to learn the structure of text (See Fig. <ref>). This is a huge breakthrough, as the predominant approach to deep learning - supervised learning - requires extensive human labelling and is expensive and impractical to obtain for large amounts of data. This self-supervised learning approach can generate labels simply by predicting the next token and is easily obtainable from the world's worth of text on the World Wide Web. For instance, the sentence "The cat sat on the mat" can easily be used in at least 5 different prediction tasks (assuming tokens are defined at the word level), as shown below:
* The → cat
* The cat → sat
* The cat sat → on
* The cat sat on → the
* The cat sat on the → mat
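To make the sample-reuse point concrete, here is a minimal Python sketch (illustrative only: it splits on whitespace to obtain word-level "tokens", whereas GPT-style models use subword tokenizers) that turns one sentence into the five (context, next-token) pairs listed above:

# Minimal illustration of next-token prediction targets.
# Word-level "tokens" are used purely for readability; real LLMs use subword tokenizers.
def next_token_pairs(sentence: str):
    tokens = sentence.split()
    # Every prefix of the sentence becomes a training input whose label is the following token.
    return [(" ".join(tokens[:i]), tokens[i]) for i in range(1, len(tokens))]

for context, target in next_token_pairs("The cat sat on the mat"):
    print(f"{context} -> {target}")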
High sample efficiency. This means that observations from the world can be reused in multiple input-output pairs, giving very high sample efficiency, since such a self-supervised learning method reuses the same sections of text multiple times.
Iterative processing of semantic meaning. Moreover, the Transformer architecture actually allows the embeddings of each token to be influenced by the most similar and closest neighbours via self-attention (via a combination of token embeddings plus position embeddings), which allows for the input representation to be refined in an iterative fashion, solving the case of ambiguous inputs or polysemy (multiple meanings of the same word). Such a hierarchical structure is illustrated in Fig. <ref>.
Feedback connections. I have always believed that current deep learning methods suffer from a lack of feedback connections to ground the lower levels of processing - it is widely known that neurons in the brain do not just form feedforward connections but also have a lot of feedback connections. However, recently, observing that increasing the size of Transformers was already sufficient to achieve better and better performance, such as in GPT3.5 and GPT4, I started to wonder if there is indeed a way for Transformers to ground the earlier layers' processing in the later layers' processing. I hypothesise that they are actually able to do some form of feedback grounding because of the skip connections present between decoder blocks, as illustrated in Fig. <ref>. The embeddings at the lower levels can actually be passed all the way to the later layers (largely unchanged except for LayerNormalisation, which affects all embeddings similarly), and can be processed in the same layer with potential grounding by the embeddings of the later layers. This is extremely powerful, as it can ground the input processing with knowledge gained at the later layers. For instance, in the text "The following did not happen: John went to the market and bought a bunch of eggs, vegetables and meat.", we are able to interpret the entire text in the opposite semantic meaning just because of the words "The following did not happen" at the beginning of the sentence. In fact, as will be discussed in the next section, this presence of skip connections may be the reason prompting and grounding in earlier context are so effective in LLMs.
§ PROMPTING AND ZERO-SHOT/FEW-SHOT LEARNING
Given that LLMs are seemingly able to perform inference at multiple scales of abstraction (see the earlier section), this opens an avenue of approaches whereby we can just tell the LLM what we want to do in natural language and use it to ground the generation. Such an instruction-based method of conditioning generations has proven useful in multiple natural language tasks, as shown by the flexible use of LLMs, prompted with just an instruction describing the task, on the GLUE <cit.> and SuperGLUE <cit.> benchmarks.
§.§ Zero-shot learning
LLMs are also able to do zero-shot learning very well. For instance, they can do zero-shot classification of new contexts simply by using the semantic meaning of the tokens encountered during training:
-2-2
"You are a classification model meant to classify the context of an input.
Context A: In the garden
Context B: In the hospital
Context C: In the mountains
Context D: In the sky
Give the contexts for the following inputs:
1. Wow, the clouds are so fluffy today
2. The IV drip is running out, get a nurse
3. The sheep on the pasture are so pretty
4. Have you watered the flowers today?
Return in the following form:
Number: Context Letter"
ChatGPT (GPT3.5, May 3 2023 version) returns the following output, which is in general correct:
-2-2
1. D: In the sky
2. B: In the hospital
3. C: In the mountains
4. A: In the garden
§.§ Few-shot learning
LLMs are also able to do few-shot learning pretty reliably. For instance, an LLM is able to do few-shot classification of odd and even numbers from just a few sample input-output pairs. In order for it to generate consistently, it needs to be given the framework of what the task is about and the possible outputs to ground the generation. Here is the example prompt given:
-2-2
You are a classification machine meant to classify between output A and B.
Input: 5
Output: A
Input: 7
Output: A
Input: 8
Output: B
Input: 10
Output: B
Input: 13
Output:
ChatGPT (GPT3.5, May 3 2023 version) returns the following output, which is correct:
-2-2
A
Hence, a trained LLM can be equipped with the knowledge of a new task either through zero-shot description-based prompting or few-shot example-based prompting, and can be the basis of a fast learning system that is adaptive to real-world inputs. Given the quick learning ability of LLMs via prompting, it is no wonder that prompt engineering quickly became very popular following the rise of larger LLMs.
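As an illustration of how such zero-shot or few-shot prompts can be assembled programmatically, here is a small Python sketch; the llm call in the last line is a hypothetical placeholder for whichever chat-completion client is used, not a reference to a specific API:

# Sketch: build a few-shot classification prompt like the odd/even example above.
def build_few_shot_prompt(task_description, examples, query):
    lines = [task_description]
    for inp, out in examples:
        lines += [f"Input: {inp}", f"Output: {out}"]
    lines += [f"Input: {query}", "Output:"]
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "You are a classification machine meant to classify between output A and B.",
    [(5, "A"), (7, "A"), (8, "B"), (10, "B")],
    13,
)
# answer = llm(prompt)  # `llm` is a hypothetical client call; "A" is the expected answer here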
§ GETTING THE LLM TO REVERSE ENGINEER THE INSTRUCTION
LLMs are actually capable of observing multiple input-output pairs and coming up with an instruction that captures the relation between them <cit.>. Furthermore, the Language-annotated Abstraction and Reasoning Corpus (LARC) showed that 88% of the original ARC tasks can be represented by a text instruction from which another human can solve the task without needing the input-output examples <cit.>. Another paper has also highlighted the efficiency of prompt-based instructions, as one prompt can be worth 100s of training examples on various classification tasks <cit.>.
The difficulty of the ARC challenge is that the machine (or human) needs to infer instructions based on limited examples. These instructions are usually difficult to deduce, as one needs to find the pattern with very few sample input-output pairs. However, once the instruction is deduced, it is very easily communicable to other humans using text. Hence, we reframe the ARC Challenge with the following steps:
* Deduce the input-output mapping rule using the LLM from the input-output examples
* Apply this rule to the test input to get the test output
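In code, this reframing is roughly the following two-call procedure (a sketch only: llm stands for a hypothetical call to a model such as GPT4, and the prompt strings are abbreviations of the full prompt given later):

import json

# Sketch of the two-step reframing; `llm` is a hypothetical text-in, text-out LLM call.
def solve_arc_task(task, llm):
    # Step 1: deduce the input-output mapping rule from the training pairs.
    examples = json.dumps(task["train"])
    rule = llm("Infer the simplest rule mapping inputs to outputs in these examples:\n" + examples)
    # Step 2: apply the deduced rule to the test input to obtain the test output.
    test_input = json.dumps(task["test"]["input"])
    return llm(f"Rule: {rule}\nApply this rule to the test input and return the output grid:\n{test_input}")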
§ CHAIN OF THOUGHT
It is often difficult to do planning for complicated tasks which involve multiple steps. The ARC Challenge sometimes also involves multiple manipulations of the input image in order to derive the output. For these kinds of problems, we can utilize approaches such as Chain of Thought (CoT) prompting <cit.>, which conditions the language model with demonstrations that spell out intermediate details, such as the steps of a mathematical computation. Moreover, we do not even need to provide the human-labelled detailed demonstrations as shown in the CoT paper, but can get the LLM to generate its own thoughts. The "ReAct: Synergizing reasoning and acting in language models" paper shows one way of prompting the LLM so that it generates detailed thoughts and acts upon them <cit.> - using the Thought, Action, Observation framework.
Hierarchical Planning. CoT is still a largely linear way to do planning, as it involves having the previous action or plan before generating the next one. More recently, LLMs have been utilised in a hierarchical fashion, whereby the first step involves coming up with the broad plan, and the second step is to come up with the details. This is utilised in HuggingGPT <cit.> and AutoGPT <cit.> to generate an overall plan before breaking down into the detailed steps. This way of hierarchical planning was also used in the Generative Agents paper <cit.> to generate a detailed action plan for an agent's day.
This approach of hierarchical planning is actually quite similar to how humans think. We do not have a detailed plan of our day right at the beginning, but think in a broad way, like doing work in the morning, lunch, meeting friends in the afternoon, home in the evening and so on. Then, when prompted "why do you want to do this", we go up a layer of abstraction to think about the goals of our lives. When prompted "how do you want to do this", we go down a layer of abstraction to think about the specifics of the various plans of our lives. Hence, explicitly prompting the LLM to come up with the broad plan, and then using the broad plan to ground the generation of the detailed plan, is a promising approach. It also helps circumvent the problem of the LLM having limited planning abilities, as we can plan the broad steps first, which are usually much shorter than the entire sequence of detailed steps.
§ GROUNDING IN HUMAN BIASES
The ARC Challenge is difficult for computers because there is a huge number of possible ways to interpret high-dimensional real-world data, but easy for humans because humans can curate the possibilities based on some innate biases, such as the Gestalt principles <cit.>. In fact, without such innate biases, it can be difficult for anyone to learn quickly in the real world. <cit.> wrote a book, "Born Knowing", which highlights that chicks are born with plenty of innate biases, like a preference for animate objects, which could help them learn faster. Similarly, human newborns come with a preference for face-like objects to help with recognition of the mother. Some human behaviours like suckling are also innate, rather than learnt, to facilitate survival.
Alas, we may not be born tabula rasa like what is done in AlphaZero <cit.>. In experiments with AlphaZero, it takes weeks with a single GPU just to learn how to play well enough to win against a human <cit.> in a 4-in-a-row Tic-Tac-Toe game on a 7x7 grid with an unplayable position. Simply changing the unplayable position was enough to cause AlphaZero to become weaker than humans, and extensive training on various random unplayable positions was required for it to learn. Hence, for generalisability, pursuing optimality in Reinforcement Learning from a clean slate, as is done in static games like Chess or Go, may not be the way to go. Rather, we need to ground the possibilities of what we need to do, and how we interpret perception, with some innate bias or some past experience in order to learn fast and be generalisable.
Since LLMs like GPT4 cannot be trained on a new set of input-output pairs due to constraints of the API, we use prompting to instill the human biases required for the machine to reduce the possibilities when interpreting the input-output pairs of the ARC Challenge.
§ NAÏVE METHOD (SINGLE PROMPT)
Given that LLMs have proven effective at learning an arbitrary task just by prompting, we try a naïve method of getting the model to solve ARC tasks from a single prompt alone. This prompt should be as generalisable as possible and should not be fine-tuned to any single task.
Using the above ideas of grounding in human biases, CoT prompting, and getting LLMs to come up with a broad description first, then detailed steps, and then using the detailed steps to map from test input to test output, we arrive at an example prompt for ARC, given below:
“You are given a series of inputs and output pairs.
These are all in the form of a 2D array, representing a 2D grid, with values from 0-9.
The values are not representative of any ordinal ranking.
Input/output pairs may not reflect all possibilities, you are to infer the simplest possible relation making use of symmetry and invariance as much as possible.
The input can be something like:
> entire grid being the sandbox to manipulate
> using a part of the grid (individual squares or portions of the grid) to depict instructions of how to do the task. symmetry is important.
> using regions of similar value to depict area for answer of the task
The output can be something like:
> same output size as input after performing action
> output one of the fixed predetermined patterns used to classify the input image
> using output to show the ordering of objects, such as by size, height, width, position, value
Each of the input-output relation can be done with one or more actions chained together, which could be something like (not exhaustive):
- object view (defined as continuous squares connected horizontally, vertically and/or diagonally, separated by 0 values)
> objects can be of the same value, or different values combined together
> objects may be hidden beneath other objects
> rotating or shifting objects
> changing value of object
> objects can be manipulated and mapped to a different number of output squares
> different objects may be manipulated differently based on context
- overall view
> rotation / reflection symmetry
> continuation of a pattern
> changing values
- segment view
> combine two segments of the input into one single one based on a simple rule
> rule can be certain values are prioritized over others, or combination of values into new ones
Do the following:
- What is the broad description of the input/output relation that holds for all input/output pairs?
- What is the step by step description of the input/output relation that holds for all input/output pairs?
- Apply this description to the test input and find out the answer 'to_be_filled'.”
[Insert .json for task here with all the input-output pairs in json format, with the test output replaced by 'to_be_filled']
The method to derive the json format is simply replacing the output section of the original json format from the ARC Challenge 2 dataset with 'to_be_filled'. The code to do so can be found here: <https://github.com/tanchongmin/ARC-Challenge/blob/main/arc_challenge.ipynb>
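For reference, this masking step can be reproduced with a few lines of Python. The snippet below is a hypothetical re-implementation of that step (the file name is illustrative); the actual code lives in the linked notebook.

import json

def mask_test_output(task_path: str) -> dict:
    """Load an ARC task .json and replace every test output with 'to_be_filled'."""
    with open(task_path) as f:
        task = json.load(f)
    tests = task["test"] if isinstance(task["test"], list) else [task["test"]]
    for pair in tests:
        pair["output"] = "to_be_filled"
    return task

# Example usage (the file name is illustrative):
# prompt_json = json.dumps(mask_test_output("66e6c45b.json"))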
§.§ Example: Public Evaluation Task 157 (66e6c45b.json)
Here is the modified .json for this task without test output:
{'train': [{'input': [[0, 0, 0, 0], [0, 3, 4, 0], [0, 7, 6, 0], [0, 0, 0, 0]], 'output': [[3, 0, 0, 4], [0, 0, 0, 0], [0, 0, 0, 0], [7, 0, 0, 6]]}, {'input': [[0, 0, 0, 0], [0, 5, 6, 0], [0, 8, 3, 0], [0, 0, 0, 0]], 'output': [[5, 0, 0, 6], [0, 0, 0, 0], [0, 0, 0, 0], [8, 0, 0, 3]]}], 'test': {'input': [[0, 0, 0, 0], [0, 2, 3, 0], [0, 4, 9, 0], [0, 0, 0, 0]], 'output': 'to_be_filled'}}
Here is the output for Public Evaluation Task 157 (66e6c45b.json) using GPT3.5:
Broad Description:
The input/output relation involves manipulating the input grid by applying a series of transformations based on certain rules and symmetries to produce the output grid.
Step-by-Step Description:
Identify any objects in the input grid based on contiguous squares with the same value.
Apply transformations to the identified objects based on their position, symmetry, and value. These transformations may include rotating, reflecting, shifting, or changing the value of the object.
Combine or split objects as needed based on certain rules or conditions.
Repeat steps 1-3 until the output grid is produced.
For this specific test input, we can apply the following transformations to obtain the output grid:
Identify two objects with values 2 and 3 respectively.
Rotate the object with value 2 by 90 degrees counterclockwise and shift it to the right to fill the empty space.
Rotate the object with value 3 by 90 degrees clockwise and shift it to the left to fill the empty space.
Combine the two objects to form the output grid:
[[2, 0, 0, 3],
[0, 0, 0, 0],
[0, 0, 0, 0],
[4, 0, 0, 9]]
Therefore, the answer is [[2, 0, 0, 3], [0, 0, 0, 0], [0, 0, 0, 0], [4, 0, 0, 9]].
§.§ Example: Public Evaluation Task 162 (68b67ca3.json)
Here is the modified .json for this task without test output:
{'train': [{'input': [[2, 0, 2, 0, 1, 0], [0, 0, 0, 0, 0, 0], [2, 0, 1, 0, 0, 0], [0, 0, 0, 0, 0, 0], [3, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0]], 'output': [[2, 2, 1], [2, 1, 0], [3, 0, 0]]}, {'input': [[3, 0, 0, 0, 2, 0], [0, 0, 0, 0, 0, 0], [8, 0, 0, 0, 8, 0], [0, 0, 0, 0, 0, 0], [0, 0, 1, 0, 0, 0], [0, 0, 0, 0, 0, 0]], 'output': [[3, 0, 2], [8, 0, 8], [0, 1, 0]]}, {'input': [[1, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0], [0, 0, 2, 0, 0, 0], [0, 0, 0, 0, 0, 0], [6, 0, 0, 0, 6, 0], [0, 0, 0, 0, 0, 0]], 'output': [[1, 0, 0], [0, 2, 0], [6, 0, 6]]}], 'test': {'input': [[3, 0, 3, 0, 4, 0], [0, 0, 0, 0, 0, 0], [7, 0, 0, 0, 1, 0], [0, 0, 0, 0, 0, 0], [7, 0, 0, 0, 1, 0], [0, 0, 0, 0, 0, 0]], 'output': 'to_be_filled'}}
Here is the output for Public Evaluation Task 162 (68b67ca3.json) using GPT4:
Broad description of the input/output relation:
The input/output relation involves extracting non-zero elements from specific positions in the input grid and arranging them into a smaller 3x3 output grid, maintaining their relative positions.
Step by step description of the input/output relation:
Identify non-zero elements in the following positions of the input grid: (1,1), (1,3), (1,5), (3,1), (3,3), (3,5), (5,1), (5,3), and (5,5).
Place these non-zero elements into the corresponding positions in a 3x3 output grid.
Applying this description to the test input:
Test input:
[[3, 0, 3, 0, 4, 0],
[0, 0, 0, 0, 0, 0],
[7, 0, 0, 0, 1, 0],
[0, 0, 0, 0, 0, 0],
[7, 0, 0, 0, 1, 0],
[0, 0, 0, 0, 0, 0]]
Following the step-by-step description, we extract the non-zero elements from the specified positions:
(1,1): 3, (1,3): 3, (1,5): 4, (3,1): 7, (3,3): 0, (3,5): 1, (5,1): 7, (5,3): 0, (5,5): 1
Place these elements into the 3x3 output grid:
[[3, 3, 4],
[7, 0, 1],
[7, 0, 1]]
So, the output for the test input is:
[[3, 3, 4],
[7, 0, 1],
[7, 0, 1]]
§.§ Evaluation
This naïve approach has some success with the smaller ARC tasks. So far, with limited testing, this naïve method on GPT3.5 or GPT4 has solved the following tasks out of 4 tested tasks on the Evaluation set: 157 (66e6c45b.json) and 162 (68b67ca3.json). The following two tasks failed, although only slightly, and are likely to be solved with more specific prompt engineering: 158 (66f2d22f.json) and 170 (6ea4a07e.json). See the testing of GPT4 on the ARC Challenge via this url: <https://www.youtube.com/watch?v=vt2yG1da8Fg>. With some more fine-tuning of the actions that can be performed, I believe we can get it to work for more tasks. The key takeaway is that prompting can help to ground the model to think of feasible solutions it would otherwise not have considered.
That said, the json representation of the 2D array is not a great one for extracting object-level relations, and the prompt needs to repeatedly ask GPT4 to think of the input in terms of objects. The prompt is intended to be very generic: it gives the broad form of the input-output relation, along with some tips on how prior ARC puzzles can be solved. As GPT4 is not that great at detailed planning, we follow the hierarchical approach of HuggingGPT <cit.> and AutoGPT <cit.> and ask the model to list the broad description first. Then, grounded by the broad description, the model generates the detailed step-by-step description. This description is then used to get the answer by applying the steps to the test input.
Initially, I tried to get GPT4 to output a Python program to handle the manipulation from input to output. While this could work for simple problems, in general I found that the generated program could differ from the intention expressed in the step-by-step description, and that the step-by-step description in words was usually more accurate. As such, the example prompt above does not ask GPT4 for a program output.
§ IMPROVEMENTS TO THE NAÏVE METHOD
Following my experiments with the naïve method, I have identified the following issues:
* Limited understanding of what an object is from the json file
* Limited context length to store json representations of large grids / multiple input-output samples
* Limited context length to store instructions
* Limited fact-checking abilities to determine if the input-output relation derived is correct
These are the potential solutions to the above issues:
* In order to do the ARC challenge well, it would be good to imbue in the model a sense of what an object is, and also how images look in the real world. This is because some ARC challenges use concepts like object permanence or gravity, which are present in real-world situations but not for a computer that is only trained on the pixels of the ARC challenge. As such, we could take a leaf from the Visual Question Answering (QA) domain <cit.> and give the LLM the ability to ask questions about the input and output images and iteratively refine its input-output relation based on the answers. This Visual QA could be done with a base model trained on images in the wild, but should be fine-tuned on past ARC Challenge data, as the distribution of pixel information differs between the real world and the ARC Challenge dataset, even though the concepts may be the same. My hypothesis is that a pixel-based representation may be too high-dimensional to model the world, hence being able to compress it down to low-dimensional text via Visual QA would be a huge plus for interpretability.
* Instead of putting all the input-output examples in the same json, we can separately ask the model to give a description for each input-output pair. Then, we can prompt another model to find similarities between the descriptions of these input-output pairs and collate them into a general input-output representation.
* Instead of having only one GPT model produce the prompt for the instructions, we could split the prompt into multiple parts. For instance, the object view can be one model, the overall view another model, the segment view another model, and so on. This would mean that we can ground the instructions in a more fine-grained action space, which would increase the likelihood of solving the ARC challenges. We can then select the best-performing instruction by asking the various models to come up with different sets of input-output instructions and collating them into one pool of potential instructions. Then, we can evaluate all of them and use the best one.
* We could have a separate GPT model to evaluate the input-output mapping. This model takes in the pool of potential instructions generated by the above steps and evaluates them one by one. The moment any instruction fails to reproduce the input-output map of the training cases, it is discarded (see the sketch after this list). This approach of generating many potential mappings and discarding them based on grounding in the training set is used in AlphaCode, where multiple programs are generated simply by changing the hyperparameters or by drawing more random generations from the LLM, and the non-performant ones that do not give the right output on the training cases are then eliminated <cit.> (see Fig. <ref> for an illustration). Currently, I envision this model taking in just the instruction and the input json, outputting the json after applying the instruction, and checking that it matches the actual output json. An alternative is to ask GPT4 to come up with Python code to do the input-output mapping and then run the code to check for correct output - I suspect this may be inferior due to problems mapping the instruction to the right Python program.
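The filtering idea in the last point can be sketched as follows, under the assumption that each candidate mapping is available as a Python callable (in practice it could be a natural-language instruction executed by a separate GPT model, or a generated program); the helper names are hypothetical.

def passes_training(candidate, train_pairs: list[dict]) -> bool:
    """A candidate survives only if it reproduces every training output exactly."""
    try:
        return all(candidate(p["input"]) == p["output"] for p in train_pairs)
    except Exception:
        return False  # a crashing candidate counts as a failure

def filter_candidates(candidates, train_pairs: list[dict]):
    """AlphaCode-style filtering: keep only candidates consistent with all training pairs."""
    return [c for c in candidates if passes_training(c, train_pairs)]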
§ GPT AS A SYSTEM
With the recent trend of using multiple LLMs together as a system, such as in AutoGPT <cit.>, we could potentially allow the model to scale better by off-loading various tasks to different LLM models and letting all these models work together in a large ecosystem. Such a system is outlined in the Improvements section above, and more can be tuned to make it as performant as possible.
§ MEMORY AS THE WAY AHEAD
Given that we are not able to train the weights of GPT to fit the training set of the ARC challenge, using memory is the best way to imbue the model with learnt knowledge. Humans learn very fast because we have memory to ground our current experiences, and we can choose the best action based on what we have seen in the past. For example, if I see a snake on Path A, I will avoid Path A next time and choose Path B instead. This instantaneous way of learning is not natural in deep learning, which typically takes hundreds, thousands, or more iterations to update the weights sufficiently, such as in Deep Reinforcement Learning. A more detailed explanation can be found in "Learning, Fast and Slow" <cit.>.
Currently, the naïve method does not use memory of what has been solved earlier. If we were to use memory, I posit that the best way is via text descriptions of the broad and detailed input-output relations stored from earlier training examples. This makes the memory more generic than storing the images themselves. We then have two memories of instructions, one I call BroadInstruct and the other DetailedInstruct, which store the broad descriptions and detailed steps of instructions from earlier ARC tasks. I envision a system using them as follows:
* Use the naïve method to determine the broad description of the task
* From the broad description, retrieve from a database (e.g. Pinecone) using OpenAI Vector Embeddings <cit.> or similar embeddings to retrieve the top k neighbours from BroadInstruct. k is a hyperparameter that can be tuned, and can be set to 5 by default.
* Conditioned on the top k neighbours as context, perform retrieval-augmented generation <cit.> to generate the refined broad description of the task
* Repeat the earlier steps until convergence
Now, having generated the broad description of the task, we move on to generate the detailed steps.
* Use the naïve method with the broad description as context to determine the detailed steps of the task
* From the generated detailed steps, retrieve from a database (e.g. Pinecone) using OpenAI Vector Embeddings <cit.> or similar embeddings to retrieve the top k neighbours from DetailedInstruct. k is a hyperparameter that can be tuned, and can be set to 5 by default.
* Conditioned on the top k neighbours as context, perform retrieval-augmented generation <cit.> to generate the refined detailed steps of the task
* Repeat the earlier steps until convergence
Hence, we can utilise past knowledge of earlier ARC tasks for more accurate conditioning of the broad description and detailed steps needed for future ARC tasks. If the task is solved, we can then add its broad and detailed descriptions to BroadInstruct and DetailedInstruct, respectively.
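A minimal sketch of the retrieval step is given below; embed is a placeholder for any sentence-embedding call (e.g. OpenAI embeddings or sentence-transformers), and BroadInstruct is assumed here to be a simple in-memory list of (description, vector) pairs rather than a hosted vector database such as Pinecone.

import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder for an embedding call (e.g. OpenAI embeddings or sentence-transformers)."""
    raise NotImplementedError

# BroadInstruct: in-memory list of (stored_description, stored_vector) pairs
# built from previously solved ARC tasks; DetailedInstruct is analogous.
BroadInstruct: list = []

def top_k_neighbours(query: str, memory: list, k: int = 5) -> list:
    """Return the k stored descriptions most similar to the query (cosine similarity)."""
    q = embed(query)
    q = q / np.linalg.norm(q)
    scored = [(float(np.dot(q, v / np.linalg.norm(v))), text) for text, v in memory]
    scored.sort(reverse=True)
    return [text for _, text in scored[:k]]

# The retrieved neighbours are then prepended to the prompt as context for the
# retrieval-augmented refinement of the broad description (and likewise for the
# detailed steps with DetailedInstruct).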
Apart from imbuing learning ability, retrieval-augmented generation has the added benefit of increasing the consistency of the LLM-generated output, as it is more in line with what is required, which may help in reaching the right solution in fewer generations. For more complicated problems (more complex than the ARC challenge), in order to constrain memory usage given limited storage space, we can also store memories selectively based on how surprising or how "emotional" the experience is. These ideas can be explored in future challenges where there is too much perceptual information and memory storage is a constraint; for ARC, I believe we can simply keep all the memories, as the number of ARC tasks is not large.
§ CONCLUSION
Overall, the ARC challenge is a very unique one, and can serve to pave the way for systems that learn fast and generalise well to arbitrary tasks. With the right innate biases instilled via prompting, the right hierarchical structure to condition the generation of detailed steps on a broad description, a multi-agent architecture to split long prompts into performant smaller sub-systems, a better way to interpret images using Visual QA, as well as better learning and grounding in past memory, I posit that GPT4 can eventually be made to solve the majority of the ARC tasks.
|
http://arxiv.org/abs/2306.10849v1
|
20230619110036
|
Detection of seven 2+2 doubly eclipsing quadruple systems
|
[
"P. Zasche",
"Z. Henzl",
"M. Masek",
"R. Uhlar",
"J. Kara",
"J. Merc",
"H. Kucakova"
] |
astro-ph.SR
|
[
"astro-ph.SR"
] |
Petr Zasche, [email protected]
^1 Charles University, Faculty of Mathematics and Physics, Astronomical Institute, V Holešovičkách 2, CZ-180 00, Praha 8, Czech Republic
^2 Hvězdárna Jaroslava Trnky ve Slaném, Nosačická 1713, Slaný 1, 274 01, Czech Republic
^3 Variable Star and Exoplanet Section, Czech Astronomical Society, Fričova 298, 251 65 Ondřejov, Czech Republic
^4 FZU - Institute of Physics of the Czech Academy of Sciences, Na Slovance 1999/2, CZ-182 00, Praha, Czech Republic
^5 Astronomical Institute, Academy of Sciences, Fričova 298, CZ-251 65, Ondřejov, Czech Republic
^6 Research Centre for Theoretical Physics and Astrophysics, Institute of Physics, Silesian University in Opava, Bezručovo nám. 13, CZ-746 01, Opava, Czech Republic
Detection of seven doubly eclipsing quadruples
Zasche et al.
In this work, we study a heterogeneous group of seven stellar systems for the first
time. Despite their different distances or spectral types, all of them belong to a very rare group
of quadruple systems of 2+2 architecture, where both of the inner pairs harbor eclipsing binaries.
These systems are: ASASSN-V J102911.57-522413.6 (inner periods 0.57272, and 3.79027 days), V1037
Her (0.78758 and 5.80348 days), WISE J181904.2+241243 (0.36713 and 0.41942 days), V2894 Cyg
(2.57434 and 1.30579 days), NSVS 5725040 (1.79368 and 0.76794 days), WISE J210230.8+610816
(1.84324 and 0.57159 days), and ZTF J220518.78+592642.1 (2.79572 and 3.34615 days). Their outer
mutual periods are 9.3, 25.4, 18.7, 27.5, 2.6, 2.2, and 14.0 yr, respectively. These outer periodicities were derived using the longer time span of photometric observations of these systems and by analysing the period changes of both inner pairs via ETVs (eclipse-timing variations). Most of the studied systems are detached, as evidenced by the proper modelling of their light curves. A few of them show significantly eccentric orbits with apsidal motion (e.g. V2894 Cyg and NSVS 5725040). Further spectroscopic follow-up observations would offer a better characterization of the component stars' parameters (e.g. for NSVS 5725040), as well as a potential interferometric detection of the systems as real doubles on their mutual orbits (e.g. for V1037 Her). A rather interesting excess of systems close to a 3:2 mean motion resonance is seen only for early spectral-type stars with higher temperatures.
Detection of seven 2+2 doubly eclipsing quadruple systems
Zasche, P. 1,
Henzl, Z. 2,3,
Mašek, M. 3,4,
Uhlař R.3,
Kára, J. 1,
Merc, J. 1,
Kučáková, H. 1,3,5,6
Received July 31, 2023; accepted ???
§ INTRODUCTION
Many important findings derived from classical studies of eclipsing binaries (hereafter EBs) are still relevant today. Despite the fact that these methods are about a century old, EBs still represent a useful tool for deriving many astrophysical parameters of stars and their orbits, and for studies of stellar populations, their formation mechanisms, stellar structures, evolution, and so on (see e.g. , or ).
The study of quadruples comprising two eclipsing binaries in a 2+2 architecture is still a rather novel topic, since the first so-called doubly eclipsing system (V994 Her) was only discovered by <cit.>. With two distinct sources of eclipses, which can, in principle, be modelled independently, there are many more constraints that can be taken into account: the independent analyses should lead to the same distance, the same age, the same metallicity, and so on. To prove the quadruple nature, we have to confirm that both binaries actually orbit around a common barycenter, via spectroscopy, interferometry, or an analysis of the eclipse-timing variation (ETV) signals of both inner eclipsing binaries. We chose the last method for the study presented here, given the long-term collection of photometric data that spans many years – up to a few decades.
The group of doubly eclipsing systems showing two periods has expanded in recent years and now counts more than 350 stellar systems. However, detailed analyses that definitively prove a 2+2 quadruple architecture are still relatively rare. These have mostly been systems on very short mutual orbits that usually show large dynamical interactions, published by the group of authors associated with T. Borkovits & S. Rappaport <cit.>. These studies even include the discovery of a sextuple system of three eclipsing binaries <cit.>. In addition, there have also been discoveries made by our group, focusing mainly on systems with longer mutual orbital periods, carried out on the basis of archival photometry and our own data <cit.>. The topic of close, dynamically interacting multiples was comprehensively summarized in a recent review by <cit.>.
§ THE SELECTED SYSTEMS
Our process for choosing these specific systems was relatively straightforward. We scanned many potential doubly eclipsing systems and tried to identify the ones that clearly exhibit period variations for both inner eclipsing pairs. Such variations in the eclipse times have to be adequately covered for both the A and B pairs, and have to be in opposition (anti-phased) for A and B, respectively. This usually means that such a multiple system should also have both of its eclipsing periods adequately covered in older, ground-based data from different databases, so that the eclipses of both pairs can be detected and their times derived. For some of the systems this was quite problematic, especially as the data suffer from large uncertainties; nevertheless, at least some indication of a movement of both pairs around a common barycenter was detected. The necessity of all these systems also being visible in the older, ground-based data restricted the sample to relatively bright systems (10-15 mag), located in both the southern and northern hemispheres.
The selected systems were chosen from our recent publication of new candidate doubly eclipsing stars showing two sets of eclipses <cit.>, along with one system from the publication by <cit.>. Two others from our sample are presented here for the first time as doubly eclipsing quadruples, namely WISE J181904.2+241243 and NSVS 5725040. We refer to Table <ref> for a summary of basic information about these stars, their various catalogue designations, and their positions on the sky.
§ PHOTOMETRIC DATA USED FOR THE ANALYSIS
The photometric data used in the current study can be divided into two parts. First, there are the high-precision data from the Transiting Exoplanet Survey Satellite (TESS, ). These data were used for the light curve modelling of both inner eclipsing binaries to derive their basic properties, such as relative radii, inclinations, and fractional luminosities. They were extracted from the TESS archive using the lightkurve tool <cit.>. Typically, several TESS sectors of data are available for each of the stars.
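For reference, a minimal lightkurve call for extracting and stitching the TESS sectors of one of the targets could look like the sketch below; the target name is illustrative, and the exact extraction settings used in this work may differ.

import lightkurve as lk

# Search for TESS light curves of a target (the name is illustrative).
search = lk.search_lightcurve("V1037 Her", mission="TESS")
lcs = search.download_all()

# Stitch the available sectors together and clean the series before modelling.
lc = lcs.stitch().remove_nans()
lc.plot()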
In addition, we also used older, ground-based archival photometry for the stars. These data are very useful when trying to trace the period variations of both pairs via the ETV method. Although much more scattered than the TESS data, this photometry provides a very useful source of information thanks to its time span, which often goes back several decades. Without these data, it would be very difficult to prove the ETV and definitively confirm the mutual movement of both binaries using the TESS archive alone.
In addition, several dozens of nights of observation for these targets were also carried out for
the purposes of this study. These heterogeneous data were secured at several observatories: 1.
Ondřejov observatory in Czech Republic, using a 65-cm telescope and G2-MII CCD camera equipped
with standard V and R photometric filters; 2. Danish 1.54-m telescope on La Silla in Chile,
remotely controlled, using the R and I filters; 3. FRAM 25-cm telescope located on La Palma
(Observatorio del Roquede de los Muchachos, see ); 4. FRAM 30-cm
telescope located in Argentina (part of the Pierre Auger observatory, see
); 5. Three different private observatories in Czech Republic, using
smaller telescopes, with observers from the team of co-authors: M.Mašek, R.Uhlař, and
Z.Henzl.
For the reduction of all these data, standard procedures using dark frames and flat fields were
used, and the photometry was derived using standard aperture-photometry tools. All of these
photometric data points were only used for calculating precise times of eclipses for tracing the
ETVs with a higher degree of conclusiveness. All of these dedicated observations of the stars are
plotted as red symbols in the figures included throughout this paper.
§ ANALYSIS
For the light-curve (hereafter LC) modelling, we used the well-known programme PHOEBE <cit.>, which is originally based on the Wilson-Devinney algorithm <cit.>. However, since we have no radial velocities for our objects, several simplifying assumptions had to be made prior to fitting the individual LCs. For example, deriving the mass ratio solely from the photometry is usually problematic for detached binaries, as has been stated previously in the literature (see e.g. ). For this reason, we fixed its value to 1.0 for most of our detached binary systems. In addition, the synchronicity parameters were kept fixed at 1.0, while the albedo and gravity brightening coefficients were also kept fixed at the values suggested by the temperature.
The input temperature values for the primary components were taken from the latest Gaia DR3
catalogue <cit.>, using the values from the GSP-Phot
pipeline. These values are summarised in Table 1.
We mostly proceeded step by step according to the following scheme. First, using all the photometric data from the TESS satellite, we identified the more pronounced eclipsing pair (named pair A), built its phased light curve, and carried out a preliminary fit of this LC shape. After subtracting this LC, we obtained preliminary photometry for pair B only. Having done a preliminary analysis of pair B as well, we returned to the complete photometry and re-analysed the LC of pair A on the residual data. Afterwards, we returned to pair B again and improved its fit. This iterative approach was repeated several times. After subtracting both the A and B light curves, the complete residuals should not show any evident phase-dependent variations; we take this as proof that the LC shapes are satisfactory.
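Schematically, this iterative disentangling can be written as the following loop; fit_eclipsing_model is a purely illustrative placeholder for a PHOEBE (or similar) fit, assumed to return the model's deviation from the out-of-eclipse level so that the two contributions add linearly.

import numpy as np

def fit_eclipsing_model(time, flux, period):
    """Placeholder for a PHOEBE (or similar) fit. Assumed to return the model's
    deviation from the out-of-eclipse level, so the two pairs add linearly."""
    raise NotImplementedError

def disentangle(time, flux, period_A, period_B, n_iter=5):
    """Alternately fit pair A and pair B, each time on the residuals of the other."""
    dev_A = np.zeros_like(flux)
    dev_B = np.zeros_like(flux)
    for _ in range(n_iter):
        dev_A = fit_eclipsing_model(time, flux - dev_B, period_A)
        dev_B = fit_eclipsing_model(time, flux - dev_A, period_B)
    residuals = flux - dev_A - dev_B   # should show no phase-dependent signal
    return dev_A, dev_B, residuals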
As a second step, we needed to derive the individual times of eclipses (using our AFP method, as described in ). This method uses the light curves from various databases or surveys, phased with linear ephemerides over a longer time interval. This has to be done because only sparse photometry is available, and combining the data over more epochs makes the phased light curve adequately covered for the method. We usually used a time interval of one year, but this can be changed arbitrarily with respect to the number of data points in each interval. With such an approach, we derived a more suitable orbital period than the one originally assumed. With the new period, the whole analysis and the LC modelling process were repeated, and then iterated several more times.
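A rough sketch of the underlying idea (not the actual AFP implementation) is to split the sparse survey photometry into seasons and phase-fold each season with a fixed linear ephemeris; the eclipse phase shift measured in each folded curve then yields one eclipse time per season. The helper below is illustrative only.

import numpy as np

def seasonal_phase_curves(time, mag, period, t0, season=365.25):
    """Split sparse survey photometry into seasons and phase-fold each season with a
    fixed linear ephemeris (illustrative sketch of the idea behind the AFP approach)."""
    chunks = []
    for start in np.arange(time.min(), time.max(), season):
        sel = (time >= start) & (time < start + season)
        if sel.sum() < 50:            # require reasonable phase coverage
            continue
        phase = ((time[sel] - t0) / period) % 1.0
        chunks.append((time[sel].mean(), phase, mag[sel]))
    # The eclipse phase shift measured in each folded chunk is then converted
    # into one eclipse time per season for the ETV diagram.
    return chunks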
We started with the simplest assumption, namely that the A and B binaries contribute equally to the total light curve (i.e. using a third light of L_3 = 50%); this parameter could then also be kept free for fitting. The sum of the luminosities of both pairs, L_A + L_B, should give 1.00 (or 100%), but in reality the total luminosity is usually above 1.0, simply because of additional light from nearby sources falling within the large TESS pixels.
§ RESULTS
In this section, we focus on the individual systems presented in our analysis in greater detail.
§.§ ASASSN-V J102911.57-522413.6
The first star in our study, ASASSN-V J102911.57-522413.6, is located in the constellation of Vela. This star was discovered as a variable using the photometric data from the ASAS-SN survey <cit.>. However, these authors only detected the shorter and more pronounced period of 0.573 days in their data. The star was later classified as doubly eclipsing in our recent study <cit.>, with another periodicity of about 3.79 days found in the data. Both binaries show EA-type light curves, indicating detached orbits. This is more evident for pair B, which is clearly eccentric, with the secondary eclipse located at phase 0.58 with respect to the primary one. No other detailed study of the star has been published since then, and a spectroscopic analysis is also missing. The only available Gaia spectrum <cit.> is of poor quality, mainly due to the star's low brightness (it is a 13th-magnitude star).
We chose the best available light curve for the system, namely, the TESS one from sector 37. These
data were analysed in PHOEBE, resulting in the parameters given in Table <ref>, and
the fits of the LCs for both the A and B pairs are shown in Figure <ref>. As we can see, there is an asymmetry in the LC of pair A: near the quadratures of the orbit, the LC has different brightness levels. Such behaviour is usually explained by the presence of surface spots. We did not try to fit them; we only wish to point out this peculiar aspect of the system. Due to the significant out-of-eclipse variations, the mass ratio was also fitted as a free parameter for pair A, while for the very detached system B it was kept fixed during the fitting process. We found quite a significant eccentricity for pair B, namely about e = 0.127.
To study the long-term evolution of the orbital periods of both pairs, we collected the available photometric data for the system spanning several years into the past (mainly the ASAS-SN survey). However, some of the data do not show any photometric variation at all (e.g. ASAS or the old digitized photographic plates from the DASCH project), owing to their scatter and the low amplitude of the photometric variations of both pairs. These data were complemented with our new
dedicated observations of the system from two observing sites. First, we observed the star on La Silla using the Danish 1.54-m telescope equipped with a CCD camera and a standard R filter. Second, on several other nights, the target was also observed using the 30-cm FRAM telescope located in Argentina at the Pierre Auger Observatory. We used these data only for deriving the times of eclipses for detecting the ETV in both pairs. The result of this fitting is
plotted in Figure <ref>. However, our current data are still too limited and cover only part of the orbit. Hence, we decided to use a simpler description of the orbit, assuming zero eccentricity. To derive the correct eccentricity as well, the time base of our data needs to be expanded and/or new observations of much better quality obtained. The fitting led to a period of about nine years. We also find that the ETV amplitude of pair B is slightly higher than that of pair A. This result is in good agreement with our LC modelling, which shows that pair A is the more dominant in luminosity, indicating its higher mass as well.
Using the parallax of the system given by <cit.>, 0.711 mas (i.e. a distance of about 1400 pc), the predicted angular separation of the two binaries should be about 10 mas. Unfortunately, with such a small angular distance we cannot hope to resolve the double; even with the speckle interferometry technique it is not possible, due to the low brightness of the star. Thus, only new observations in the coming years will be able to prove our hypothesis more definitively and to derive the eccentricity of the orbit as well.
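This expected separation follows from Kepler's third law and the Gaia distance; the short sketch below reproduces the order of magnitude, where the assumed total mass of the four stars is not taken from the paper and is only a plausible guess.

# Rough estimate of the expected A-B angular separation from Kepler's third law.
P_out_yr = 9.3            # outer period (yr), from the ETV fit
parallax_mas = 0.711      # Gaia parallax (mas)
M_tot_sun = 8.0           # assumed total mass of the four stars (Msun) - a guess

a_au = (M_tot_sun * P_out_yr**2) ** (1.0 / 3.0)   # outer semi-major axis (AU)
d_pc = 1000.0 / parallax_mas                      # distance (pc)
theta_mas = 1000.0 * a_au / d_pc                  # angular separation (mas)
print(f"a ~ {a_au:.1f} AU, d ~ {d_pc:.0f} pc, theta ~ {theta_mas:.0f} mas")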
§.§ V1037 Her
The second star included in our set is V1037 Her, which was discovered to be a variable star by <cit.>, who, however, reported an incorrect period for it. It was later found that the dominant variation shows a periodicity of 0.78758 days, with a significant light curve of detached eclipsing type and deep eclipses of about 0.3 magnitudes. Over the last two decades, the star was observed several times by amateur astronomers, who derived a few precise times of eclipses of this binary. Quite surprisingly, nobody noticed that there is also an additional variation with a period of about 5.8 days, also showing rather deep eclipses of more than 0.1 magnitude (despite the fact that the eclipse is clearly visible in older photometric data from various surveys as well; see more details below).
As in the previous case, we used the TESS data for the LC modelling. Our final fit of both LCs is shown in Fig. <ref>. As we can see, the secondary eclipses of pair B are only shallow, but clearly detectable in the TESS data. On the other hand, pair A shows a significant asymmetry, which is attributed to stellar spots. We adopted the hypothesis of one spot located on the primary component, cooler than the surrounding areas of the surface; with it, we were able to describe the asymmetry relatively well. Its parameters, as derived from sector 52, are as follows: spot latitude – 0.96 rad, longitude – 5.04 rad, radius – 0.16 rad, and temperature ratio –
0.66. However, when we compare these parameters with those of sector 25, we find that such a spot cannot properly describe the LC, as the asymmetry is different. Hence, an evolution of the spot parameters has to be taken into account when properly modelling the system. Both pairs are circular, but pair B has much more widely separated components, while the components of pair A are much closer to each other, which also produces a non-negligible ellipsoidal variation. Pair A is also the dominant pair in the system in terms of luminosity. The final LC parameters are given in Table <ref>.
Besides the TESS photometry, many data points were also collected from various older photometric databases and surveys in which both eclipsing periods are detectable: mainly the ASAS-SN survey, the SuperWASP survey <cit.>, and the ATLAS survey <cit.>. Apart from these publicly available data, we also observed the star over several nights with our own means. Our data were obtained at three different observatories: the first is the private observatory of R.U. in Jílové u Prahy, CZ, using small 34-mm and 150-mm aperture telescopes and a standard R filter; the second set was obtained by M.M. using the 25-cm FRAM telescope located on La Palma and a standard R filter; and the third is the private observatory of Z.H. in Veltěže u Loun, CZ. All these
data were then used in Fig. <ref>, where we can see the period variations of both the A and B pairs on their mutual orbit. Despite the rather long period of about 26 years, which is still not adequately covered by the data, we can confidently state that the system is bound and that both pairs orbit around each other. The high eccentricity causes the rapid period variation that is clearly visible near the periastron. From our orbital parameters and the distance to the system from Gaia (d = 374 pc), we can compute the predicted angular separation of the two doubles on the sky. This resulted in about 65 mas, which is much more favourable than in the previous case; however, it is still at the edge of possible detection for such a faint star (of about 12 magnitudes).
§.§ WISE J181904.2+241243
The next system studied here is WISE J181904.2+241243, which was found to be an eclipsing binary candidate by the ATLAS survey <cit.>. It shows a rather contact-like dominant pair A, with a periodicity of about 0.367 days and eclipses about 0.3 mag deep; in addition, there is a weaker variation from pair B, which shows a slightly more detached configuration with a period of about 0.419 days. The star was not previously known as doubly eclipsing, and this is the first publication showing its true nature. WISE J181904.2+241243 definitely exhibits the most contact-like configuration (of both pairs) among the studied systems.
For the LC modelling, we used the TESS data from sector 40, where both eclipsing pairs are clearly
visible. Pair A turned out to be in a contact configuration with a W UMa-type light curve. We also tried to fit the mass ratio of this pair, since the ellipsoidal variations are large. The fractional luminosities indicate that pair A is the dominant one. However, pair B shows a slight asymmetry of its light curve, which moreover changes between different sectors of data, making the whole analysis of its period changes more challenging. The results of the LC fitting are given in Fig. <ref>, while its parameters are given in Table <ref>.
Concerning its period changes, we primarily used the TESS photometry for deriving the eclipse
times of both pairs. In addition, other databases such as ATLAS, ASAS-SN, and SuperWASP were
used. In particular, for pair A, these also provide us with quite precise estimates of times of
eclipses; however, for pair B, due to its shallow eclipses, only more scattered datapoints were
derived. Figure <ref> displays the result of the combined ETV fitting of both
pairs, while its parameters are given in Table <ref>. As we can see, pair B resulted in a lower ETV amplitude, indicating a higher mass than that of pair A. This is in contradiction with the luminosity ratios resulting from the LC analysis. We have no clear explanation for such a discrepancy. The most recent observations of both pairs seem to indicate some deviation from our predicted light-time effect fit, so there is still a possibility that the overall orbit is different from our presented solution. However, with the available data points, we were not able to find a more suitable solution (even using quadratic ephemerides). Further observations in the future should resolve this question.
§.§ V2894 Cyg
Another system considered in our study is V2894 Cyg, which is the brightest star in our sample. Thanks to its brightness, it was also classified as a B5 star by <cit.>. The identification of the star has sometimes been confused with the nearby star HD 227245, which has a similar brightness; however, we believe that the B5 spectral type belongs to our target. The star was first detected as doubly eclipsing by <cit.>, who gave its two eclipsing periods as 1.306 and 2.575 days. The star is also probably a member of the open Galactic cluster [FSR2007] 0198 <cit.>.
We studied the star mainly using the TESS data. The LCs of both pairs were analysed, resulting in the following picture. Pair A shows evident asymmetries, likely caused by photospheric spots. Moreover, this pair is also eccentric and shows significant apsidal motion. Such a movement of the apsides is visible even over the time span of the TESS data; hence, this effect has to be taken into account for a proper analysis. The results of our LC fitting are given in Table <ref>, while the final plots are shown in Fig. <ref>.
Due to an insufficient amount of data, we only applied a simplified approach with a circular outer orbit; the fit is plotted in Fig. <ref> and the resulting parameters are given in Table <ref>. From the long-term variation of the times of eclipses, we found that the apsidal motion has a period of about 46 years and that the eccentricity of the orbit is 0.155. Such a value is in very good agreement with the value of 0.161 resulting from the LC fitting. The plot shown in Fig. <ref> for pair A is drawn after subtraction of the long-term apsidal motion, showing only the contribution of the mutual orbit around the common barycenter. The mutual movement shows that the orbit is relatively long, with a period of more than 27 years; hence, only part of it is covered by the observations. New data in the coming years are therefore needed for a better derivation of its orbital parameters.
§.§ NSVS 5725040
The next stellar system we studied is NSVS 5725040, which has not previously been mentioned as a doubly eclipsing system; hence, it can be considered a novel discovery. It is also the second brightest star in our sample and, thanks to its brightness, it also has a spectrum taken by the LAMOST survey <cit.>, which clearly shows a two-component feature. It was also classified as a B1V star by <cit.>, but no other detailed information about this system is available.
Using the TESS photometry, we arrived at the following picture. The more dominant pair A shows an evidently eccentric orbit and also fast apsidal motion. With an eccentricity of about 0.14 and an orbital period of 1.79 days, it is among the systems with the highest eccentricities for periods shorter than 2 days. The much shallower pair B has an orbital period even shorter than that of pair A, but shows a circular orbit. All the LC parameters are given in Table <ref>, while the shape of the LC fit can be seen in Fig. <ref>. It is obvious that pair A strongly dominates the luminosity, which is also why pair B has such a low photometric amplitude of its eclipses, at the level of only 0.01 mag.
Concerning the period changes and the ETV analysis, the analysis proved quite problematic due to the very shallow eclipses of pair B, which is almost undetectable in photometric data other than TESS. There are also some older data going back to the beginning of the 20th century, but these are almost useless for our analysis. However, the period changes of both pairs are clearly visible even using only the TESS data, thanks to the short period of the mutual A-B orbit. The result of this analysis is that the eccentric orbit of pair A shows significant apsidal motion with a period of only about 23.9 years, which makes it one of the fastest apsidal-motion systems detected so far. In addition, the eccentricity of orbit A came out as 0.140, in excellent agreement with the eccentricity derived from the LC fitting (0.141). The final fitting of the available data is shown in Fig. <ref>. One remarkable consequence of our
result is the finding that the ratio of periods p_AB^2/p_A amounts to about 1400 years. This ratio indicates the timescale of the long-period dynamical interactions between the inner and outer orbits. For example, the period of nodal precession (if any, i.e. only in the case of non-coplanar orbits) should be of the same order; that is, we should expect some variation of the inclination of pair A over a century of precise observations.
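This timescale follows directly from the tabulated periods; a quick check using only the values quoted in the abstract is given below.

# Dynamical interaction timescale P_AB^2 / P_A for NSVS 5725040.
P_A_days = 1.79368                  # inner period of pair A (days)
P_AB_days = 2.6 * 365.25            # outer mutual period (~2.6 yr, in days)

ratio_years = P_AB_days**2 / P_A_days / 365.25
print(f"P_AB^2 / P_A ~ {ratio_years:.0f} yr")   # ~1.4e3 yr, as quoted in the text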
§.§ WISE J210230.8+610816
The star WISE J210230.8+610816 was detected as a doubly eclipsing system in our recent study
<cit.>. The dominant pair A has a period of about 1.84 days and an amplitude of about 0.2 mag, clearly showing a light curve of detached configuration, while the shallower pair B has an amplitude of only about 0.06 mag and, with its period of about 0.57 days, shows a contact-binary-type light curve. It is the faintest star in our sample and thus no other detailed study of the star has been published yet. The spectrum of the star is also not available (apart from the relatively poor Gaia spectrum).
We used mainly the TESS photometry to derive its basic physical and orbital properties. The star
was observed in six sectors and both sets of eclipses are clearly visible there. In our Fig.
<ref>, we present the final fits of both pairs resulting from the PHOEBE program. The parameters are given in Table <ref>, where one can clearly see that pair A is the dominant one. For this reason, the variations of pair B are quite difficult to detect in other photometric databases and surveys. Pair B seems to be almost in contact, while pair A is detached.
The study of the period variations of both eclipsing pairs resulted in the following picture. The mutual orbit is eccentric, with a period of only about 2.2 years, making it the fastest quadruple in our sample. The ETV amplitude of pair B seems to be about two times larger; hence, its mass should be about half that of pair A. Our result is in very good agreement with an independent finding by <cit.>, who analysed the Gaia and TESS data and gave an outer period value of 843.88 ± 22.78 days. However, their study did not take into account the quadruple nature of the system, since they did not detect the eclipses of pair B in their data. Moreover, our finding is supported by a much larger dataset spanning a longer time interval than theirs. Surprisingly, the ratio of periods here, p_AB^2/p_A, amounts to an even lower value than for the previous system, namely only about 980 years. Hence, we can hope to detect some inclination changes during the upcoming decades, in the case of non-coplanar orbits.
§.§ ZTF J220518.78+592642.1
The last system in our compilation is ZTF J220518.78+592642.1, which was first detected as a
doubly eclipsing system independently by <cit.> and
<cit.>. The very dominant pair A has a period of about 2.8 d and deep eclipses of about 0.2 mag, while the eclipses of pair B are only about 0.03 mag deep, with an approximately 3.3-d orbital period. Besides the single Gaia spectrum, which clearly shows a double-lined profile, there are no other spectra available for this star. With its inferred distance of about 7 kpc, this is the most distant object among our stars.
Analysing its TESS light curves, we obtained the following results (the parameters of the fit are given in Table <ref>). Both light curves are shown in Fig. <ref>. As we can see, we are dealing with a very dominant pair A (in terms of luminosity), while pair B contributes only a few percent. Moreover, from the shape of the LC of pair B, we see that it contains two components with rather different temperatures. Pair A also shows a slight asymmetry of its LC at the quadratures.
Collecting all the available older measurements of the star, we carried out a long-term period variation analysis. Detecting pair B is problematic due to its shallow eclipses. However, our data clearly show the period variations, with the two ETVs behaving in an opposite manner. The final parameters are given in Table <ref>, while the fit is plotted in Fig. <ref>. We find that pair B has the higher ETV amplitude, but it is lower than we would generally expect from the luminosity ratio derived from the LC fitting. We leave this as an open question, since the coverage of the ETV variation for pair B is still very poor and the true amplitude may be larger than our current fit indicates.
§ DISCUSSION AND CONCLUSIONS
We performed the first detailed analysis of seven multiple systems that are proven here to be bound quadruples of 2+2 architecture. We were able to detect the period variations of both pairs in these systems thanks to the collection of photometric observations spanning back several decades. Despite the fact that their outer mutual periods are sometimes too long for our data to fully describe the orbits, our analysis clearly shows the ETVs of the A and B pairs behaving in an opposite manner.
Besides the ETVs of both pairs, we also detected significant apsidal motion in several binaries as a by-product. All of these fits are shown in Figure <ref>, where we plot only the apsidal motion fits after subtraction of the mutual movement of the individual pairs around their barycenters.
At present, the total number of doubly eclipsing systems definitely showing two sets of eclipses is more than 350 (but still only a small fraction of them have been proven to be real bound 2+2 quadruples). As in our previous studies, we plotted the period ratios of all these systems. We complemented the known doubly eclipsing systems with other 2+2 quadruples from the literature and from the MSC catalog <cit.>. This can be seen in the upper part of Figure <ref>, where the new systems from the present study are shown alongside other candidates (with two eclipsing periods) that are still unpublished (awaiting publication in the near future). The number of systems in our sample has reached 450 in total. However, unlike our previous studies, we also divided the set into two subgroups: first, the systems of earlier spectral types, having mostly radiative atmospheres, with T_eff>7000K and/or a Gaia photometric colour of (B_p-R_p)<0.45; and second, the systems with convective atmospheres of later spectral types, with T_eff<7000K. These two subgroups are shown in the lower panels. Surprisingly, the suspicious peak near the 3:2 mean motion resonance is preferentially seen only in the subgroup of hotter stars with T_eff>7000K. The question of whether this indicates some deeper physical reason or is just a coincidence resulting from small-number statistics remains open. Finally, we recommend that new systems be added to extend the statistics and the samples in both of these groups.
We thank the ASAS, SuperWASP, ZTF, ASAS-SN, and TESS teams for making all of the observations easily publicly available.
The research of P.Z. was supported by the project Cooperatio - Physics of Charles University in Prague.
We are also grateful to the ESO team at the La Silla Observatory for their help in maintaining and operating the Danish telescope.
This work has made use of data from the European Space Agency (ESA) mission Gaia (<https://www.cosmos.esa.int/gaia>), processed by the Gaia
Data Processing and Analysis Consortium (DPAC,
<https://www.cosmos.esa.int/web/gaia/dpac/consortium>). Funding for the DPAC has been provided
by national institutions, in particular the institutions participating in the Gaia
Multilateral Agreement. We would also like to thank the Pierre Auger Collaboration for the use of
its facilities. The operation of the robotic telescope FRAM is supported by the grant of the
Ministry of Education of the Czech Republic LM2023032. The data calibration and analysis related
to the FRAM telescope is supported by the Ministry of Education of the Czech Republic MSMT-CR
LTT18004, MSMT/EU funds CZ.02.1.01/0.0/0.0/16_013/0001402 and
CZ.02.1.01/0.0/0.0/18_046/0016010. This work is supported by MEYS (Czech Republic) under the
projects MEYS LM2023047, LTT17006 and EU/MEYS CZ.02.1.01/0.0/0.0/16_013/0001403 and
CZ.02.1.01/0.0/0.0/18_046/0016007.
The research of P.Z., J.K., and J.M. was also supported by the project Cooperatio - Physics of Charles University in Prague.
The observations by Z.H. in Veltěže were obtained with a CCD camera kindly borrowed by the Variable Star and Exoplanet Section of the Czech Astronomical Society.
This research made use of Lightkurve, a Python package for TESS data analysis <cit.>.
This research has made use of the SIMBAD and VIZIER databases, operated at CDS, Strasbourg, France and of NASA Astrophysics Data System Bibliographic Services.
[Aab et al.(2021)]2021JInst..16P6027A Aab, A., Abreu, P., Aglietta, M., et al. 2021, Journal of Instrumentation, 16, P06027. doi:10.1088/1748-0221/16/06/P06027
[Akerlof et al.(2000)]2000AJ....119.1901A Akerlof, C., Amrose, S., Balsano, R., et al. 2000, , 119, 1901. doi:10.1086/301321
[Annear(1953)]1953ApJ...118...77A Annear, P. R. 1953, , 118, 77. doi:10.1086/145728
[Borkovits et al.(2018)]2018MNRAS.478.5135B Borkovits, T., Albrecht, S., Rappaport, S., et al. 2018, , 478, 5135. doi:10.1093/mnras/sty1386
[Borkovits et al.(2021)]2021MNRAS.503.3759B Borkovits, T., Rappaport, S. A., Maxted, P. F. L., et al. 2021, , 503, 3759. doi:10.1093/mnras/stab621
[Borkovits(2022)]2022Galax..10....9B Borkovits, T. 2022, Galaxies, 10, 9. doi:10.3390/galaxies10010009
[Cantat-Gaudin & Anders(2020)]2020A A...633A..99C Cantat-Gaudin, T. & Anders, F. 2020, , 633, A99. doi:10.1051/0004-6361/201936691
|
http://arxiv.org/abs/2306.02737v1
|
20230605093300
|
Comparative analysis of the existence and uniqueness conditions of parameter estimation in paired comparison models
|
[
"László Gyarmati",
"Éva Orbán-Mihálykó",
"Csaba Mihálykó"
] |
math.ST
|
[
"math.ST",
"math.OC",
"stat.TH"
] |
^1 Department of Mathematics, University of Pannonia, 8200 Veszprém, Hungary
^* Corresponding author, Department of Mathematics, University of Pannonia, 8200 Veszprém, Egyetem u. 10., Hungary; Email: [email protected], +3688624000/6109
Email: [email protected],
[email protected], [email protected]
In this paper, paired comparison models with a stochastic background are investigated. We focus on models that allow three options for choice, with the parameters estimated by the maximum likelihood method. The existence and uniqueness of the estimator is a key issue of the evaluation. In the case of two options, a necessary and sufficient condition is given by Ford for the Bradley-Terry model. We generalize this statement to the set of strictly log-concave distributions. Although no necessary and sufficient condition is known in the case of three options, two different sufficient conditions have been formulated in the literature. In this paper we generalize them and compare the conditions. Their capacities to indicate the existence of the maximum are analyzed by a large number of computer simulations. These simulations support that the new condition indicates the existence of the maximum much more frequently than the previously known ones.
Keywords: Bradley-Terry model; maximum likelihood estimation; paired comparison; sufficient conditions; Thurstone model
§ INTRODUCTION
Comparisons in pairs are frequently used in ranking and rating problems. They are mainly applied when scaling is very uncertain, but comparing the objects to one another can guarantee more definite results. The area of possible applications is extremely large; some examples are the following: education <cit.>, sports <cit.>, information retrieval <cit.>, energy supply <cit.>,
financial sector <cit.>, management <cit.>.
The most popular method is AHP (Analytic Hierarchy Process), elaborated by Saaty <cit.> and developed by others; see, for example, the detailed literature in <cit.>. The method has many advantages: more than two options, several methods for evaluation, the possibility of incomplete comparisons, a simple condition for the uniqueness of the evaluation <cit.>, the possibility of multi-level decisions <cit.>, and the concept of consistency <cit.>. Nevertheless, due to the lack of a stochastic background, the usual statistical tools, such as confidence intervals and hypothesis tests, are not available.
Fundamentally different models of paired comparisons are the Thurstone-motivated stochastic models. The basic concept is the idea of latent random variables, presented in <cit.>. Thurstone assumed Gauss distributed latent random variables and allowed two options in decisions, "worse" and "better". The method was later modified: the Gauss distribution was replaced by the logistic distribution in <cit.>, and the resulting model is called the Bradley-Terry model (BTM). One of its main advantages is its simple mathematical formulae. Thurstone applied the least squares method for parameter estimation, whereas the BTM applies maximum likelihood estimation, and the uncomplicated formulae allow quick numerical methods for solving the optimization problems.
The existence and uniqueness of the optimizer is a key issue for ML estimation; a necessary and sufficient condition for it is proved in <cit.>.
The model was generalized to three options ("worse", "equal" and "better") in <cit.> for the Gauss distribution and in <cit.> for the logistic distribution. The latter paper applied maximum likelihood parameter estimation. Davidson made further modifications to the model concerning ties in <cit.>. For more than three options, generalizations can be found in <cit.> for the Bradley-Terry model and in <cit.> for the Gauss distribution. In <cit.> it was proved that the models require the same conditions in order to evaluate the data uniquely for a broad set of cumulative distribution functions of the latent random variables: the strictly log-concave property of the probability density function is the crucial point for uniqueness, while the assurance of existence is hidden in the data structure. We mention that the Gauss distribution and the logistic distribution are included in the set of distributions having a strictly log-concave probability density function. Note that, due to the probabilistic background, the Thurstone-motivated models allow building in the home-field or first-mover advantage <cit.>, testing hypotheses <cit.>, and making forecasts <cit.>; therefore, they are worth investigating.
In <cit.>, the author analyzes the structure of the comparisons, allowing both two and three options in choice. The author emphasizes that not only the structure of the graph formed by the compared pairs but also the results of the comparisons affect the existence of the MLE. He applies data perturbations in cases where comparisons exist but some outcomes do not occur. Through these perturbations, the zero data values become positive, and these positive values guarantee the strongly connected property of the directed graph constructed from the wins. However, these perturbations modify the data structure; therefore, it would be better to avoid them.
In <cit.>, the authors investigate the BTM with two options and provide estimations for the probability of the existence of the MLE. The authors turn to the condition of Ford to check whether the MLE exists uniquely or not. As Ford's condition is a necessary and sufficient condition, it indicates explicitly whether the MLE works or not. However, in the case of other distributions and/or more than two options, these investigations could not be performed due to the lack of a necessary and sufficient condition for the existence and uniqueness of the MLE.
To continue this line of research, it would be conducive to have a (necessary and) sufficient condition for the existence and uniqueness. To the best knowledge of the authors, there is no such theorem in the research literature; only two sufficient conditions are known. In this paper we compare the known conditions, formulate their generalization,
and prove it. Then, we compare the applicability of the different conditions from the following point of view: how often and for which parameter settings they are able to indicate the existence and uniqueness of the MLE. We perform a large number of computer simulations and use them to answer these questions.
The paper is organised as follows: In Section <ref> the investigated model is described. In Section <ref> we present new conditions under which the existence and uniqueness is fulfilled. The proof can be found in Appendix A. In Section <ref> the simulation results concerning the applicability are presented. Finally a short summary is given.
§ THE INVESTIGATED MODEL
Let the number of the different objects to evaluate be denoted by n, and
let the objects be referred to as 1,2,...,n. We want to evaluate them on the basis of the opinions of some persons called observers.
Let us denote the latent random variable belonging to the i^th object by
ξ_i, i=1,2,...,n. Let the number of the options in a choice be s=3, namely "worse", "equal" and "better", denoted by C_1, C_2 and C_3. We split the set of the real
numbers ℝ into 3 intervals, which have no common elements. Each option in judgment corresponds to an interval in the
real line, the correspondence is noted by the same index. If the judgment
between the i^th and j^th objects is the option C_k, then we
assume that the difference ξ_i-ξ_j of the latent random variables
ξ_i and ξ_j is in the interval I_k, k=1,2,3.
The intervals are determined by their initial points and endpoints, which are -∞, -d, d and ∞, I_1=(-∞,-d), I_2=[-d,d] and I_3=(d,∞).
The above intervals together with the corresponding options are presented in Figure <ref>.
We can write the differences of the latent random variables in the following form:
ξ_i-ξ_j=m_i-m_j+η_i,j, i=1,...,n, j=1,...,n, i≠ j.
Now
E(ξ_i)=m_i
and η _i,j are identically distributed random variables with expectation 0. The ranking of the expectations determines the ranking of the objects and the differences in their values give information concerning the differences of the strengths. We want to estimate the expectations and the value of the border of "equal" (d) on the basis of the data. For that we use maximum likelihood estimation.
The probabilities of the
events can be computed on the basis of the assumptions concerning the distributions of η_i,j as follows:
P(ξ _i-ξ _j∈ I_1)=P(ξ _i-ξ _j<-d)=F(-d-(m_i-m_j))
P(ξ _i-ξ _j∈ I_2)=P(-d<=ξ _i-ξ _j<=d)=F(d-(m_i-m_j))-F(-d-(m_i-m_j))
P(ξ _i-ξ _j∈ I_3)=P(d<ξ _i-ξ _j)=1-F(d-(m_i-m_j))
where F is the (common) cumulative distribution function (c.d.f) of η_i,j.
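For concreteness, these outcome probabilities are easy to evaluate once F is specified. The short Python sketch below does so for the logistic c.d.f. (the Bradley-Terry-type member of the model family); the function names are illustrative and not taken from the paper.

    import numpy as np

    def logistic_cdf(x):
        # c.d.f. of the standard logistic distribution, a member of the set of admissible F
        return 1.0 / (1.0 + np.exp(-x))

    def outcome_probabilities(m_i, m_j, d, F=logistic_cdf):
        # probabilities of "worse", "equal" and "better" for the pair (i, j),
        # i.e. P(xi_i - xi_j in I_k) with I_1 = (-inf, -d), I_2 = [-d, d], I_3 = (d, inf)
        p_worse = F(-d - (m_i - m_j))
        p_equal = F(d - (m_i - m_j)) - F(-d - (m_i - m_j))
        p_better = 1.0 - F(d - (m_i - m_j))
        return p_worse, p_equal, p_better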
Let the number of observers be r. The judgment produced by the u^th
observer (u=1,2,...,r) concerning the comparison of the i^th and the
j^th objects is encoded by the elements of a 4 dimensional matrix which has
only 0 and 1 entries, depending on the choice of the respondent. The third index corresponds to the options in choice: k=1,2,3 stand for the judgments "worse", "equal", and "better", respectively. Let the matrix of all judgments be X, having 4 dimensions,
i=1,2,...,n, j=1,2,...,n, k=1, 2, 3, u=1,2,...,r and
X_i,j,k,u = 1, if the opinion of the u^th observer in the comparison of the i^th and the j^th objects is C_k, and X_i,j,k,u = 0 otherwise.
Let X_i,i,k,u=0. Of course, due to the symmetry, X_i,j,k,u=X_j,i,4-k,u. It expresses that if the i^th object is "better" than the j^th object, then the j^th object is "worse" than the i^th object, according to the judgment of the u^th respondent.
Let A_i,j,k=∑_u=1^rX_i,j,k,u be the number of observations
C_k in pursuance of the comparison of the i^th and the j^th
objects and let A denote the three dimensional matrix containing the
elements A_i,j,k. Of course, A_i,j,k=A_j,i,4-k.
The likelihood function expresses the probability of the
sample as a function of the parameters. Assuming independent judgments, the likelihood function is
L(X|m_1,m_2,...,m_n,d) = ∏_k=1^3 ∏_i=1^n-1 ∏_j=i+1^n ( P(ξ_i-ξ_j∈ I_k) )^A_i,j,k,
which has to be maximized in m=(m_1,...,m_n) and 0<d.
One can realize that the likelihood function depends on the differences of the parameters m_i, therefore, one of them can be fixed.
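As an illustration of how this maximization can be carried out numerically, the sketch below builds the log-likelihood from the data matrix A (with A[i,j,k] storing A_i,j,k+1 in 0-based indexing) and maximizes it with scipy, fixing m_1=0 and enforcing 0<d through a log-parametrization. It is only one possible implementation under a logistic F, not the code used for the simulations reported later.

    import numpy as np
    from scipy.optimize import minimize

    def logistic_cdf(x):
        return 1.0 / (1.0 + np.exp(-x))

    def neg_log_likelihood(theta, A, F=logistic_cdf):
        # theta = (m_2, ..., m_n, log d); m_1 is fixed to 0 and d > 0 is enforced via exp
        n = A.shape[0]
        m = np.concatenate(([0.0], theta[:n - 1]))
        d = np.exp(theta[-1])
        ll = 0.0
        for i in range(n - 1):
            for j in range(i + 1, n):
                diff = m[i] - m[j]
                p = (F(-d - diff),                     # "worse"
                     F(d - diff) - F(-d - diff),       # "equal"
                     1.0 - F(d - diff))                # "better"
                for k in range(3):
                    if A[i, j, k] > 0:
                        ll += A[i, j, k] * np.log(p[k])
        return -ll

    def fit_mle(A):
        n = A.shape[0]
        theta0 = np.zeros(n)                           # m_2 = ... = m_n = 0, log d = 0 (d = 1)
        res = minimize(neg_log_likelihood, theta0, args=(A,), method="BFGS")
        m_hat = np.concatenate(([0.0], res.x[:n - 1]))
        d_hat = np.exp(res.x[-1])
        return m_hat, d_hat, res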
§ CONDITIONS FOR THE EXISTENCE AND UNIQUENESS
In <cit.>, the author presents a necessary and sufficient condition for the existence and uniqueness of the MLE if there are only two options for choice and F, the c.d.f. of η_i,j, is the logistic c.d.f. The condition is the following: for an arbitrary non-empty partition of the objects into S and its complement, there exists at least one element of S which is "better" than an element of the complement, and vice versa.
In <cit.>, the author states that this condition supplemented with the condition "there is at least one tie ("equal")" is enough for having a unique maximizer in a modified Bradley-Terry model. The theorem assumes logistic distribution, its proof uses this special form, therefore, it is valid only for the investigated special model. Now we prove it for a broad set of c.d.f.'s. We require the following properties: F is a c.d.f. with 0<F(x)<1, F is three times continuously differentiable, its probability density function f is symmetric and the logarithm of f is a strictly concave function in ℝ. Gauss and logistic distribution belong to this set, together with lots of others. Let us denote the set of these c.d.f.-s by 𝔽.
First we state the following generalization of Ford's theorem:
Let F∈𝔽 and suppose that there are only two options in choice. Fix the value of the parameter m_1=0. The necessary and sufficient condition for the existence and uniqueness of the MLE is the following: for an arbitrary non-empty partition of the objects into S and its complement, there exists at least one element of S which is "better" than an element of the complement, and vice versa.
The proof of sufficiency relies on the argumentation of Theorem <ref>, omitting the variable d. The steps used are (ST3), (ST5), and (ST6) in Appendix A. In the last step, the strictly concave property of logL can be concluded from the theory of logarithmic concave measures <cit.>. The necessity is obvious: if there were a partition without a "better" judgment from one subset to the other, then each element of this subset would be "worse" than the elements of the complement, but the measure of "worse" could not be estimated. The likelihood function would be monotone increasing; consequently, the maximum would not be reached.
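In practice, this partition condition is most easily verified through graph connectivity: it holds exactly when the directed graph that contains an edge i→j for every recorded "i better than j" judgment is strongly connected, the property already mentioned in the Introduction in connection with <cit.>. A minimal sketch with networkx (an illustrative check, not the authors' code):

    import numpy as np
    import networkx as nx

    def ford_condition_holds(wins):
        # wins[i, j] = number of judgments in which object i was preferred to object j
        n = wins.shape[0]
        G = nx.DiGraph()
        G.add_nodes_from(range(n))
        G.add_edges_from((i, j) for i in range(n) for j in range(n)
                         if i != j and wins[i, j] > 0)
        # the partition condition is equivalent to strong connectivity of this graph
        return nx.is_strongly_connected(G)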
Returning to the case of three options, we formulate conditions of Davidson in the followings:
There exists an index pair (i_1,j_1) for which 0<A_i_1,j_1,2.
For any non-empty partition of the objects into S and its complement, there exist at least two index pairs (i_2,j_2) and (i_3,j_3), with i_2,i_3 ∈ S and j_2,j_3 in the complement, for which 0<A_i_2,j_2,3 and 0<A_i_3,j_3,1.
Condition DC <ref> expresses that there is a judgment "equal". Condition DC <ref> coincides with the condition of Ford in <cit.> in the case of two options. It expresses that there is at least one object in both subsets which is "better" than an object in the complement.
Let F∈𝔽. If conditions DC <ref> and DC <ref> hold, then, fixing m_1=0, the likelihood function (<ref>) attains its maximal value and its argument is unique.
Theorem <ref> is the consequence of a more general statement, Theorem <ref>, which will be proved in Appendix <ref>.
Now we turn to another set of conditions which guarantees the existence and uniqueness of MLE. These conditions will be abbreviated by the initial letters MC.
There is at least one index pair (i_1,j_1) for which
0<A_i_1,j_1,2
holds.
There is at least one index pair (i_2,j_2) for which
0<A_i_2,j_2,1 and 0<A_i_2,j_2,3.
Let us define the graph G^(M) as follows: the nodes are the objects to be compared. There is an edge between two nodes i and j, if
0<A_i,j,2 or (0<A_i,j,1 and 0<A_i,j,3)
hold.
Graph G^(M) is connected.
<cit.>
Let F∈𝔽. If conditions MC <ref>, MC <ref> and MC <ref> hold, then, after fixing m_1=0, the likelihood function (<ref>) attains its maximal value and the argument of the maximum is unique.
To clarify the relationship between conditions DC <ref>, DC <ref> and MC <ref>, MC <ref>, MC <ref>, we present two examples.
In Example <ref>, DC <ref> and DC <ref> are satisfied but MC <ref> and MC <ref> are not. In Example <ref>, DC <ref> is not satisfied but MC <ref>, MC <ref>, MC <ref> are. These examples show that the sets of conditions DC and MC do not cover each other. Moreover, they show that the MLE may exist uniquely even if DC <ref> and DC <ref>, or MC <ref>, MC <ref> and MC <ref>, do not hold. Therefore, we can see that neither the conditions DC nor the conditions MC are necessary conditions.
Let n=3 and A_1,2,2=1, A_1,2,3=1, A_2,3,3=1, A_1,3,1=1 (see Figure <ref>). Now both DC <ref> and DC <ref> hold, but MC <ref> does not.
Let n=3 and A_1,2,1=1, A_1,2,3=1, A_2,3,2=1 (see Figure <ref>). Now one can easily check that MC <ref>, MC <ref> and MC <ref> hold but DC <ref> does not.
The above theorems can be generalized. Let us introduce the following set of conditions denoted by SC:
There is at least one index pair (i_1,j_1) for which
0<A_i_1,j_1,2 holds.
Let us introduce a graph belonging to the results of the comparisons as follows: let DG^(SC) be a directed graph, the nodes are the objects, and there is a directed edge from i to j if there is an opinion according to which i is "better" than j, that is 0<A_i,j,3.
Now we can formulate the following conditions:
There is a cycle in the directed graph DG^(SC).
For any non-empty partition of the objects into S and its complement, there exist at least two (not necessarily different) index pairs (i_2,j_2) and (i_3,j_3), with i_2,i_3 ∈ S and j_2,j_3 in the complement, for which 0<A_i_2,j_2,3 and 0<A_i_3,j_3,1, or there exists an index pair (i_4,j_4), with i_4 ∈ S and j_4 in the complement, for which 0<A_i_4,j_4,2.
It is easy to see that condition SC <ref> is more general than condition MC <ref> and condition SC <ref> is more general than condition DC <ref>. Condition SC <ref> expresses that any subset and its complement are interconnected by an opinion "better" or an opinion "equal". Here Condition DC <ref> is replaced by a more general condition: next to "better" the opinion "equal" can also be appropriate judgment for connection.
To analyse the relationships between the sets of conditions DC, MC and SC we can recognize that
(A) DC <ref>, MC <ref> and SC <ref> coincide.
(B) If DC <ref> holds, then so does SC <ref> and SC <ref>.
(C) If MC <ref> holds, so does SC <ref>.
(D) If MC <ref> holds, so does SC <ref>.
These together present that conditions SC <ref>, SC <ref>, and SC <ref>
are the generalization of the conditions DC and MC.
To show that SC is really a more general set of conditions we present Example <ref>.
Let n=4, A_1,2,3=1, A_2,3,3=1, A_1,3,1=1 and A_1,4,2=1 (see Figure <ref>). In this case neither condition DC <ref> nor MC <ref> hold, but SC <ref>, SC <ref> and SC <ref> do.
Now we state the following theorem.
Let F∈𝔽. If conditions SC <ref>, SC <ref> and SC <ref> hold, then, after fixing m_1=0, the likelihood function (<ref>) attains its maximum value and its argument is unique.
The proof of Theorem <ref> can be found in Appendix <ref>.
We note that Theorem <ref> is a straightforward consequence of Theorem <ref>.
Unfortunately, conditions SC <ref>, SC <ref> and SC <ref> are not necessary conditions. One can prove that in the case of Example <ref> there exists a unique maximizer of function (<ref>) but SC <ref> does not hold.
Let n=3, A_1,2,3=1, A_2,3,3=1 and A_1,3,2=1 (see Figure <ref>).
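Before comparing the conditions empirically, we note that all three sets of conditions can be checked mechanically from the data matrix A. The sketch below does so with networkx, using, for the partition-type conditions DC <ref> and SC <ref>, their reformulation as strong connectivity of the corresponding directed graphs (for SC <ref> this equivalence is recalled in step (ST3) of the Appendix). It assumes the 0-based option indexing k=0,1,2 for "worse", "equal", "better" and is illustrative rather than the code used for the simulations.

    import numpy as np
    import networkx as nx

    def check_conditions(A):
        # A[i, j, k]: counts of "i worse than j" (k=0), "equal" (k=1), "i better than j" (k=2)
        n = A.shape[0]
        tie_exists = bool((A[:, :, 1] > 0).any())            # DC1 = MC1 = SC1

        wins = nx.DiGraph()                                   # DG^(SC): edge i -> j if i beat j at least once
        wins.add_nodes_from(range(n))
        g_m = nx.Graph()                                      # graph G^(M) of the conditions MC
        g_m.add_nodes_from(range(n))
        g_sc = nx.DiGraph()                                   # graph G^(SC) used for condition SC3
        g_sc.add_nodes_from(range(n))
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                if A[i, j, 2] > 0:                            # "better": directed edge
                    wins.add_edge(i, j)
                    g_sc.add_edge(i, j)
                if A[i, j, 1] > 0:                            # "equal": edges in both directions
                    g_sc.add_edge(i, j)
                    g_sc.add_edge(j, i)
                if A[i, j, 1] > 0 or (A[i, j, 0] > 0 and A[i, j, 2] > 0):
                    g_m.add_edge(i, j)

        better_worse_pair = any(A[i, j, 0] > 0 and A[i, j, 2] > 0
                                for i in range(n) for j in range(n) if i != j)
        dc = tie_exists and nx.is_strongly_connected(wins)               # DC2 via strong connectivity
        mc = tie_exists and better_worse_pair and nx.is_connected(g_m)   # MC2 and MC3
        sc = (tie_exists
              and not nx.is_directed_acyclic_graph(wins)                 # SC2: a directed "better" cycle
              and nx.is_strongly_connected(g_sc))                        # SC3 via the (ST3) equivalence
        return {"DC": dc, "MC": mc, "SC": sc}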
§ COMPARISONS OF THE EFFICIENCY OF THE CONDITIONS
In this section, we investigate in some special situations which sets of conditions (conditions DC <ref>, DC <ref>; conditions MC <ref>, MC <ref>, MC <ref>; conditions SC <ref>, SC <ref>, SC <ref>) are fulfilled, i.e. are able to detect the existence and the uniqueness of the maximizer.
From the applications' perspective, there are such cases when the strengths of the objects to rank are close to each other and when they differ very much. On the other hand, there are such cases when the judgment "equal" is frequent, and such cases when it is rare. Referring to sports: in football and in chess the result draw comes up often, but in handball rarely.
The most general set of conditions is the set SC. These conditions are fulfilled most frequently from the three sets of conditions. Nevertheless, it is interesting to what extent it is more applicable than the other two sets of conditions. For that we made a large amount of computer simulations in the case of different parameter settings, and we investigated, how frequently the conditions are satisfied and how frequently we experience that the maximum exists.
We used Monte-Carlo simulation for the investigations. We fixed the differences between two expectations and the value of parameter d. This means that in our cases m=(0,h,2h,...,(n-1)h). We investigated 8 objects, and we generated randomly the pairs between which the comparisons exist. The number of comparisons was 8, 16, 32, 64. The results of the comparisons were also generated randomly, according to the probabilities (<ref>), (<ref>) and (<ref>).
In these random cases we checked whether conditions DC, MC, and SC are satisfied or not. Moreover we performed the numerical optimizations and we investigated whether the maximal value exists. We used 4 parameter ensembles, called situations, which are shown in Table <ref>.
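The simulation loop itself is straightforward; the following sketch reproduces its structure on a much smaller scale than the runs reported below, reusing outcome_probabilities and check_conditions from the earlier sketches. The uniformly random choice of compared pairs and the values of h and d used here are assumptions for illustration, not the exact settings of the situations.

    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_once(n=8, n_comparisons=16, h=0.4, d=0.5):
        # strengths m = (0, h, 2h, ..., (n-1)h); h and d are example values
        m = h * np.arange(n)
        A = np.zeros((n, n, 3), dtype=int)
        for _ in range(n_comparisons):
            i, j = rng.choice(n, size=2, replace=False)      # assumed: compared pairs drawn uniformly
            p = outcome_probabilities(m[i], m[j], d)          # (worse, equal, better)
            k = rng.choice(3, p=p)
            A[i, j, k] += 1
            A[j, i, 2 - k] += 1                               # symmetry A_i,j,k = A_j,i,4-k
        return A

    n_runs = 1000
    counts = {"DC": 0, "MC": 0, "SC": 0}
    for _ in range(n_runs):
        flags = check_conditions(simulate_once())
        for key in counts:
            counts[key] += flags[key]
    print({key: value / n_runs for key, value in counts.items()})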
In the presented situations, if the value of h is small, the strengths of the objects are close to each other. This implies that many "better-worse" pairs can be formed during the simulations. On the other hand, if the value of h is large, the strengths of the objects are far from each other, and we can expect only a few "better-worse" pairs but a great number of "better" judgments. In terms of the number of "equal" judgments, if d is large, many "equal" judgments can be formed during the simulations, while only a few when d is small. The set of conditions DC can make good use of the "better" judgments, but it requires only a single "equal" judgment. The set of conditions MC, in contrast, can use the "equal" judgments as well as the "better-worse" pairs for connections. The conditions SC do not require pairs, only "better" judgments arranged in one cycle. We recall that a single "better-worse" pair is appropriate as a cycle. The "equal" judgments are also well-applicable for this set of conditions.
Table <ref> summarizes the situations with the presumable ratios of the "equal" judgments and "better-worse" pairs. In addition, Tables <ref>, <ref>, <ref> and <ref> contain the numerical results of the simulations.
The situations are ordered by a decreasing number of cases in which the maximal value exists. Column MAX contains the number of cases when the maximum exists. Columns DC/MAX, MC/MAX and SC/MAX present the ratios of the cases when the sets of conditions DC, MC and SC hold, respectively. We can see that, increasing the number of comparisons, both the number of cases when the maximal value exists and the ratios increase. We draw attention to the fact that the values in the column SC/MAX are less than 1 on several occasions. This shows again that SC is not a necessary condition.
We performed 10^8 simulations per situation.
Table <ref> presents the results in Situation I. In this case we can see that the DC/MAX rate is lower than the MC/MAX rate. We could predict this, because there are lots of "equal" judgments. The SC/MAX rate is high even for 16 comparisons; in the case of 16 comparisons, SC is 3.5 times better than MC and over 100 times better than DC.
Table <ref> presents the results of Situation II. In this case, the rate of "equal" is low, which does not favour the set of conditions MC. This is also reflected in the ratio MC/MAX, which is much worse than the ratio DC/MAX. The set of conditions SC still stands out among the other conditions.
Table <ref> shows the results of Situation III. Here the maximal values exist more rarely than in the previous two cases. In this case the number of "equal" decisions is high, while the number of "better-worse" pairs is low, which is favorable for the set of conditions MC and disadvantageous for the set of conditions DC, as we can see in Table <ref>. It can also be seen that none of the methods is as good as in the previous tables in terms of detecting the existence of the maximum. SC stands out again from the other two sets of conditions. Nevertheless, SC is able to show the existence of the maximum only in 73% of the cases with 32 comparisons, compared to 99% in the previous situations. The set of conditions DC is almost useless: it is useful only in 3.3% of the cases even when the number of comparisons equals 64. The set of conditions MC is slowly catching up and getting better, but for small numbers of comparisons (8, 16, 32) it is far from the much better SC conditions.
Table <ref> presents the results in Situation IV. In the latter case, the numbers of "equal" choices and "better-worse" pairs are small, which is unfavorable primarily for MC. In this situation, SC detects the existence of the maximal value exceptionally well. DC detects it less reliably, but still works better than MC. Nevertheless, for small numbers of comparisons, both are orders of magnitude weaker than SC.
In all situations we have found that when we make few comparisons, SC is superior to the other conditions. As we make more and more comparisons, both other methods get better and better, but they are always worse than SC. The clear conclusion from the four tables is that the set of conditions SC is much more effective than the others, especially for small numbers of comparisons.
§ SUMMARY
In this paper, conditions guaranteeing the existence and uniqueness of the maximum likelihood parameter estimation are investigated. The case of a general log-concave probability density function is studied. If two options are allowed, the commonly applied Ford condition is generalized from the logistic distribution to a wide set of distributions. This condition is a necessary and sufficient one. In the case of three options in decision, a necessary and sufficient condition has not been proved, but there are two different sufficient conditions. We generalized them: a new set of conditions is proved which guarantees the existence and uniqueness of the maximizer. Moreover, we compared the conditions with the help of computer simulations and found that the new set of conditions indicates the existence and uniqueness much more frequently than the previously known conditions. Consequently, it provides a more effective tool for research such as that performed by Yan <cit.> and Bong and Rinaldo <cit.>.
The research offers possibilities for further developments. It would be desirable to establish a necessary and sufficient condition for the existence and uniqueness of the maximizer in the case of three options in choice, and simulations may help such findings. Further research is also necessary to investigate the case of more than three options. These will be the subject of a future paper.
Appendix
§ PROOF OF THEOREM <REF>
First we mention that instead of (<ref>), its logarithm, the log-likelihood function
log L(X|m_1,m_2,...,m_n,d) = ∑_k=1^3 ∑_i=1^n-1 ∑_j=i+1^n A_i,j,k · log P(ξ_i-ξ_j∈ I_k),
that is,
log L(X|m_1,m_2,...,m_n,d) = 0.5 · ∑_k=1^3 ∑_i=1^n ∑_j=1^n A_i,j,k · log P(ξ_i-ξ_j∈ I_k),
is maximized under the conditions 0<d and m_1=0. We prove that (<ref>) attains its maximal value under the conditions 0<d and m_1=0 and that the argument of the maximal value is unique.
The steps of the proof are denoted by (ST1), (ST2), (ST3), (ST4), (ST5) and (ST6).
Computing the value of the log-likelihood function at m=(0,0,...,0), d=1, and denoting this value by logL_0, the maximum has to be sought in those regions where the value of (<ref>) is at least logL_0. Moreover, we note that every term of the sum in (<ref>) is negative (or zero if A_i,j,k=0); consequently, the maximum cannot be attained in regions where any term is below logL_0. By investigating the limits of the terms, we will check which parameters can be restricted to closed bounded regions. The proof of the existence relies on the Weierstrass theorem: we restrict the range of d and m_2,...,m_n to closed bounded sets on which the continuous function (<ref>) attains its maximal value. For that, we prove some lemmas.
(ST1) The first step is to find a positive lower bound for the variable d.
Condition SC <ref> guarantees that the maximum can be attained if ε≤ d with an appropriate value of 0<ε.
SC <ref> guarantees that there exists an index pair i,j for which 0<A_i,j,2. Now,
A_i,j,2· log(F(d-(m_i-m_j))-F(-d-(m_i-m_j)))⟶-∞ if d⟶0.
If d⟶0, the arguments of the c.d.f. tend to the same value, their difference tends to zero. Consequently, its logarithm, with a positive multiplier, tends to minus infinity.
As 0.5 · A_i,j,2· log(F(d-(m_i-m_j))-F(-d-(m_i-m_j)))<logL_0, if d<ε, we can restrict the region of d to the subset ε≤ d with an appropriate value of 0<ε, while seeking the maximum.
(ST2) The next step is to find an upper bound for the variable d.
If 0<A_i,j,3 then there exists an upper bound K_i,j for which it holds that the maximum can be attained in the region d-(m_i-m_j)≤ K_i,j.
It is easy to see that if 0<A_i,j,3, then
A_i,j,3· log(1-F(d-(m_i-m_j)))⟶ -∞ supposing d-(m_i-m_j)⟶∞.
Consequently, there exists a value K_i,j with the following property: if K_i,j<d-(m_i-m_j), then 0.5 · A_i,j,3· log(1-F(d-(m_i-m_j))) < logL_0, so the maximum has to be sought in the region d-(m_i-m_j) ≤ K_i,j. It means that the maximum can be reached only in regions where d-(m_i-m_j) has an upper bound.
Condition SC <ref> guarantees that there is a cycle (i_1,i_2,...,i_h,i_1) with directed edges from i_k to i_k+1 k=1,2,...,h and from i_h to i_1 and these directed edges arise from 0<A_i_k,i_k+1,3 k=1,...,h and 0<A_i_h,i_1,3. We can assume that i_1=1.
Lemma <ref> implies that
d-(m_i_k-m_i_k+1)≤ K_k,k+1 and
d-(m_i_h-m_i_1)≤ K_h,1.
Using a common upper bound K (K_k,k+1≤ K and K_h,1≤ K), moreover, summing the inequalities in (<ref>), we get that h· d≤∑_k=1^h K. This proves that it is enough to seek the maximum in a closed bounded set of d.
(ST3) Now let us turn to the upper and lower bounds of the parameters m_i.
Let us define a graph G^(SC) as follows: the vertices are the objects. There is a directed edge from i to j if 0<A_i,j,3 (i is "better" than j according to at least one opinion). There is a directed edge from i to j and also from j to i if 0<A_i,j,2 (they are "equal" according to at least one opinion).
We will use the following well-known statement. Condition SC <ref> is equivalent to the following condition: between any pair of objects i and j there is a directed path in G^(SC) from one to the other.
If 0<A_i,j,3 and m_i ≤ K_i, then there exists an upper bound of m_j denoted by K_j, with the following property: 0.5 · A_i,j,3· log(1-F(d-(m_i-m_j)))<logL_0 if K_j<m_j, that is the maximum can be attained if m_j ≤ K_j.
Recalling (<ref>), we can conclude that d-(m_i-m_j) ≤ K_i,j. As m_i ≤ K_i and 0<d<K^(d), we get that m_j ≤ K_i,j+K_i.
We can interpret Lemma <ref>, that the property "having upper bound" spreads in the direction of the edge "better" defined by 0<A_i,j,3.
If 0<A_i,j,1 (there is at least one opinion according to i is "worse" than j) and the inequality -B_i ≤ m_i holds, then there exists a lower bound of m_j, denoted by -B_j, with the following property: 0.5 · A_i,j,1· logF(-d-(m_i-m_j))<logL_0 if m_j<-B_j, that is the maximum can be attained if -B_j ≤ m_j.
The statement is the straightforward consequence of the following: if 0<A_i,j,1, then
logF(-d-(m_i-m_j))⟶ -∞ supposing -d-(m_i-m_j)⟶ -∞.
We can interpret Lemma <ref>, that the property "having lower bound" spreads along the opinion "worse".
(ST4) Finally, we investigate the effect of the existence of an "equal" opinion on the boundedness property.
Suppose that the parameter d is bounded. If 0<A_i,j,2 and m_i ≤ U_i, there exists an upper bound U_j for which if U_j<m_j then (<ref>) < logL_0. It means, that the maximum has to be sought in the region m_j ≤ U_j.
If 0<A_i,j,2 and -H_i ≤ m_i, there exists a lower bound -H_j for which if m_j<-H_j then (<ref>) < logL_0. It means, that the maximum has to be sought in the region -H_j≤ m_j.
It is easy to see that
lim_-d-(m_i-m_j)→ -∞A_i,j,2· log(F(d-(m_i-m_j))-F(-d-(m_i-m_j)))=-∞,
and
lim_d-(m_i-m_j)→∞A_i,j,2· log(F(d-(m_i-m_j))-F(-d-(m_i-m_j)))=-∞.
Consequently, the maximum has to be in the following region:
-H_i,j≤ -d-(m_i-m_j)
and
d-(m_i-m_j) ≤ B_i,j,
respectively, with an appropriate bound -H_i,j and B_i,j.
As ϵ≤ d ≤ K^(d), m_i ≤ B_i implies
m_j ≤ B_i,j+B_i and -H_i,j-H_i ≤ m_j.
We can summarize Lemma <ref> as follows: both properties "having an upper bound" and "having a lower bound" spread along an "equal" opinion. It behaves as a "better" and a "worse" opinion at the same time.
(ST5)
Now we can prove that it is enough to seek the maximum on closed bounded set of every parameter m_i. Starting out of m_1=0, there exists a directed path from 1 to i in G^(SC), along the edges defined by 0<A_i,j,3 and 0<A_i,j,2. Walking along this path, and recalling that m_1=0, the property "having upper bound" spreads from 1 to object i. The directed path from the object i to 1 is a reverse directed path from 1 to i, and the property "having lower bound" of the object 1 spreads to i for every index i, consequently the expectations can be restricted into a closed bounded set. The maximal value of (<ref>) can only be in these regions. As (<ref>) is a continuous function, the Weierstrass theorem implies the existence of the maximal value.
(ST6) The uniqueness of the argument of the maximal value is a consequence of the strictly concave property of the logarithm of the p.d.f. Lemma 6 in <cit.> implies the strictly concave property of the function (<ref>) in d-(m_i-m_j) and -d-(m_i-m_j) for every index pair (i,j) for which 0<A_i,j,2, and in d-(m_i-m_j) if 0<A_i,j,3.
Walking along the cycle in DG^(SC) defined by SC <ref> and summing the arguments, we get that the function (<ref>) is a strictly concave function of the parameter d.
Now let us turn to the strictly concave property of (<ref>) in the parameters m_i, i=2,3,...,n. There is a directed path from 1 to i defined by 0<A_i,j,2 and 0<A_i,j,3 in graph G^(SC). Walking along it, we can conclude the strictly concave property of (<ref>) in d-(m_i_k-m_i_k+1). Summing the arguments of the terms belonging to the path, we get, that the log-likelihood function is strictly concave in l· d +m_i, where 0<l is the length of the path. This fact and the strictly concave property in d guarantee the strictly concave property in m_i. We get that function (<ref>) is strictly concave in its every variable m_i and d, therefore, the argument of the maximum has to be unique.
|
http://arxiv.org/abs/2306.06454v1
|
20230610142617
|
Variation of optical and infrared properties of galaxies with their surface brightness
|
[
"Junais",
"K. Małek",
"S. Boissier",
"W. J. Pearson",
"A. Pollo",
"A. Boselli",
"M. Boquien",
"D. Donevski",
"T. Goto",
"M. Hamed",
"S. J. Kim",
"J. Koda",
"H. Matsuhara",
"G. Riccio",
"M. Romano"
] |
astro-ph.GA
|
[
"astro-ph.GA"
] |
Variation of dust luminosity and attenuation in galaxies with optical surface brightness
Junais et al.
National Centre for Nuclear Research, Pasteura 7, PL-02-093 Warsaw, Poland
[email protected]
Aix Marseille Univ, CNRS, CNES, LAM, Marseille, France
Department of Physics, National Tsing Hua University, 101, Section 2. Kuang-Fu Road, Hsinchu, 30013, Taiwan (R.O.C.)
Institute of Astronomy, National Tsing Hua University, 101, Section 2. Kuang-Fu Road, Hsinchu, 30013, Taiwan (R.O.C.)
Department of Physics and Astronomy, Stony Brook University, Stony Brook, NY 11794-3800, USA
Department of Space and Astronautical Science, The Graduate University for Advanced Studies, SOKENDAI, 3-1-1 Yoshinodai, Chuo-ku, Sagamihara, Kanagawa 252-5210, Japan
Institute of Space and Astronautical Science, Japan Aerospace Exploration Agency, 3-1-1 Yoshinodai, Chuo-ku, Sagamihara, Kanagawa 252-5210, Japan
INAF - Osservatorio Astronomico di Padova, Vicolo dell'Osservatorio 5, I-35122, Padova, Italy
Centro de Astronomá (CITEVA), Universidad de Antofagasta, Avenida Angamos 601, Antofagasta, Chile
SISSA, Via Bonomea 265, 34136 Trieste, Italy
Although it is recognized now that low surface brightness galaxies (LSBs) contribute to a large fraction of the number density of galaxies, many of their properties are still poorly known. LSBs are often considered as “dust poor”, with a very low amount of dust, based on a few studies.
We use, for the first time, a large sample of LSBs and high surface brightness galaxies (HSBs) with deep observational data to study the variation of stellar and dust properties as a function of the surface brightness/surface mass density.
Our sample consists of 1631 galaxies that are optically selected (with ugrizy-bands) at z<0.1 from the North Ecliptic Pole (NEP) wide field. We use the large multi-wavelength set of ancillary data in this field, ranging from UV to FIR. We measured the optical sizes and surface brightnesses of the targets, and analyzed their spectral energy distributions using the CIGALE fitting code.
Based on the measured average r-band surface brightness (μ̅_e), our sample consists of 1003 LSBs (μ̅_e > 23 mag arcsec^-2) and 628 HSBs (μ̅_e ≤ 23 mag arcsec^-2). We found that the specific star formation rate and the specific infrared luminosity (total infrared luminosity per stellar mass) remain mostly flat as a function of surface brightness for both LSBs and HSBs that are star-forming, but decline steeply for the quiescent galaxies. The majority of LSBs in our sample have negligible dust attenuation (A_V < 0.1 mag), except for about 4% of them that show significant attenuation, with a mean A_V of 0.8 mag. We found that these LSBs with significant attenuation also have a high r-band mass-to-light ratio (M/L_r>3 M_⊙/L_⊙), making them outliers from the linear relation of surface brightness and stellar mass surface density.
These outlier LSBs also show similarity to the extreme giant LSBs from the literature, indicating a possibly higher dust attenuation in giant LSBs as well.
This work provides a large catalog of LSBs and HSBs with detailed measurements of their several optical and infrared physical properties.
Our results suggest that the dust content of LSBs is more varied than previously thought, with some of them having significant attenuation making them fainter than their intrinsic value. With these results, we will be able to make predictions on the dust content of the population of LSBs and how the presence of dust will affect their observations from current/upcoming surveys like JWST and LSST.
Variation of optical and infrared properties of galaxies with their surface brightness
Junais1,
K. Małek1,2,
S. Boissier2,
W. J. Pearson1,
A. Pollo1,
A. Boselli2,
M. Boquien9,
D. Donevski1,10,
T. Goto3,4,
M. Hamed1,
S. J. Kim4,
J. Koda 5,
H. Matsuhara6,7,
G. Riccio1,
M. Romano1,8
Received 29 March 2023 / Accepted 09 June 2023
§ INTRODUCTION
In recent years, advances in technology have allowed astronomers to study different types of galaxies in great detail, bringing new interest in low surface brightness galaxies. To have a comprehensive view of galaxy evolution, we have to consider
high surface brightness galaxies (HSBs) and low surface brightness galaxies (LSBs). HSBs are the “typical” bright galaxies that have been well-studied in the literature, but LSBs, which are much fainter, have only recently become more accessible for detailed studies.
LSBs are generally defined as diffuse galaxies that are fainter than the typical night sky surface brightness level of ∼23 mag arcsec^-2 in the B-band <cit.>. However, we should note that there is no clear-cut definition of LSBs in the literature, and it varies among different works. Therefore, in this work, we consider LSBs as galaxies with an average r-band surface brightness μ̅_e > 23 mag arcsec^-2 and HSBs as those with μ̅_e ≤ 23 mag arcsec^-2, following similar definitions adopted in previous works <cit.>.
LSBs span a wide range of sizes, masses, and morphologies, from the most massive giant low surface brightness galaxies (GLSBs) down to the more common dwarf systems <cit.>. It is estimated that LSBs make up a significant fraction of more than 50% of the total number density of the galaxies in the universe <cit.>, and about 10% of the baryonic mass budget <cit.>. Such an abundance of
LSBs could steepen the faint-end slope of the galaxy stellar mass and luminosity function <cit.>.
Although LSBs are generally found to be gas-rich, their gas surface densities are usually about a factor 3 lower than for the HSBs <cit.>.
As star formation in galaxies is linked to their gas surface density <cit.>, this directly affects their ability to form stars, resulting in LSBs having a low stellar mass surface density as well. Therefore, LSBs are a perfect laboratory for studying star formation activity in low-density regimes <cit.>.
Due to the very low densities and star formation, LSBs are also generally considered to have a very low amount of dust. Their low metallicities also imply that their dust-to-gas ratios should be lower than those of their HSB counterparts <cit.>. <cit.> showed that LSB disks are effectively transparent without any extinction where multiple distant galaxies were observed through their disks. Moreover, most of the observations of LSBs at infrared wavelengths
resulted in non-detections <cit.>, indicating either a very weak or non-detectable dust emission.
Nevertheless, we cannot necessarily conclude that the entire population of LSBs, consisting of a wide range of galaxy types, is dust poor. <cit.> found that LSBs selected from the SDSS survey span a wide range in their dust attenuation measured using the Balmer decrement (A_V in the range of 0 to 1 mag, with a median value of ∼0.4 mag). This indicates that not all LSBs are dust poor. However, since surveys like SDSS are very shallow and incomplete beyond ∼23 mag arcsec^-2, only the brighter end of the LSB population is observed by them, and they lack information about the remaining bulk of the faintest LSBs that are missed <cit.>. In another work, <cit.> showed that the specific dust mass (dust to stellar mass ratio) of local galaxies from the Herschel Reference survey (HRS; ) increases towards fainter galaxies. This again indicates that LSBs could have dust masses comparable with those of HSBs of similar stellar mass. It is likely that the dust in LSBs is distributed very diffusely, similar to their stellar population and gas content, making it extremely hard to detect <cit.>.
Currently, most studies on dust/infrared properties of LSBs were done using either very small samples <cit.> or shallow data <cit.>, which may be not sufficient to make a general conclusion on the large population of LSBs. We need to have a large statistical sample of galaxies at different surface brightness levels to properly understand how these properties change between LSBs and HSBs. In this work, we aim to do this by collecting a large sample of both LSBs and HSBs with deep data to constrain their optical/infrared properties and quantify how the presence of dust (if any) affects our observations of them. Such a work will be particularly significant in the context of current/upcoming observational facilities, such as the Large Synoptic Survey Telescope (LSST; ) and the James Webb Space Telescope (JWST; ), where a large number of LSBs will be observed.
This paper is structured as follows: Section <ref> describes the data and the sample used in this work.
Section <ref> introduces the comparison sample we use from the literature. Section <ref> describes our spectral energy distribution fitting procedure. The results of our analysis are presented in Sect. <ref>, and a global discussion is given in Sect. <ref>. We conclude in Sect. <ref>.
Throughout this work, we adopt a <cit.> initial mass function (IMF), and a ΛCDM cosmology with H_0 = 70 km s^-1Mpc^-1, Ω_M = 0.27 and Ω_Λ = 0.73. All the magnitudes given in this paper are in the AB system.
§ DATA AND SAMPLES
§.§ Main sample
In this work, we use the large set of multi-wavelength data ranging from UV to FIR wavelengths available for the North Ecliptic Pole (NEP) wide field, covering an area of ∼5.4 (see for a detailed description of the available data).
This also includes deep optical data from the Subaru Hyper Suprime-Cam (HSC; ) and CFHT Megcam/Megaprime[The CFHT Megcam/Megaprime observations of the NEP field covers only a total area of ∼3.6 , compared to the ∼5.4 covered by the HSC observations.] <cit.>, which will be used as a basis for our sample selection discussed in Sect. <ref>. The NEP wide field has a very deep coverage in optical with a 5σ detection limit of 25.4, 28.6, 27.3, 26.7, 26.0, and 25.6 mag in the ugrizy-bands, respectively[Note that at this depth, many local bright galaxies are saturated in the HSC observations and were removed as flagged sources with bad pixels <cit.>.]. This is very close to the 5σ depth of the upcoming LSST survey in similar bands <cit.>. In both cases, the depth of the data is suited to explore the properties of galaxies as a function of surface brightness, which is the goal of this work.
Moreover, the NEP field is also well suitable for the study of dust and attenuation within galaxies, due to the extensive coverage of this field in the infrared wavelengths (e.g., AKARI, WISE, Spitzer, Herschel; ) as well as very low foreground Galactic extinction along the line of sight of the NEP field.
§.§.§ Sample selection
Our sample selection was done based on the HSC grizy-bands and CFHT u-band data <cit.>. Only the galaxies with a 5σ detection in all these six bands were included in our sample. The u-band, with its short wavelength, is more sensitive toward dust attenuation. Therefore, the choice of including a u-band detection facilitates secure dust attenuation estimates for our sample, which we intend to do in this work. Moreover, a selection in the ugrizy also mimics the upcoming LSST-like observations in the same bands, where there will be a vast discovery space for LSBs.
We also applied an arbitrary selection in redshift, to include only local galaxies with z<0.1. We impose this limit since we aim to study the properties of galaxies as a function of surface brightness, and the cosmological dimming would make us lose the LSB galaxies at high-z.
For this purpose, we use the photometric redshifts provided by <cit.>, or the spectroscopic redshifts, whenever available (see for more details on the available spectroscopic data). The photometric redshifts from <cit.> were computed with the Le Phare code <cit.>, using the ugrizy-bands. Moreover, the Spitzer IRAC 1 (3.6 μm) and IRAC 2 (4.5 μm) bands were also included in the photometric redshift estimation, whenever available. The photometric redshifts attain an accuracy of σ_z_ p=0.06[The photometric redshift accuracy σ_z_ p from <cit.> is defined as the normalized median absolute deviation, where σ_z_ p = 1.48×median(|z_p - z_s|/(1+z_s)), with z_p and z_s being the photometric and spectroscopic redshifts, respectively.] and a catastrophic outlier rate of 8.6% <cit.>. With the above selection procedure based on optical detection and the redshift cut, our sample now contains 1950 galaxies. Among them, only 66 galaxies have spectroscopic redshifts.
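The accuracy statistic quoted above, the normalized median absolute deviation, is simple to recompute for any subsample with spectroscopic redshifts. A short sketch, where the outlier threshold of 0.15 is an assumed convention not stated in the text:

    import numpy as np

    def photoz_stats(z_phot, z_spec):
        # sigma_NMAD = 1.48 * median(|z_p - z_s| / (1 + z_s)), plus a catastrophic-outlier rate
        dz = (np.asarray(z_phot) - np.asarray(z_spec)) / (1.0 + np.asarray(z_spec))
        sigma_nmad = 1.48 * np.median(np.abs(dz))
        outlier_rate = np.mean(np.abs(dz) > 0.15)   # assumed threshold for "catastrophic" outliers
        return sigma_nmad, outlier_rate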
We verified that a strict selection based on the ugrizy bands as discussed above, does not introduce any bias towards bluer/redder galaxies in our sample. To perform this test, we looked at an alternate sample selection, based only on the HSC grizy-bands, in the same area as our u-band observations. Such a selection increases our sample size by around ∼190 galaxies (among them about 90 galaxies are LSBs) as the HSC grizy observations are 2 to 3 orders of magnitude deeper than the CFHT u-band. However, we found that such a sample has a very similar distribution in their optical colors as our initial ugrizy selected sample (mean g-r color of 0.53 mag for both the samples). This indicates that the inclusion of the u-band does not introduce a bias in our selection. Therefore, from hereupon, we chose to continue with our initial ugrizy and redshift selected sample of 1950 galaxies.
§.§.§ Morphological fitting
In order to obtain the effective surface brightness and radius of each galaxy, we performed a morphological fitting procedure using the AutoProf tool <cit.>. AutoProf is an efficient tool to capture the full radial surface brightness light profile of a galaxy from its image using a non-parametric approach, unlike the parametric fitting tools like Galfit <cit.>, which do not always capture the total light from a galaxy. AutoProf is also well-suited for low surface brightness science, where it can extract about two orders of magnitude fainter isophotes from an image than any other conventional tool <cit.>.
The surface brightness profile extraction of our sample was done on the HSC r-band images. Although the g-band is the deepest among our sample, the choice of the r-band (which is the second deepest) is motivated by the fact that r-band is a better tracer of the stellar mass distribution in galaxies than the g-band <cit.>.
Figure <ref> shows an example of the surface brightness profile obtained for a galaxy.
Similarly, we extracted the profiles for the majority of the galaxies in our sample (1743 out of 1950 galaxies). The remaining sources have failed/flagged profile fits. Therefore, hereafter we exclude from our sample all the sources without a reliable morphological fit, which leaves
1743 galaxies. We integrated each surface brightness profile out to its last measured radius to estimate the total light from each galaxy, the corresponding effective radius (half-light radius; R_e), and the average surface brightness within the effective radius (μ̅_e). From Fig. <ref>, we can clearly see that the radial surface brightness profiles we obtained using AutoProf reach well beyond the effective radius of the galaxy, to about 4 times R_e, and also ∼2 mag arcsec^-2 deeper than the typical sky level (a similar trend is found for our full sample), which is ideal to probe low surface brightness galaxies. The distribution of the r-band μ̅_e and R_e for our full sample is given in Fig. <ref>. Our sample at this stage consists of 1041 LSBs and 702 HSBs (although such a distinction is based on an arbitrary definition, as discussed in Sect. <ref>). The LSBs have a median μ̅_e and R_e of 23.8 mag arcsec^-2 and 1.9 kpc, respectively, whereas the HSBs are brighter and slightly larger in size, with a median μ̅_e and R_e of 22.2 mag arcsec^-2 and 2.2 kpc, respectively. In terms of redshift, both the LSBs and HSBs have a similar distribution, with a median value of about 0.08. The r-band absolute magnitudes (M_r) of the two sub-samples show a clear difference, with the LSBs fainter than the HSBs, as expected from their selection, with median M_r of -15.9 mag and -17.9 mag, respectively.
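The quantities used for the LSB/HSB split can be obtained from the extracted profile by simple numerical integration. The sketch below illustrates one way to do this for a circularly averaged profile; ellipticity and inclination corrections, which AutoProf also provides, are ignored here for brevity, so it is a simplified illustration rather than our exact procedure.

    import numpy as np

    def effective_quantities(radius_arcsec, mu_mag_arcsec2):
        # convert surface brightness to linear intensity (arbitrary zeropoint) and integrate in annuli
        intensity = 10.0 ** (-0.4 * np.asarray(mu_mag_arcsec2))
        outer = np.asarray(radius_arcsec)
        annulus_area = np.pi * np.diff(np.concatenate(([0.0], outer)) ** 2)
        cumulative_flux = np.cumsum(intensity * annulus_area)
        total_flux = cumulative_flux[-1]                                   # total light out to the last measured radius
        r_e = outer[np.searchsorted(cumulative_flux, 0.5 * total_flux)]    # half-light radius
        mu_e_bar = -2.5 * np.log10(0.5 * total_flux / (np.pi * r_e ** 2))  # mean SB within r_e (same zeropoint)
        return r_e, mu_e_bar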
We also compared our R_e and μ̅_e estimates with those of <cit.>, who performed a Sérsic profile fitting of the NEP galaxies in the same band using the statmorph tool <cit.>. We found that, in general, our values are in agreement with <cit.>, with a mean difference of 0.01±0.23 dex in R_e and -0.02±0.76 mag arcsec^-2 in μ̅_e.
§.§.§ Cross-matching with multi-wavelength catalogs
After the initial sample selection and their morphological fitting, we cross-matched our optically selected sample with all the available multi-wavelength data in hand. For the NEP field, other than the optical data from HSC and CFHT, we have ancillary data available from GALEX (FUV and NUV bands; ), AKARI (N2, N3, N4, S7, S9W, S11, L15, L18W, and L24 bands; ), CFHT/WIRCam (Y, J, and K_s bands; ), KPNO/FLAMINGOS (J and H bands; ), Spitzer/IRAC (band 1 and 2; ), WISE (band 1 to 4; ) and Herschel PACS/SPIRE (100 μm, 160 μm, 250 μm, 350 μm and 500 μm bands; ). A detailed description of the data is given in <cit.>. The multi-band photometry obtained from the cross-matching of these catalogs will be used in the spectral energy distribution (SED) fitting procedure discussed in Sect. <ref>. The cross-matching was done following <cit.>, where a 3σ positional offsets in the RA/Dec. coordinates corresponding to each dataset with respect to the HSC coordinates were used as the cross-matching radii. For GALEX, AKARI, WIRCam, FLAMINGOS, IRAC, WISE, PACS, and SPIRE, we used a cross-matching radius of 1.5, 1.5, 0.5, 0.65, 0.58, 0.7, 2.75, and 8.44, respectively. Figure <ref> shows the distribution of galaxies with counterparts in each dataset. About 62% of galaxies in the sample (1086 out of 1743 sources) have at least one counterpart outside the ugrizy optical range.
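Positional cross-matching of this kind is commonly done with astropy's nearest-neighbour matching on the sky, keeping only matches within the catalogue-specific radius. A minimal sketch (column and file names are assumptions, not those of the actual catalogues):

    import astropy.units as u
    from astropy.coordinates import SkyCoord
    from astropy.table import Table

    def crossmatch(optical, ancillary, radius_arcsec):
        # nearest ancillary counterpart for every source of the HSC-based optical catalogue
        c_opt = SkyCoord(optical["ra"] * u.deg, optical["dec"] * u.deg)
        c_anc = SkyCoord(ancillary["ra"] * u.deg, ancillary["dec"] * u.deg)
        idx, d2d, _ = c_opt.match_to_catalog_sky(c_anc)
        matched = d2d < radius_arcsec * u.arcsec
        return idx, matched        # ancillary row index per optical source, plus a validity mask

    # example: GALEX counterparts within 1.5 arcsec
    # optical = Table.read("hsc_nep_sample.fits")
    # galex = Table.read("galex_nep.fits")
    # idx, ok = crossmatch(optical, galex, 1.5)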
We also compared our sample with the band-merged catalog of <cit.> who identified HSC counterparts for the AKARI-detected sources in NEP. Only 532 galaxies of our sample overlap with the <cit.> catalog, indicating that the remaining of our sources do not have any AKARI counterparts in NIR or MIR. Moreover, ∼85% of our sample does not have any detection in the mid-infrared (MIR) and far-infrared (FIR) regime (in the 7 μm to 500 μm wavelength range) as shown in Fig. <ref>. However, since we aim to study the IR properties of our sample, it is crucial to have observational constraints in the MIR and FIR range. We have deep observations from AKARI and Herschel/SPIRE in this wavelength range, covering the entire field we study. Therefore, for the galaxies without any detection in this range,
we use the detection limits from these observations as their flux upper limits[We used the AKARI and Herschel/SPIRE upper limits given in Table 1 of , as they have the deepest coverage in the entire NEP Wide field for the MIR and FIR range.]. The 5σ detection limits of the AKARI S7, S9W, S11, L15, L18W, L24, and SPIRE 250 μm, 350 μm and 500 μm bands, are 0.058 mJy, 0.067 mJy, 0.094 mJy, 0.13 mJy, 0.12 mJy, 0.27 mJy, 9 mJy, 7.5 mJy, and 10.8 mJy, respectively <cit.>. These upper limits are used in the SED fitting procedure discussed in Sect. <ref>.
§.§ Comparison sample
We use the Herschel Reference Survey (HRS; ) sample for the comparison of the results obtained in this work. HRS is a volume-limited sample (15≤ D ≤ 25 Mpc) of 322 galaxies consisting of both early-type and late-type galaxies (62 early-type galaxies with K-band magnitude K_s≤8.7 mag and 260 late-type galaxies with K_s≤12 mag). The HRS sample is selected in such a way as to include only the high galactic latitude (b>+55) sources with low Galactic extinction (similar to the NEP sample). The HRS sample covers a large range of galaxy properties and therefore it can be considered a representative sample of the local universe. A detailed description of the HRS sample is provided in <cit.>.
We make use of the extensive studies done in the literature on this sample <cit.> for comparison purposes. The optical structural properties (r-band R_e and μ̅_e) and the stellar masses of the HRS sample used in this work are taken from <cit.>. The star formation rates (SFR) and the V-band dust attenuation values (A_V) are provided by <cit.>, with the SFR estimated as the combined average of multiple star formation tracers ranging from UV to FIR and radio continuum. Only about 200 late-type galaxies in the HRS sample have available A_V measurements, which we use in this work. The A_V values of the HRS galaxies are computed from the Balmer decrement.
The total infrared luminosity (L_IR) for all the HRS sources is taken from <cit.>, who used an SED fitting method to estimate L_IR, similar to the approach we use in this work. Since the HRS also includes galaxies in the Virgo cluster, where dust can be stripped away during the interaction of galaxies with their surrounding environment, the dust content of such galaxies is principally regulated by external effects rather than secular evolution. Therefore, we removed from our comparison the HRS galaxies with a large HI gas deficiency parameter (HI-def > 0.4), which is an indicator of environmental interactions <cit.>. Our HRS comparison sample now consists of 159 galaxies. A detailed description of the compilation of the HRS data is given in <cit.>.
Although the HRS is a K-band selected sample, it is a well-studied local sample of galaxies with high-quality data. Therefore, throughout this work, we use the HRS as a control sample from the literature to compare and validate our results.
§ SED FITTING
§.§ Method
We used the Code Investigating GAlaxy Emission (CIGALE[<https://cigale.lam.fr/2022/07/04/version-2022-1/>]; <cit.>) SED fitting tool to estimate the physical parameters of the galaxies in our sample, in particular, the stellar mass, SFR, total infrared luminosity and dust attenuation. CIGALE uses an energy balance principle where the stellar emission in a galaxy is absorbed and re-emitted in the infrared by the dust. This enables us to simultaneously fit the UV to FIR emission of the galaxies in our sample. The input parameters we used for our SED fitting are given in Table <ref>.
We use the <cit.> stellar population synthesis models with a <cit.> IMF and a fixed sub-solar stellar metallicity of 0.008 (0.4 Z_⊙)[Adopting different metallicity values was found to have only a negligible impact on the overall results presented in this paper. Therefore, as we focus mainly on LSBs, we chose to keep the metallicity at a sub-solar value to reduce the number of free parameters in our fitting procedure.].
We also adopted a flexible star formation history (SFH) from <cit.> which includes a combination of delayed SFH with the possibility of an instantaneous recent burst/quench episode. Such an SFH was successfully used to reproduce a broad range of galaxy properties in the local universe <cit.>. The range of values adopted for the SFH is given in Table <ref>.
We also include dust attenuation, adopting the attenuation module of <cit.>, which is a modified version of the well-known <cit.> attenuation curve, extended with the <cit.> curve between the Lyman break and 150 nm. This module also allows changing the slope of the attenuation curve as well as adding a UV bump. In this work, we fix these parameters to their standard values to reduce the number of free parameters, as we have only 6 photometric bands for a large fraction of our sample.
The module treats the stellar continuum and the emission lines differently, with the latter being attenuated more by the dust (this difference in attenuation of the continuum and the lines is controlled by a factor, which is kept as a constant, as shown in Table <ref>). The color excess of the lines, E(B-V)_ lines
, is left as a free parameter with values ranging from 0 to 2 mag.
Once the dust attenuation is modeled, we need to use a dust emission module to model the re-emission of the attenuated radiation in the MIR to FIR. For this purpose, we adopted the <cit.> dust emission models based on nearby star-forming galaxies. The <cit.> models only have two free parameters (AGN fraction and the slope of the radiation field intensity, α). Since only less than 0.5% of local dwarf galaxies possess an AGN <cit.>, in this work, we assume an AGN fraction of zero for our sample as it mostly consists of low-mass galaxies with a median r-band absolute magnitude of the order of -17 mag, as shown in Fig. <ref>.
For the slope of the radiation field intensity, we use a fixed value[We verified that a variation of the interstellar radiation field slope α from 2 to 3 does not make any significant change (a change of less than 0.1 dex on all our estimated quantities) in our SED results.] of α=2.
We performed the SED fitting of our sample with over 130 million models (∼200000 models per redshift). For the galaxies without any detection in the MIR/FIR regime (>7 μm), we use the 5σ flux upper limits discussed in Sect. <ref>. These upper limits are important in constraining the IR properties of our optically selected galaxies. CIGALE treats the upper limits in a mathematically consistent way when computing the total χ² of an SED.
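As an illustration of how such limits can enter the fit, the sketch below implements an error-function χ² term of the kind introduced by Sawicki (2012); it is not CIGALE's internal code, whose exact implementation may differ, and the input arrays are hypothetical.

# Sketch of a chi^2 that folds in flux upper limits via an error-function term
# (Sawicki 2012 style); purely illustrative, not the CIGALE implementation.
import numpy as np
from scipy.special import erf

def chi2_with_upper_limits(f_obs, f_err, f_mod, is_limit):
    """f_obs, f_err, f_mod: observed fluxes, uncertainties, model fluxes.
    is_limit: boolean mask marking bands where f_obs is an upper limit."""
    det = ~is_limit
    chi2 = np.sum(((f_obs[det] - f_mod[det]) / f_err[det]) ** 2)
    # Upper limits contribute through the probability that the model flux
    # stays below the limit, instead of a standard quadratic term.
    lim = is_limit
    arg = (f_obs[lim] - f_mod[lim]) / (np.sqrt(2.0) * f_err[lim])
    chi2 += -2.0 * np.sum(np.log(np.sqrt(np.pi / 2.0) * f_err[lim] * (1.0 + erf(arg))))
    return chi2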
After the SED fitting, we obtained a median reduced χ² value (χ^2_r) of 0.95 with an absolute deviation of 0.64 (see Fig. <ref>). About 94% of the sample (1631 out of 1743 galaxies) has χ^2_r less than an arbitrary value of 5. From here on, we exclude all the remaining sources with χ^2_r > 5 from our further analysis. Our final sample thus consists of 1631 galaxies (1003 LSBs and 628 HSBs).
Fig. <ref> gives an example of the best-fit SEDs obtained for an LSB and HSB galaxy. We can see that for both galaxies we obtain a good fit, with the upper limits providing strong constraints on the IR emission of the galaxy without any MIR/FIR detection.
§.§ Robustness of the SED fitting results
The robustness of the estimated physical parameters from our SED fit was verified by several tests. For each parameter, throughout this work, we used the Bayesian mean and standard deviation of the quantities estimated by , based on the probability distribution function of the tested models. This ensures a more robust estimate of a quantity and its uncertainty, rather than directly using the best-fit model parameter, especially in case of degeneracy between physical parameters.
Another check of the reliability of the estimated parameters is the mock analysis provided by CIGALE. In this test, CIGALE builds a mock catalog with synthetic fluxes for each object based on its best-fit SED. The synthetic fluxes in each filter are modified by adding random noise based on the uncertainty of the observed fluxes in the corresponding filter. CIGALE then performs the same calculations on this mock catalog as done for the original observations to obtain the mock physical parameters. The results of the mock analysis are given in Appendix <ref>. We see that the stellar mass is the most well-constrained quantity, with the square of the Pearson correlation coefficient (r^2) equal to 0.99, followed by the total infrared luminosity (r^2=0.85), the SFR (r^2=0.81) and the V-band attenuation (r^2=0.77).
Although the SFR, L_IR and A_V have a larger scatter (0.41 dex, 0.31 dex, and 0.14 mag, respectively), based on the linear regression analysis shown in Fig. <ref>, we can still consider these estimates reliable.
We also performed yet another test to verify the robustness of our estimated physical quantities. A separate SED fitting, similar to our original fits, was done for only the FIR-detected galaxies in our sample (53 galaxies with detection in either Herschel PACS/SPIRE), but this time only using their optical ugrizy-bands photometry. This was done to check how well we can recover the “true” quantities by only using the ugrizy photometry. We compared the results of this fit with our original fit results and found that for the FIR-detected galaxies, the stellar mass, SFR, L_IR and A_V obtained from the original fit and the optical-only fit have a mean difference of -0.09 dex, -0.26 dex, -0.24 dex and -0.07 mag, respectively, as given in Fig. <ref>. The negative values indicate that a fit using only optical bands (or galaxies with only optical detections) in general gives overestimated quantities, but only by a few tenths of a dex. We verified that this trend remains the same for our entire sample if we perform the SED fitting without using any flux upper limits in the MIR and FIR. Similarly, we examined how a change in our upper limit definitions from 5σ to 2σ in the SED fitting affects our results. Such a change only has a negligible effect on our overall results, with the stellar mass being unchanged and the SFR, L_IR and A_V changed by only 0.05 dex, 0.16 dex and 0.02 mag, respectively.
Table <ref> provides all the estimated parameters of our sample.
§ RESULTS
Figure <ref> shows the distribution of several physical parameters (stellar mass, stellar mass surface density, SFR, L_IR, and A_V) obtained after the SED fitting discussed in Sect. <ref>. Our sample predominantly consists of low-mass galaxies, with the LSBs and HSBs having a median stellar mass of 10^8.3 and 10^8.8 M_⊙, respectively. The HRS sample lies along the massive end of the distribution with a median stellar mass of 10^9.5 M_⊙. Using the stellar mass and the measured radius (as discussed in Sect. <ref>), we estimated the stellar mass surface densities (Σ_star) of our sample following <cit.> as shown in Eq. <ref>:
Σ_ star = M_ star/(2π R_e^2),
where R_e is the r-band half-light radius and is the stellar mass.
Equation <ref> is a widely used method in the literature to estimate Σ_star for both LSBs and HSBs <cit.>. Although several other methods also exist to obtain Σ_star, many of them provide similar values without changing the global properties of our sample. For instance, we tried estimating Σ_star following <cit.>[Following <cit.>, the Σ_star of our sample can also be estimated as:
logΣ_ star ( M_⊙ pc^-2) = 0.4×(M_r,⊙ - μ_r) + log M/L_r + 8.629,
where M_r,⊙ is the absolute magnitude of the sun in the r-band filter (M_r,⊙ = 4.64 mag for HSC r-band), μ_r and M/L_r are the r-band surface brightness and stellar mass-to-light ratio, respectively.] using our observed μ̅_e and the stellar mass-to-light ratio (M/L) obtained from the SED fitting (the ratio of the stellar mass and the observed r-band luminosity). This method does not rely on the measured R_e values as in Eq. <ref>. We found that the Σ_star estimates from both methods are similar, with a mean difference of -0.15 dex (in general, the second method gives a slightly higher Σ_star). However, we should note that the above two methods only provide an average value of the Σ_star of a galaxy, and therefore such minor differences connected to the adopted methodology can be neglected. Estimating the “true” value of Σ_star requires resolving individual stellar populations as well as information on the radial distribution of dust that can affect the measurements. With our current data in hand, this is beyond the scope of this work. Therefore, from here on, we adopt the Σ_star values estimated using the simple and widely used method from Eq. <ref>; their distribution is shown in Fig. <ref>.
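To make the comparison concrete, the sketch below evaluates both estimates on the median LSB values quoted above; the inputs are hypothetical, the code is illustrative rather than the pipeline used in this work, and the 6 dex offset in the comments is simply the unit conversion between M_⊙ pc^-2 and M_⊙ kpc^-2.

# Minimal sketch of the two Sigma_star estimates compared above.
import numpy as np

def sigma_star_from_size(m_star, r_e_kpc):
    """Mean stellar mass surface density within R_e, in M_sun kpc^-2."""
    return m_star / (2.0 * np.pi * r_e_kpc ** 2)

def sigma_star_from_sb(mu_r, ml_ratio_r, m_sun_r=4.64):
    """Footnote relation: log Sigma_star [M_sun pc^-2] =
    0.4 (M_sun,r - mu_r) + log(M/L_r) + 8.629; add 6 dex for M_sun kpc^-2."""
    return 10.0 ** (0.4 * (m_sun_r - mu_r) + np.log10(ml_ratio_r) + 8.629)

# Median LSB values quoted in the text: M_star = 10^8.3 M_sun, R_e = 1.9 kpc.
print(np.log10(sigma_star_from_size(10 ** 8.3, 1.9)))    # ~6.9 (M_sun kpc^-2)
# mu_e,r = 23.8 mag/arcsec^2 and M/L_r ~ 1 give a comparable value:
print(np.log10(sigma_star_from_sb(23.8, 1.0)) + 6.0)     # ~7.0 (M_sun kpc^-2)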
The Σ_star values follow a distribution similar to that of the stellar mass, with the LSBs and HSBs having a median Σ_star of 10^6.9 and 10^7.4 M_⊙ kpc^-2, respectively, whereas the HRS sample has a corresponding value of 10^7.8 M_⊙ kpc^-2. In terms of the star formation rate, LSBs and HSBs have a median SFR of 10^-2.2 M_⊙ yr^-1 and 10^-1.6 M_⊙ yr^-1, respectively, and the HRS galaxies a corresponding value of 10^-0.4 M_⊙ yr^-1. The L_IR shows a distribution similar to that of the SFR, with the LSBs, HSBs, and the HRS galaxies having median values of 10^7.4 L_⊙, 10^7.7 L_⊙ and 10^9.3 L_⊙, respectively. Figure <ref> also shows the distribution of the A_V. For both the LSBs and HSBs, we find a median A_V of 0.1 mag, with values ranging from almost zero to 2 mag. The HRS sample has a higher median A_V of 0.4 mag.
From the above comparison of the physical parameters, we can see that our sample extends towards the low-mass regime, as well as towards lower SFR, L_IR and A_V, much more than the HRS sample.
In the following subsections, we investigate the dependence of these quantities as a continuous function of Σ_star, in an attempt to understand how the geometrical distribution of stars within galaxies affects their global parameters. We choose Σ_star over μ̅_e for our comparisons for several reasons. Σ_star is a widely used quantity in the literature to compare galaxy physical properties and is a more intrinsic physical quantity than μ̅_e. Moreover, although μ̅_e is a directly observed quantity, its value depends strongly on the choice of the observed filter, whereas Σ_star is less affected by that. In Sect. <ref> we show a comparison of the μ̅_e and Σ_star of our sample.
§.§ Optical surface brightness
The surface brightness of a galaxy is the distribution of its stellar light per unit area. It is related to the total stellar mass surface density of a galaxy in the same way as galaxy luminosity and stellar mass are related by their mass-to-light ratio. Although there are several relations in the literature that explore the connections between galaxy surface brightness, luminosity, and stellar mass <cit.>, there exists a large scatter among such relations. For instance, <cit.> illustrates that for a fixed stellar mass, galaxies show a large scatter in their surface brightness of up to ∼3 mag arcsec^-2, ranging from LSBs to HSBs. Although it is well known that the stellar mass is one of the main drivers of galaxies' properties <cit.>, considering that a large scatter exists at any given stellar mass for the surface brightness, it is important to explore the possible trends in surface brightness associated with other quantities. In Fig. <ref> we explore such a relation using our observed μ̅_e and the stellar mass surface density (Σ_star).
Our sample covers a large range of surface brightness (∼7 orders of magnitude) and stellar mass surface densities (3 dex), from bright to very faint galaxies. This is about 4 orders of magnitude deeper in surface brightness than the HRS sample. For the HSBs (μ̅_e < 23 mag arcsec^-2), Σ_star follows a linear trend with μ̅_e, consistent with the observations from the HRS sample. However, for the LSBs (μ̅_e > 23 mag arcsec^-2, which the HRS sample does not probe), the brighter tail (23 < μ̅_e < 24.5 mag arcsec^-2) closely follows the linear trend of the HSBs, but the fainter end (μ̅_e > 24.5 mag arcsec^-2) diverges from this trend to form a flattening of Σ_star around ∼10^7 M_⊙ kpc^-2 for the faintest sources.
We made an error-weighted linear fit to the full sample (as shown in Fig. <ref>) to obtain a best-fit relation as given in Eq. <ref>,
logΣ_ star = (-0.40±0.01) μ̅_ e, r + (16.31 ± 0.13),
where μ̅_ e, r and Σ_ star are in mag arcsec^-2 and M_⊙ kpc^-2 units, respectively.
Obviously, this relation is determined by the stellar mass-to-light ratio and its eventual dependence on the stellar mass surface density. It is remarkable that we obtain a slope of -0.4, as expected if the mass-to-light ratio does not depend on the stellar mass surface density. Our best-fit line lies very close to a constant mass-to-light ratio of 1 M_⊙/L_⊙ (see Fig. <ref>).
The majority of our sample is within the 3σ confidence level of the best-fit line (grey shaded region in Fig. <ref>), except for about 2.5% of the sample (38 galaxies, among which 36 are LSBs and 2 are HSBs) that lies outside the 3σ range of the best-fit. These outliers are mostly LSBs with a high stellar mass surface density. This indicates a higher mass-to-light ratio for these galaxies. Using the r-band luminosities and the stellar masses of our sample, we estimated that the outliers have a median mass-to-light ratio (M/L_r) of 3.4 M_⊙/L_⊙, compared to 1.1 M_⊙/L_⊙ for the full sample, making them distinct outliers.
Since the definition of our outliers given in Fig. <ref> depends on the choice of the degree of the fit, we also performed a test with a polynomial fit of order 2. We found that the polynomial fit provides a better fit with smaller residuals than the linear fit and reduces the number of outliers from 38 to 11. However, such a fit can also be affected by any incompleteness at the low surface brightness range. Moreover, in the polynomial fit, we lose an important piece of information that we have in the linear fit. The linear fit reproduces very well the trend in the HSB regime, and the outliers in the LSB regime are clearly a population of galaxies that are distinct from their HSB counterparts, as they lie in a range of high fiducial M/L ratio. This is a very distinct behavior, and we are interested in studying those cases. Therefore, from hereupon, we adopt the linear fit as given in Eq. <ref> and the 38 outliers obtained from it.
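To make the procedure concrete, the following sketch shows one way to perform an error-weighted linear fit and to flag 3σ outliers with respect to the scatter around it; the data are synthetic, and the exact outlier definition used in this work may differ.

# Sketch of an error-weighted linear fit of log Sigma_star against the mean
# r-band surface brightness, plus a 3-sigma outlier selection (illustrative).
import numpy as np
from scipy.optimize import curve_fit

def linear(mu, a, b):
    return a * mu + b

rng = np.random.default_rng(0)
mu_e_r = rng.uniform(20.0, 27.0, 200)                      # mag arcsec^-2
log_sigma = -0.40 * mu_e_r + 16.31 + rng.normal(0, 0.2, 200)
log_sigma_err = np.full(200, 0.2)

popt, pcov = curve_fit(linear, mu_e_r, log_sigma,
                       sigma=log_sigma_err, absolute_sigma=True)
a, b = popt                     # the text quotes a = -0.40 +/- 0.01, b = 16.31 +/- 0.13

# Flag 3-sigma outliers with respect to the scatter around the best-fit line.
residuals = log_sigma - linear(mu_e_r, a, b)
outliers = np.abs(residuals) > 3.0 * np.std(residuals)
print(popt, outliers.sum())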
§.§ Specific star formation rate
Figure <ref> shows the dependence of the specific star formation rate (sSFR) of our sample as a function of the stellar mass surface density. The majority of our sample (∼73%) are star-forming galaxies with sSFR > 10^-11 yr^-1 <cit.>. We can see that for the star-forming galaxies, the sSFR is mostly flat with respect to the stellar mass surface density, but with a slight indication of a decrease in sSFR from low to high stellar mass surface density until ∼10^8 M_⊙ kpc^-2. Beyond this value, the sSFR shows a sudden decline to reach the population of quiescent galaxies (with a large scatter and a large uncertainty in the sSFR, of the order of ∼1 dex, for quiescent galaxies). This trend is similar to what is observed in the HRS sample too, although the HRS sample, on average, has a higher sSFR than our sample. Interestingly, the outliers discussed in Sect. <ref> lie equally along the star-forming and quiescent parts of the sample. The LSBs and HSBs, on average, have very similar sSFR values (median log sSFR of -10.5 yr^-1 for the LSBs and -10.4 yr^-1 for the HSBs), in comparison to the slightly higher sSFR of the HRS galaxies (median log sSFR of -9.9 yr^-1). Our sample, therefore, brings an important extension of the sSFR-Σ_star relation into the regime of low surface brightness galaxies.
§.§ Specific infrared luminosity
Estimating the dust mass of galaxies requires a proper constraint on the peak of the FIR emission. However, considering the data we have for our sample, it is not possible to determine the dust masses. Therefore, we use the total IR luminosity of our sample obtained from the SED fitting[The L_IR values from CIGALE were computed by integrating the full dust emission model (shown as the red solid curves in Fig. <ref>) over an arbitrarily large wavelength range used in the modeling. This should, in practice, give values very similar to the L_IR commonly estimated in the literature within the wavelength range of 8-1000 μm.] discussed in Sect. <ref> as a proxy for the dust mass <cit.>. Similarly, the ratio of the L_IR and stellar mass (L_ IR/M_ star, which we term here the specific infrared luminosity or sLIR) is used to probe the specific dust mass (M_ dust/M_ star) of our sample. The specific dust mass of galaxies is an important measure of dust production <cit.>, as well as of dust destruction processes and dust re-formation mechanisms <cit.>.
Figure <ref> shows the variation of the specific infrared luminosity as a function of the stellar mass surface density.
At the brightest end of Fig. <ref>, the sLIR rises steeply with decreasing stellar mass surface density until ∼10^8 M_⊙ kpc^-2, which is similar to the trend seen in Fig. <ref> for the sSFR of quiescent galaxies. This steep rise is observed for the HRS sample too. Below ∼10^8 M_⊙ kpc^-2, the sLIR remains mostly flat towards lower stellar mass surface densities, as also seen in the HRS sample, which, however, lies along the higher sLIR part of the distribution. Therefore, our sample allowed us to explore the trend of increasing specific dust content with decreasing stellar density at lower densities than probed by the HRS. We find that dust emission is present at low densities, but with a saturation in the specific dust content rather than an increase.
Moreover, similar to what was observed with the sSFR, both the LSBs and HSBs in our sample, on average, have comparable sLIR values of 10^-0.9 L_⊙/M_⊙ and 10^-1.1 L_⊙/M_⊙, respectively. The HRS, on the other hand, lies along the higher sLIR tail of the distribution with a median value of 10^-0.2 L_⊙/M_⊙. A fraction of our HSBs also has sLIR values similar to what is found in the HRS. The outliers from the μ̅_e-Σ_star relation occupy the transition region from low to high sLIR, with many having sLIR values as high as the HRS sample. Therefore, from the distribution given in Fig. <ref>, we can infer that LSBs have sLIR values similar to those of HSBs, although they have a lower absolute L_IR. Moreover, since we observe a similar trend in sLIR and sSFR, both these quantities might be related. However, it is hard to disentangle them based on star formation activity and dust emission, since we do not know much about the infrared properties of such galaxies.
§.§ Dust attenuation
Figure <ref> shows the V-band dust attenuation (A_V) of the sample with respect to the stellar mass surface density. The majority of our sample (∼60%) have a low attenuation with A_V < 0.1 mag. For the highest Σ_star sources, which are well constrained with small uncertainties, we observe a higher attenuation, but with a large scatter. For the fainter galaxies, the attenuation steeply decreases to reach an almost negligible value close to zero. However, the uncertainties associated with the A_V estimates of many of these faint sources are typically large. For instance, the galaxies with Σ_star < 10^7 M_⊙ kpc^-2 and with A_V > 0.5 mag have an uncertainty in the A_V estimation of the order of 0.4 mag, making it hard to draw conclusions on them. Nevertheless, we still observe several faint galaxies with significant attenuation and small uncertainties. The 3σ outliers of the μ̅_e-Σ_star relation discussed in Sect. <ref> (38 galaxies) are among them and appear to be an interesting group in terms of attenuation. From Fig. <ref>, we see that about 60% of the outliers (23 out of 38 galaxies) have a large attenuation with A_V > 0.5 mag and a mean value of 0.8±0.2 mag. Moreover, several of these outliers also have detections in the IRAC bands, similar to the IRAC-detected LSBs from <cit.>. However, none of the outliers have any detection in the MIR or FIR range.
Following <cit.>, who derived a relation between attenuation and stellar mass, we performed an error-weighted fit to our data to find a similar relation between A_V and Σ_star for our sample, as given in Eq. <ref>:
A_V (mag) = 10^[(0.55±0.02) logΣ_ star - (4.82±0.15)].
Our best-fit relation also follows a trend where A_V < 0.1 mag for the faint galaxies until Σ_star ∼ 10^7 M_⊙ kpc^-2, beyond which we see a steep rise in A_V for the brighter galaxies, with a large scatter (note that the scatter shown in Fig. <ref> is in logarithmic scale). The HRS sample shows a similar trend in attenuation with the stellar mass surface density, although, in general, it has a larger A_V than our sample at the same Σ_star, consistent with the large scatter seen in this range. Note that only the late-type galaxies from the HRS sample have available A_V measurements (see Sect. <ref>). This explains the lack of high-Σ_star HRS galaxies with attenuation close to zero, as observed in our sample.
We also found that the steep rise of A_V in Fig. <ref> is largely driven by the galaxies at Σ_star > 10^8 M_⊙ kpc^-2, which are dominated by more massive HSBs (we do not have any LSBs beyond this value). So the trend in A_V we see here is also linked to its dependence on the stellar mass, which is well known <cit.>. However, in the range of Σ_star < 10^8 M_⊙ kpc^-2, we have a large overlap between the LSBs and HSBs, with stellar masses mostly in the range of 10^8-10^9 M_⊙. They are both consistent with low attenuation, except for the LSB outliers that remain a peculiar population with high attenuation.
§ DISCUSSIONS
§.§ Are low surface brightness galaxies dust-free?
The results given in Sect. <ref> show that the majority of the LSBs in our sample (μ̅_e > 23 mag arcsec^-2 or approximately Σ_star < 10^7 M_⊙ kpc^-2)[A stellar mass surface density of 10^7 M_⊙ kpc^-2 corresponds to an average r-band surface brightness (μ̅_e,r) of 23.2 mag arcsec^-2, based on Eq. <ref>.] have a very low amount of dust attenuation. Among the LSBs (1003 out of 1631 galaxies), about 80% have a negligible attenuation with A_V < 0.2 mag, and a median attenuation of ∼0.09 mag. This is consistent with the few other observations of LSBs from the literature, where extreme LSBs like the ultra-diffuse galaxies (UDGs) were found to have a very low attenuation with a median A_V of ∼0.1 mag <cit.>. However, <cit.> found a median A_V of 0.46 mag for their sample of LSBs from the SDSS survey. Such a higher attenuation in their LSB sample could be attributed to the fact that the LSBs from <cit.> were massive galaxies with a median stellar mass of 10^9.5 M_⊙, in comparison to our low-mass LSBs with a median stellar mass of 10^8.3 M_⊙. Moreover, the A_V values from <cit.> were computed from the Balmer decrement, without applying a correction for the differential attenuation of the nebular lines and the continuum as shown in Table <ref>. Applying such a correction would reduce their median A_V to ∼0.2 mag, which is close to the values we observe for our sample of LSBs.
Only about 4% of the LSBs in our sample (2.5% of the total sample) have a significant attenuation with A_V > 0.5 mag. These are the 3σ outliers from the μ̅_e-Σ_star relation, as shown in Fig. <ref> and Fig. <ref>. This could indicate that a fraction of low surface brightness galaxies with high stellar mass-to-light ratios (M/L_r > 3 M_⊙/L_⊙) have a higher attenuation.
We also looked into how much the attenuation affects the position of the outliers in the μ̅_e-Σ_star relation. For this purpose, we applied a correction to the observed surface brightness of the outliers using the estimated A_V values. We converted the V-band attenuation to the attenuation in the HSC r-band using the <cit.> attenuation law, before correcting the r-band surface brightness. Figure <ref> shows the change in the position of the outliers after the attenuation correction. All the outlier LSBs still remain LSBs, with μ̅_e > 23 mag arcsec^-2. However, we can see that about 50% of them move into the 3σ confidence range of the μ̅_e-Σ_star relation after the attenuation is corrected. This indicates that the effect of attenuation plays a significant role in making a fraction of LSBs appear fainter in the observations. However, attenuation alone cannot explain all the outliers in our μ̅_e-Σ_star relation.
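A minimal sketch of this correction is given below; the ratio of the attenuation curve at the r and V bands is an assumed placeholder value, not a quantity taken from this work, and should be derived from the adopted attenuation curve.

# Sketch of the attenuation correction: convert A_V to the r band and brighten
# the observed surface brightness accordingly (illustrative values only).
def correct_surface_brightness(mu_obs_r, a_v, k_r_over_k_v=0.87):
    """mu_obs_r : observed mean r-band surface brightness [mag/arcsec^2]
    a_v        : V-band attenuation [mag]
    k_r_over_k_v : assumed ratio of the attenuation curve at the r and V bands."""
    a_r = a_v * k_r_over_k_v       # attenuation in the r band [mag]
    return mu_obs_r - a_r          # dust makes galaxies fainter, so the
                                   # corrected profile is brighter

# e.g. a hypothetical outlier LSB with mu_obs = 24.0 and A_V = 0.8 mag:
# correct_surface_brightness(24.0, 0.8)  ->  ~23.3 mag/arcsec^2 (still an LSB)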
Giant low surface brightness galaxies (GLSBs) are another interesting and extreme class of objects among LSBs <cit.>. The infrared properties of three GLSBs (Malin 1, UGC 6614, and UGC 9024) were explored by <cit.> using Spitzer observations. All of them were undetected at MIR and FIR wavelengths allowing only to obtain upper limits in their infrared properties. Figure <ref> shows a comparison of their stellar mass surface density[The stellar masses and sizes of the GLSBs were taken from <cit.> and <cit.>, respectively, to estimate their stellar mass surface densities.] and surface brightness[The values of the GLSBs were estimated by using the B-band central surface brightness (μ_0, B) values from <cit.>. The μ_0, B values were converted to the r-band assuming a constant Sérsic index n=1 <cit.>, and a constant B-r color of 0.6 mag.] as compared to our sample.
We can see that two out of the three GLSBs (Malin 1 and UGC 6614) are 3σ outliers from the μ̅_e-Σ_star relation. Moreover, their sSFR and sLIR are also well consistent with our sample (based on <cit.>, the three GLSBs have an sSFR of 10^-10.8, 10^-10.2 and 10^-10.4 yr^-1, and an sLIR of 10^-0.9, 10^-0.4 and 10^-0.5 L_⊙/M_⊙, respectively). Therefore, from Fig. <ref>, it is likely that these GLSBs also have significant dust attenuation, similar to the outliers we observe in our sample, although their previous infrared observations do not provide any estimate of attenuation.
Apart from the observational data on GLSBs, <cit.> provided some estimates of the dust attenuation of GLSBs from the EAGLE simulations (see their Fig. A2). They obtained an average A_V of 0.15 mag for their simulated GLSB sample, with values ranging from A_V = 0.4 mag for the brighter sources (∼23 mag arcsec^-2) to A_V = 0.05 mag for the faintest ones (∼26 mag arcsec^-2). Therefore, comparing our results with observations and simulations leads us to expect the presence of some detectable dust attenuation in GLSBs as well.
§.§ Possible caveats
The analyses presented in this work could be affected by several caveats. Firstly, since we attempt to study the optical as well as infrared properties of our sample (surface brightness, radius, stellar mass, SFR, total infrared luminosity, and dust attenuation), this requires extensive multi-wavelength data coverage in the UV to FIR range. However, as noted in Sect. <ref>, for ∼85% of our sample, only the deep 5σ upper limits can be provided in the MIR to FIR regime (from 7 μm to 500 μm wavelength range). In those cases, these detection limits are used to put constraints on the infrared emission of the SED. Such an approach can introduce significant uncertainties in the estimated infrared properties of our sample (especially in the L_IR and A_V). We performed several tests to quantify and minimize
the effect of such uncertainties on our results (see Sect. <ref> for more details on the robustness of the estimated parameters). Additionally, our sample selection with the requirement to have a u-band detection, is also aimed to minimize such uncertainties. The u-band, being close to the UV part of the spectrum, is more sensitive to the effects of dust attenuation and thereby the re-emission in the infrared.
Another potential uncertainty in our results can arise from a possible redshift dependence of the quantities. However, considering the very narrow range of redshift used in this work (z<0.1), and since we used redshift-independent quantities, we do not expect such an effect to have an impact. We verified that there are no significant variations in our sample with the redshift, and our results remain unchanged. However, the accuracy of the photometric redshift estimates used in this work from <cit.> can be yet another source of uncertainty. Considering the very faint nature of the majority of our sample, it is not feasible to obtain spectroscopic redshifts for all of them (all the galaxies with spectroscopic redshifts in our sample are HSBs, as shown in Table <ref>). Also, as discussed in Sect. <ref>, in general, the photometric redshift estimates we used have a higher accuracy and a lower catastrophic outlier rate. The presence of the u-band also significantly improves the photometric redshift estimates, as noted by <cit.>. Nevertheless, we made an estimate of the effect of the photometric redshift uncertainty on our measured physical quantities. For a typical redshift uncertainty of σ_z_ p=0.06 for our sample (as discussed in Sect. <ref>), we found that, on average, the stellar mass and R_e change by a relatively large amount (0.47 dex and 0.21 dex, respectively). However, since we compute Σ_star as the ratio of M_star and R_e^2 as given in Eq. <ref>, these changes cancel each other, leaving only a 0.04 dex difference in Σ_star with the change in redshift and making Σ_star an almost redshift-independent quantity. In the case of the sSFR, sLIR and A_V, we also see only a negligible difference (0.13 dex, 0.04 dex, and 0 dex, respectively).
Therefore, considering all the above potential caveats, we conclude that our estimates are still robust within the uncertainties discussed. The approach we used in this work will be useful for constraining the physical quantities of LSBs, especially with the upcoming surveys like LSST that will observe thousands of them in the ugrizy-bands, with only limited multi-wavelength counterparts.
§ CONCLUSIONS
We present an optically selected sample of 1631 galaxies at z<0.1 from the North Ecliptic Pole Wide field. We cross-matched this sample with several multi-wavelength sets of available data ranging from UV to FIR, and performed an SED fitting procedure to obtain key physical parameters such as the stellar mass, SFR, L_IR, and A_V. We also extracted radial surface brightness profiles for the sample and estimated their average optical surface brightness and sizes.
Our main results can be summarised as follows:
* Using the measured average r-band surface brightness (μ̅_e,r), our sample consists of 1003 low surface brightness galaxies (LSBs; μ̅_e,r > 23 mag arcsec^-2) and 628 high surface brightness galaxies (HSBs; μ̅_e,r ≤ 23 mag arcsec^-2).
* The LSBs have a median stellar mass, surface brightness, and effective radius of 10^8.3 M_⊙, 23.8 mag arcsec^-2 and 1.9 kpc, respectively. For the HSBs, the corresponding median values are 10^8.8 M_⊙, 22.2 mag arcsec^-2 and 2.2 kpc. Similarly, the LSBs have a median SFR and L_IR of 10^-2.2 M_⊙ yr^-1 and 10^7.4 L_⊙, in comparison to 10^-1.6 M_⊙ yr^-1 and 10^7.7 L_⊙ for the HSBs. For both the LSBs and HSBs, we found a median A_V of 0.1 mag.
* A comparison of the surface brightness (μ̅_e) as a function of the stellar mass surface density (Σ_star) showed that our sample follows a linear trend for the HSBs, which is consistent with the HRS sample from the literature. However, for the LSBs, we observe several outliers from the linear μ̅_e-Σ_star relation, indicating a higher mass-to-light ratio for them. Most of these outliers also have a high dust attenuation.
* We analyzed the variation of the specific star formation rate (sSFR) and specific infrared luminosity (sLIR) of our sample with respect to their stellar mass surface density. Among the star-forming galaxies (sSFR >10^-11 yr^-1), the sSFR is mostly flat with respect to the change in stellar mass surface density, but with a slight indication of an increase in sSFR for the lowest Σ_star galaxies. The sSFR steeply declines for the highest Σ_star sources, which are quiescent. A similar trend is observed for the sLIR too. We found that both the LSBs and HSBs in our sample have comparable average sSFR and sLIR. The HRS sample, in general, lies along the higher sSFR and sLIR regime compared to our sample, but the two are consistent within the scatter we observe.
* The change in dust attenuation (A_V) with the stellar mass surface density of our sample shows that galaxies with a higher Σ_star have a larger A_V and scatter, contrary to the flat/decreasing trend observed for the specific dust luminosity. The dust attenuation steeply declines and becomes close to zero for the majority of LSBs.
However, in about 4% of the LSBs, which are the outliers, we observe a significant attenuation with a mean A_V of 0.8 mag, showing that not all LSBs are dust poor. Moreover, the extreme giant LSBs in the literature also show some similarities to these outlier LSBs, indicating the presence of more dust in them than previously thought.
This work provides measurements that can be further tested using current/upcoming observations from LSST and JWST, where a large number LSBs and HSBs will be observed at unprecedented depth. LSST will provide deep optical imaging data over large areas of the sky, allowing for a detailed study of the statistical properties of galaxies, including LSBs. On the other hand, JWST's high sensitivity and resolution imaging in the near-infrared (NIR) and mid-infrared (MIR) regimes, as well its spectroscopic capabilities, will enable a comprehensive study of the infrared properties of such galaxies, including their dust content, gas metallicity and star formation activity. The data from these facilities will complement this work to provide a clear picture of the properties of LSBs in the context of galaxy evolution.
J and KM are grateful for support from the Polish National Science Centre via grant UMO-2018/30/E/ST9/00082. W.J.P. has been supported by the Polish National Science Center project UMO-2020/37/B/ST9/00466 and by the Foundation for Polish Science (FNP). J.K. acknowledges support from NSF through grants AST-1812847 and AST-2006600. M.R. acknowledges support from the Narodowe Centrum Nauki (UMO-2020/38/E/ST9/00077). M.B. gratefully acknowledges support by the ANID BASAL project FB210003 and from the FONDECYT regular grant 1211000. D.D acknowledges support from the National Science Center (NCN) grant SONATA (UMO-2020/39/D/ST9/00720). D.D also acknowledges support from the SISSA visiting research programme. A.P. acknowledges the Polish National Science Centre grant UMO-2018/30/M/ST9/00757 and the Polish Ministry of Science and Higher Education grant DIR/WK/2018/12.
§ SED FITTING ROBUSTNESS
§.§ Mock analysis
§.§ Comparison of fits with and without FIR data
§ DATA TABLE WITH THE PHYSICAL PROPERTIES OF THE SAMPLE
|
http://arxiv.org/abs/2306.01417v1
|
20230602100712
|
The Flawed Foundations of Fair Machine Learning
|
[
"Robert Lee Poe",
"Soumia Zohra El Mestari"
] |
cs.CY
|
[
"cs.CY",
"cs.LG"
] |
The Flawed Foundations of Fair Machine Learning
[1,2]Robert Lee [email protected]
2,3]Soumia Zohra El [email protected]
*[1]LIDER Laboratory, Sant'Anna School of Advanced Studies, Via Santa Cecilia 3, Pisa, 56127, Italy
[2]Interdisciplinary Center for Security, Reliability and Trust, University of Luxembourg, 6 avenue de la Fonte, Esch-sur-Alzette, L-4364, Luxembourg
The definition and implementation of fairness in automated decisions has been extensively studied by the research community. Yet, there hides fallacious reasoning, misleading assertions, and questionable practices at the foundations of the current fair machine learning paradigm. Those flaws are the result of a failure to understand that the trade-off between statistically accurate outcomes and group similar outcomes exists as independent, external constraint rather than as a subjective manifestation as has been commonly argued. First, we explain that there is only one conception of fairness present in the fair machine learning literature: group similarity of outcomes based on a sensitive attribute where the similarity benefits an underprivileged group. Second, we show that there is, in fact, a trade-off between statistically accurate outcomes and group similar outcomes in any data setting where group disparities exist, and that the trade-off presents an existential threat to the equitable, fair machine learning approach. Third, we introduce a proof-of-concept evaluation to aid researchers and designers in understanding the relationship between statistically accurate outcomes and group similar outcomes. Finally, suggestions for future work aimed at data scientists, legal scholars, and data ethicists that utilize the conceptual and experimental framework described throughout this article are provided.[This article is a preprint submitted to the Minds and Machines Special Issue on the (Un)fairness of AI on May 31st, 2023.]
§ INTRODUCTION
Automated decision-making systems are increasingly being used to render high-impact decisions regarding human beings. All the while, notorious accounts of algorithmic discrimination and algorithmic unfairness have been reported by news outlets over the past decade. Due to the many concerns about the potential societal impacts of machine learning, governments are beginning to put forward policy positions and draft regulations. In the AI Bill of Rights, the White House states that automated decisions should be designed and deployed to achieve equitable outcomes. In Europe, the AI Act states that automated decisions should not perpetuate historic patterns of discrimination or create new forms of disparate impact. Right now, policy-makers and regulators are relying heavily on the fair machine learning community to present solutions. However, a unipolar conception of fairness is being represented and advocated for by the fair machine learning community, which does not reflect the same breadth of opinion that exists in wider society.[Problems related to the fair distribution of finite resources are continually addressed in politics, law, and culture with varying views about how best to address those problems.]
To a limited extent, researchers in the field have understood that they had not happened upon an empty field (of research) but instead a garden that has been fostered, cared for, and in some cases ignored for a very long time. Perspectives from many domains have been incorporated into the literature: legal doctrines like disparate impact <cit.>, fair distribution philosophies dealing with egalitarianism and merit <cit.>, socio-technical critiques of technological solutionism <cit.>, and concepts from feminist communications and data science like the myth of objectivity and meritocracy <cit.>. Due to the multidisciplinary nature of the field, a sufficient framework for understanding the technical limitations and implications of machine learning combined with the goals and implications of “fairness” in between and within related disciplines has yet to be fully realized. This is partly due to the fact that most articles published from the machine learning community are difficult for those in the humanities or legal communities to understand and vice versa. Thus researchers at the crossroads of fairness, discrimination, and machine learning are to some degree speaking past one another. Throughout the entire article, each audience is kept in mind and essential terminology is defined.
In section <ref>, we argue that the common categorization of fairness in the literature misleadingly implies that there are three competing conceptions of fairness, when in fact there is one: fairness defined as the similarity in outcomes between groups when that similarity acts to the benefit of underprivileged groups. In section <ref>, we argue that the trade-off between statistically accurate outcomes and group similar outcomes is an independent external constraint on fair machine learning, rather than a mere framing problem as is commonly argued. We also address some misconceptions about the implications of that trade-off found in the literature. In section <ref>, we carefully address a specific example, the highly influential view proposed by Friedler et al. in <cit.>. Finally, in section <ref>, we introduce a proof-of-concept evaluation to aid researchers and designers in understanding the relationship between statistically accurate outcomes and group similar outcomes.
§ FAIR MACHINE LEARNING MEANS GROUP SIMILARITY IN OUTCOMES
In the fairness literature, metrics are commonly split into three categories: group fairness, causal fairness, and individual fairness. This categorization misleadingly implies that there are three distinguishable conceptions of fairness at play in the fair machine learning literature, when in reality there is but one conception of fairness technically defined in metrics: fairness defined as group similarity in outcomes based on a given sensitive attribute. The mantra of group fairness is that groups should be treated equally or at least similarly. The mantra of group fairness is, however, misleading because "fairness" defined as group similarity ensures the similarity of outcomes, not solely the similarity of treatment.
Causal fairness shares the same conception of fairness as group fairness and is only different in so far as a different set of techniques are used to achieve that goal <cit.>. The same is true of individual fairness, even though individual fairness has incessantly been offered as an opposing conception of fairness to group fairness definitions with its own mantra of similar individuals should be treated similarly. The maxim of similar treatment that individual fairness embodies is an Aristotelian principle of consistency <cit.>. The individual fairness definition states that there should be consistency between the relevant features of two different persons and their respective outcomes in comparison to one another. More specifically, the similarity between the features of two individuals (measured as a distance) should be preserved between their respective labels.[Labels or target variables refer to the variable to be predicted by the machine learning algorithm.]
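As a concrete illustration, assuming a Euclidean feature-space distance and a Lipschitz constant of 1 (both of which are substantive choices, not given by the definition itself), a consistency check of this kind can be sketched as follows:

# Illustrative check of the consistency principle behind individual fairness:
# similar individuals (small feature-space distance) should receive similar
# outcomes (small outcome-space distance). The distances are assumptions here.
import numpy as np

def consistency_violations(X, scores, L=1.0):
    """Return index pairs (i, j) for which |score_i - score_j| exceeds L times
    the Euclidean distance between x_i and x_j (a Lipschitz-style condition)."""
    violations = []
    n = len(scores)
    for i in range(n):
        for j in range(i + 1, n):
            d_x = np.linalg.norm(X[i] - X[j])     # task-relevant feature distance
            d_y = abs(scores[i] - scores[j])      # distance between outcomes
            if d_y > L * d_x:
                violations.append((i, j))
    return violations

# Example with three hypothetical applicants and a scoring model's outputs:
X = np.array([[0.9, 0.1], [0.9, 0.1], [0.2, 0.8]])
scores = np.array([0.8, 0.3, 0.4])
print(consistency_violations(X, scores))   # the first pair is flagged: identical
                                           # features, very different scores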
Note that the principle of consistency, defined as distance between spaces, could be used to detect whether there exists inconsistencies between the relevant features and ground truth of the sample, as well as when there exists inconsistencies between the sample and the outcomes. For instance, the inconsistencies could be seen as an indicator of unreliable data collection processes where data was incorrectly reported, or that the data sample is missing a set of uncollected features that could explain the current inconsistency. We emphasize, however, that the individual fairness metric itself is not concerned with determining the representativeness of the sample nor with determining how well the outcomes generalize to a target population.
Individual fairness defines fairness as a comparison of geometric distances. Once distance is defined, individuals can be compared and inconsistencies (unfairness) can be rectified. However, the distance must be defined, and defining a distance presupposes prior knowledge about “fairness." In other words, the principle of consistency is empty <cit.>, and so requires a substantive notion of fairness to define what makes similar cases similar (i.e. the distance). Thus, there is a circularity in the proposition that individual fairness is a definition of fairness. It may be that the principle of consistency is a necessary requirement for fairness to be achieved, but consistency or similarity alone is not sufficient to constitute an independent notion of fairness <cit.>. Some advocates of the individual fairness approach argue that substantive notions of fairness need be defined by domain experts <cit.>, others argue that the distance can be learned <cit.>, while still others argue that the group fairness metrics should fill the void <cit.>. In any case, individual fairness should be understood as a tool to implement fairness once defined, rather than as a conception of fairness in and of itself.
Thus, there is but one conception of fairness technically defined in fairness metrics: group similarity in outcomes based on a given sensitive attribute. To build machine learning models that produce outcomes that are group similar, the first step is to define a measure or metric that reflects a notion of acceptable group dissimilarity. There exists a “zoo” of these metrics <cit.> that define the acceptability of group dissimilarity differently using notions of statistical independence, sufficiency, and separation. A number of surveys and reviews on the taxonomy of metrics and interventions have been published <cit.>.
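For readers outside the machine learning community, two representative members of this zoo are sketched below on hypothetical predictions; both measure group dissimilarity in outcomes, differing only in whether they condition on the ground-truth label.

# Minimal sketch of two metrics from the "zoo": demographic parity difference
# (an independence criterion) and equal-opportunity difference (a separation
# criterion). Group labels, predictions, and ground truth are hypothetical.
import numpy as np

def demographic_parity_diff(y_pred, group):
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)          # 0 means identical positive rates

def equal_opportunity_diff(y_true, y_pred, group):
    tprs = [y_pred[(group == g) & (y_true == 1)].mean() for g in np.unique(group)]
    return max(tprs) - min(tprs)            # 0 means identical true-positive rates

# Example: a classifier whose outcomes favour group "a" over group "b".
group  = np.array(["a"] * 5 + ["b"] * 5)
y_true = np.array([1, 1, 0, 0, 1, 1, 1, 0, 0, 1])
y_pred = np.array([1, 1, 1, 0, 1, 1, 0, 0, 0, 0])
print(demographic_parity_diff(y_pred, group))         # 0.8 - 0.2 = 0.6
print(equal_opportunity_diff(y_true, y_pred, group))  # 1.0 - 1/3 ~= 0.67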
Once a metric of acceptable group dissimilarity is chosen, one of the three following strategies can be adopted: (1) pre-processing the input data to remove, alter, or curate the underlying data that lead to group dissimilarities <cit.>, (2) in-processing where the model is constrained to produce group similar outcomes by modifying the learning algorithm's objective functions <cit.>;
and/or (3) post-processing the output of the model, rather than changing anything about the sample or hypothesis assumptions, by using an algorithm based on a function that detects potential group dissimilarities and adjusts the labels accordingly <cit.>.
As many have observed, requiring similarity of outcomes can either act in the benefit or detriment of underprivileged groups depending on the decision context <cit.>—where privileged and underprivileged groups are identified by a given legal or moral code. To achieve an equitable outcome, rather than a merely equal or similar one, researchers argue that the choice of whether or not to use fair machine learning or of which metric defining the acceptability of group dissimilarity is to be used, must be context specific so that similarity in outcomes only act in the detriment of privileged groups and in the favor of underprivileged groups. Justifications for such a definition of fairness in machine learning are inspired by (1) legal doctrines like disparate impact and affirmative action in the United States or indirect discrimination and positive action in the European Union <cit.>, (2) critical race and gender studies which find disparities that disfavor underprivileged groups to constitute de facto discrimination <cit.>, (3) or distribution philosophies based on moral arguments for the acceptance of equity-based systems and the rejection of merit-based systems <cit.> (or a combination of these justifications <cit.>). In the end, fairness in machine learning generated decisions is defined as similarity in outcomes between groups when that similarity acts in the benefit of underprivileged groups to the detriment of privileged groups.
§ THE TRADE-OFF BETWEEN STATISTICALLY ACCURATE OUTCOMES AND GROUP SIMILAR OUTCOMES
Statistically accurate outcomes are the result of a robust model trained on a representative sample. Group dissimilarity in outcomes can be the result of (1) an unrepresentative sample and/or non-generalizable assumptions (inaccuracies), (2) group dissimilarities existing in the target population that are reflected in a representative sample and carried into the outcomes by a robust model (accuracies), or (3) unrepresentative sampling and/or hypothesis formulations that add additional group dissimilarity beyond the group dissimilarity already present in the target population (both). The goal of traditional machine learning is to produce statistically accurate outcomes. The traditional machine learning approach encompasses a number of techniques and practices to remove inaccuracies including those that would produce more group dissimilarity than that which is present in a target population. A target population can have imbalanced group sizes, contain anomalies like the Simpson's paradox, have sub-group validation problems, and so on.[A sub group validity problem occurs when a particular observable characteristic is valid for some groups but not for others.] In other words, differences between groups in the target population can result in group dissimilar outcomes. Wrongheadedly, many fall into the trap of thinking that dissimilarities between groups in outcomes must be the result of inaccuracies. Statistically inaccurate outcomes can certainly either exaggerate or underestimate the disparities which exist in a target population, but the solution to statistically inaccurate outcomes is to create a more representative sample and/or robust model.
Researchers should confront the fact that the goal of fair machine learning is not to increase the representativeness of a data sample or the robustness of a model. In fact the goal is the opposite. Decisions made on a representative sample have the potential to reflect the target population in the model outcomes, and those outcomes would have the same disparities between groups that exist in the target population. In other words, if the relevant (for the task) base-rate parameters of the target population result in a demographic parity of 0.4, that demographic parity would be the limit of fairness in a model that generalizes to the target population. The realization that there may be differences between groups in the world is a painful one. However, to reject that there exists differences between groups is to reject the existence of “unfairness.” In other words, if there were no differences between groups in the target population, there would be no differences in outcomes for the equitable, fair machine learning approach to address.
To illustrate how difficult it has been for the fair machine learning community to internalize this point, reflect for a moment on the difference in the use of the word “bias" between traditional machine learning and fair machine learning. Traditionally, bias is defined as a deviation from the true value of a parameter or variable. In fair machine learning, bias is defined as a deviation from group similarity <cit.>. When the true value of a parameter leads to group dissimilarity in outcomes, the true value is dubbed biased. When the word “bias" in the sense of deviation from group similarity (normative) is used to reject a true value of a parameter in statistics, the is-ought fallacy is often committed. The is-ought fallacy occurs when one reaches a normative conclusion based solely on descriptive (factual) premises. In this case, by rejecting the true statistical value based on normative concerns, the descriptive aspects of statistics are being mixed with the prescriptive aspects of morality. This terminological mix-up often results in the fallacy of equivocation, which occurs when a keyword of an argument is used with more than one meaning, leading to misinterpretation. Disguising normative judgements in the language of statistics only creates confusion.
While philosophers might best understand the thrust of the argument through examples of the is-ought fallacy, jurists might best understand by comparing the Separation Thesis found in legal positivism to the differentiation made here <cit.>. The separation thesis insists on the separation between (1) what the law is and (2) what the law ought to be. Here, there is a separation between what is (accuracy) and what ought to be (fair). In other words, the traditional machine learning approach has been a descriptive effort and the fair machine learning approach has been a prescriptive effort.
Data scientists can only ever follow best practices (under epistemological limitations) when sampling from the target population, including ensuring a sample with a relatively small standard deviation, adequate size, random sampling, and so on. Following best practices will, admittedly, never constitute a proof of representativeness. However, the lack of proof does not mean that a data scientist is unfamiliar with the target population or that they are unable to estimate group disparity. The reader may be tempted, in response to the arguments made throughout this section, to doubt whether awareness of group dissimilarity in a target population is practically possible. Note though, if awareness of group dissimilarity is not practically possible, then there would be no evidence of group dissimilarity—which many researchers in the field use as a proxy for de facto discrimination—and so no justification for the disparate impact legal doctrine.
The trade-off between statistically accurate outcomes and group similar outcomes is obvious. The logical conclusion of the trade-off is also obvious: where there exists the greatest need for the equitable, fair machine learning approach (i.e. data settings that contain large group disparities), machine learning itself is most useless.[Where "need" is defined by a normative judgement.] As the connection between the model outcomes and the target population becomes more tenuous to become less dissimilar amongst sub-groups, the use of statistical learning becomes harder to justify. In other words, the more the outcome is already known (manually coded), the less need there is for a data driven approach. A script or quota could fulfill the same purpose. Thus, the trade-off presents an existential threat to the field.
The response to the threat of the trade-off between statistically accurate outcomes and group similar outcomes has been to deny its existence by arguing that the trade-off is a subjective manifestation rather than an independently existing constraint <cit.>. For instance, authors in <cit.> argue that the conflict between “accuracy" and “fairness" is the result of framing the trade-off as an optimization problem. Their argument rests on a causal fallacy. Recognizing the “inherent conflict" between statically accurate outcomes and group similar outcomes in a data setting which contains group disparities and then optimizing between those competing interests cannot be the cause of differences between subgroups of a target population that exist independently in that data setting. The realization that there are group disparities in a target population is the realization that there are differences between groups. The question of what causes differences between groups is beyond the scope of this article. However, to whatever extent subgroups of a target population are different, there will be differences in group outcomes no matter how they are measured. Changing the framing or values that guide an automated procedure (for instance by changing the definition or measurement of economic success) would simply lead to a different group skew in outcomes. Disparities in outcomes are inherent external constraints on any categorization or classification of a world filled with group disparities.[Let us not forget that grouping based on sensitive attributes defined by the protected classes in non-discrimination law are not the only grouping that could be argued morally irrelevant.] The only way to remove disparities in outcomes between groups is to not recognize the differences between groups in the target population, and thus between individuals as well (i.e. to alter the data sample or its processing to reflect a false world.)
It is true that the trade-off is most commonly formulated as an optimization problem. For example, the lower bound of this trade-off has been estimated via proof <cit.>. The authors in <cit.> have proven that, in the case of a binary classifier, it is asymptotically possible to maximize both accuracy and fairness simultaneously only if the sensitive attribute and the target label are perfectly independent. At the other extreme, if the sensitive attribute is highly correlated with the target variable, then it is only possible to maximize either accuracy or fairness, not both. In between those two extremes, the trade-off is determined by the strength of the correlation between the target and the sensitive attribute. As the <cit.> proof states, if the sensitive attributes and the target variable are perfectly independent of one another, then the more generalizable the model is, the more group parity will be present.
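To make the dependence of this trade-off on the strength of that correlation concrete, consider the following toy calculation (our own illustrative sketch in Python, not the construction used in the cited proofs), in which the only feature available to the model is the sensitive attribute itself:

# Toy illustration (own sketch): accuracy cost of exact statistical parity
# when the only feature is the sensitive attribute G, for two equal-sized groups.
import numpy as np

def best_accuracies(p0, p1):
    """p0, p1: P(Y=1 | G=0) and P(Y=1 | G=1)."""
    # Unconstrained Bayes classifier predicts the majority label per group.
    acc_unconstrained = 0.5 * (max(p0, 1 - p0) + max(p1, 1 - p1))
    # Under exact statistical parity the prediction rate must be identical
    # across groups; with G as the only feature the best such rule predicts
    # a single label for everyone.
    p_bar = 0.5 * (p0 + p1)
    acc_parity = max(p_bar, 1 - p_bar)
    return acc_unconstrained, acc_parity

for gap in [0.0, 0.2, 0.4, 0.6, 0.8]:
    p0, p1 = 0.5 - gap / 2, 0.5 + gap / 2   # group disparity grows with `gap`
    acc_u, acc_p = best_accuracies(p0, p1)
    print(f"P(Y=1|G) gap={gap:.1f}: unconstrained={acc_u:.2f}, parity-constrained={acc_p:.2f}")

For independent G and Y (gap 0) the two optima coincide; as the dependence grows, the best parity-constrained rule falls further behind the unconstrained one.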
Some authors use this fact to argue that accuracy and fairness are complementary <cit.>; even going so far as to state that the “fairness-accuracy trade-off formulation also forecloses the very reasonable possibility that accuracy is generally in accord with fairness" <cit.>. Statistically accurate outcomes and group similar outcomes are not generally in accord. While it is true that under certain conditions statistically accurate outcomes and group similar outcomes are complementary, the reliance on that truth to minimize the importance of the trade-off is highly misleading. Statistically accurate outcomes and group similar outcomes can only be complementary in a data setting which is strictly group similar (necessarily defined as group parity in the context of perfect independence). If the data setting is already strictly group similar, there is no need for the fair machine learning approach. Fair machine learning is only required in those instances where unacceptable group dissimilarity exists, and in those instances, statistically accurate outcomes and group similar outcomes will never be complementary (i.e. the sensitive attributes and the target variable will be correlated).
Others observe that, in practice, constraining outcomes to meet an acceptable notion of group dissimilarity can sometimes increase accuracy <cit.>. Again, the observation is correct but can lead to misunderstandings. When the use of a fairness constraint increases the accuracy, either the sensitive feature and target variable are independent (and so see the above argument) or the data sample was so unrepresentative that enforcing group similarity increased the accuracy by happenstance. And, that increase in accuracy by happenstance could never go beyond the group similarity present in the target population without decreasing the generalizability of the model.
Authors in <cit.> suggest that horizontal data collection can alleviate the trade-off.[Horizontal data collection is the collection of more features.] There, the authors are falling into the trap of thinking that group dissimilarity in outcomes is the result of inaccuracies; in this case,
an incomplete feature selection process in which not all of the task-relevant features are collected. Other researchers suggest that the trade-off can be alleviated by the targeted, vertical collection of data, where the model is trained on an enlarged dataset that serves the objective of group parity <cit.>. This will indeed improve group similarity. However, it will also result in a sample of data that is far from being representative of the real-world distribution. Moreover, calculating the accuracy on an unrepresentative test set is misleading since the generalizability of the model when deployed may be compromised by this practice. In other words, these practices curate a data sample that satisfies group similarity, and then test the model on a test set which has also been curated. Furthermore, if the purpose of collecting data is not to use the target population as an external constraint, but instead to satisfy a notion of acceptable group dissimilarity, then there is no need to waste time and money collecting more data—simply use the existing processing techniques listed in section <ref> to achieve the same result.
Throughout this section, we have highlighted fallacies, misleading assertions, and questionable practices that result from the failure to understand the relationship between statistically accurate outcomes and group similar outcomes, and we've explained that a failure to understand that relationship often leads researchers to insist that group dissimilarities must be the result of inaccuracies. In the following subsection, we will, to the best of our ability, faithfully describe one of the most highly influential examples of this phenomenon, the model of automated decision-making procedures presented by Friedler et al. in <cit.>. We believe their view requires careful attention.
§.§ Critiques of the Model of Friedler et al.
The model of Friedler et al. is based on two distinctions <cit.>. First, they make a distinction between the feature space, which contains information about people, and the decision space, which contains the decisions made about people based on the feature space. In other words, a decision making procedure takes inputs from the feature space and returns outputs in the decision space. Second, they make a distinction between the construct space and the observed space. The construct space consists of idealized representations about people and the decisions regarding them; whereas the observed space contains only the kind of information that is observable and the decisions that are the result of those observations. These two distinctions allow for the introduction of four combination spaces they call: (1) the construct feature space which contains feature constructs like intelligence or frugality (CFS); (2) the observed feature space which contains the observable or measurable features that are used as proxies for the construct features like IQ or debt-to-credit ratio (OFS); (3) the construct decision space which contains decision constructs like college success or creditworthiness (CDS); and (4) the observed decision space which contains decision proxies like college GPA or loan default (ODS).
The observed spaces are knowable, and the construct spaces are unknowable <cit.>. More specifically, the distances between individuals and groups in the observable spaces are knowable and in the construct spaces unknowable. According to Friedler et al., two worldviews emerge from the picture once construct spaces are introduced: (1) the What You See Is What You Get (WYSIWYG) worldview and (2) the Structural Bias worldview. The WYSIWYG worldview assumes that the observed features and decisions are essentially the same as the construct features and decisions; or, in other words, that the observations approximate the constructs (loan default approximates creditworthiness). The Structural Bias (SB) worldview, by contrast, wishes to find and define “structural bias” and “non-discrimination” in the transformations between these two spaces. More specifically, they wish to find group skew, defined as more distortion between groups than there is within groups in the transformations between an “unobservable” space and an observable space <cit.>. “[T]o address this complication . . . assumptions must be introduced about the points in the construct space, or the mapping between the construct space and observed space, or both” [emphasis added] <cit.>. “Structural bias” is defined as group skew between the unknowable construct and knowable observed spaces in general <cit.>, and “non-discrimination” is specifically defined as an acceptable amount of group skew between the unknowable construct spaces and the knowable observed decision space <cit.>.
Set aside, for a moment, these unknowable construct spaces and focus on their definition of “direct discrimination.” Under the SB worldview, direct discrimination is defined as the existence of group skew between the knowable, observed feature and decision spaces <cit.>, which means that direct discrimination can only be the result of a model that produces outcomes that are not representative of its sample. In other words, direct discrimination is the result of statistically inaccurate outcomes rather than the result of group disparities present in a representative sample (i.e. a problem the traditional machine learning approach can address).
Note that in all the definitions listed above, group skew is the quotient of “between group distance” and “within group distance”. In other words, to say that there exists group skew is to say that groups are different.
Again, structural bias and non-discrimination are the result of group skew in the transformations between the construct and observed spaces. Unlike direct discrimination, group skew in both of these cases is based on unknowable, assumed mappings. It is in this way, according to Friedler et al., that observational processes of an automated decision procedure cause group skew <cit.>. We wish to temper their assertion: if (1) the assumption that a construct space must exist and (2) unknowable knowledge about that space is granted, then the conclusion that observational processes cause group skew can be reached. At first glance, it may appear that the statement that observational processes cause group skew is supporting the “framing” argument rejected previously; however, the argument there was that the framing of the relationship between accuracy and fairness as an optimization problem is the cause of the trade-off between the two.
There is also a difference between: (1) group skew that is the result of more distortion between groups than there is within groups in the transformations between construct and observed spaces and (2) group dissimilarity that is the result of group disparities in a representative sample. In other words, group skew that results from observational approximations of constructs does not cause group dissimilarity in outcomes generally but only ever additionally in a data setting that contains group disparities (i.e., those settings where fair machine learning is required). Group skew defined in the SB worldview is addressed, not by producing group similar outcomes in an “unfair” data setting, but instead by selecting observations which accurately (or “correctly”) approximate constructs. However, the approximation is subject to unknowable information and rests on the assumption that decision-makers only ever want to satisfy abstract standards like creditworthiness or academic success rather than specific standards like loan default or college GPA. It is in no way clear to us that the introduction of unknowable construct spaces with definitions that mimic legal terminology produce anything more than confusion.
Regardless, the definitions of structural bias and non-discrimination, developed by Friedler et al., are irrelevant for the equitable, fair machine learning approach. Here is yet another instance of researchers falling into the trap of thinking that group dissimilarity has to be the result of inaccuracies, this time ethereal inaccuracies. As stated previously, the goal of fair machine learning approach is not to produce statistically accurate outcomes nor ethereally “correct” outcomes, but instead to produce equitable, group similar outcomes in data settings that would otherwise produce inequities. In reality, their model does not address issues of equity in automated decision-making systems.
§ EVALUATION OF THE TRADE-OFF
This section provides a numerical proof-of-concept evaluation of the trade-off between statistically accurate outcomes and group similar outcomes. First, the used data samples and pre-processing techniques will be described and then the results will be discussed.
§.§ Experimental Setup
In order to study this trade-off, we synthetically generate three datasets with group disparities, in which the probability of receiving a favourable outcome is not equal across groups. Each dataset is composed of three variables: group (G), a dependent, sensitive, binary attribute; value (V), the insensitive variable; and outcome (Y), a binary label or target taking the values 0 and 1 for the unfavourable and favourable outcomes respectively.
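A minimal sketch of how such a sample can be generated is given below (our own illustration; the group proportions, base rates and the dependence of V on G are assumed values, not the exact parameters of the datasets used in the experiment):

# Minimal sketch (illustrative parameters): a synthetic sample with variables
# G (sensitive, binary), V (insensitive) and Y (binary outcome), where the
# probability of a favourable outcome Y=1 differs between the two groups.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

G = rng.binomial(1, 0.3, size=n)                  # 30% of individuals in group 1
V = rng.normal(loc=0.5 * G, scale=1.0, size=n)    # V mildly shifted by group
p_fav = np.where(G == 1, 0.35, 0.65)              # unequal favourable-outcome rates
Y = rng.binomial(1, p_fav)

print("P(Y=1 | G=0) =", Y[G == 0].mean())
print("P(Y=1 | G=1) =", Y[G == 1].mean())
print("disparate impact  P(Y=1|G=1)/P(Y=1|G=0) =", Y[G == 1].mean() / Y[G == 0].mean())
print("statistical parity difference =", Y[G == 1].mean() - Y[G == 0].mean())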
Table <ref> describes the characteristics of each dataset. We use three group fairness metrics, namely disparate impact, equalized odds, and statistical parity. In our experiment, we used pre-processing techniques to remove group skew, and we show how these techniques affect the statistical accuracy of outcomes in a data sample where the skew is large; the resulting distributions turn out to be very different from the original distribution of the data. Group skew is computed as the ratio of the between-group variation to the within-group variation:
GS = σ_B^2/σ_W^2
where:
σ_B^2 = ∑_i=1^N_g N_i (μ_i - X̄)^2
σ_W^2 = ∑_i=1^N_g∑_j=1^N_i (x_ij - μ_i)^2
where GS denotes group skew, σ_B^2 is the between-group variation and σ_W^2 is the within-group variation. N_g is the total number of groups and N_i is the number of observations in group i.
Between-group variation σ_B^2 is the total variation between each group mean μ_i and the overall mean X̄.
Within-group variation σ_W^2 is the total variation of the individual values x_ij in each group around their group mean μ_i.
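A direct implementation of this statistic, applied to a generic numeric column split by group, could look as follows (our own sketch of the formula above):

# Sketch of the group-skew statistic GS = (between-group variation) / (within-group variation),
# with total (unnormalised) variations as described above.
import numpy as np

def group_skew(x, g):
    """x: 1-D array of values; g: array of group labels of the same length."""
    x, g = np.asarray(x, dtype=float), np.asarray(g)
    grand_mean = x.mean()
    between, within = 0.0, 0.0
    for label in np.unique(g):
        xi = x[g == label]
        between += len(xi) * (xi.mean() - grand_mean) ** 2   # N_i (mu_i - X̄)^2
        within += ((xi - xi.mean()) ** 2).sum()              # Σ_j (x_ij - mu_i)^2
    return between / within

rng = np.random.default_rng(1)
g = rng.binomial(1, 0.5, 2000)
x = rng.normal(loc=1.0 * g, scale=1.0, size=2000)   # group 1 shifted upwards
print("group skew:", group_skew(x, g))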
The Disparate Impact Remover:
to remove disparate impact (denoted DIR(repair level)) we use the pre-processing algorithm proposed by Feldman et al. <cit.> with three different repair levels, 0.3, 0.5 and 1.0, to control the degree of overlap between the distributions of the two groups; results are in Table <ref>. A simplified sketch of this repair is given after this list.
The Equalised Odds Remover: to remove equalised-odds violations we experimented with three different algorithms, namely (1) the reweighing algorithm <cit.>, (2) FairBalance <cit.> and (3) FairBalanceVariant <cit.>; results are in Table <ref>.
The Statistical Disparity Remover: to remove statistical disparity from the data we used the algorithm proposed by Zemel et al. <cit.>; results can be found in Table <ref>.
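The following is a simplified, univariate sketch of the rank-based repair underlying the Disparate Impact Remover, written for illustration only; it is not the reference implementation of the algorithm of Feldman et al. and ignores, for instance, multivariate features and ties:

# Simplified, univariate sketch of a Feldman-et-al.-style "disparate impact remover":
# each value is moved towards the quantile of a common (average) distribution that
# corresponds to its rank within its own group. `repair_level` in [0, 1] plays the
# role of the repair levels 0.3, 0.5 and 1.0 used in the experiment.
import numpy as np

def repair(values, groups, repair_level=1.0):
    values, groups = np.asarray(values, dtype=float), np.asarray(groups)
    labels = np.unique(groups)
    # Within-group rank (empirical quantile) of every observation.
    quantiles = np.empty_like(values)
    for label in labels:
        idx = np.where(groups == label)[0]
        ranks = values[idx].argsort().argsort()
        quantiles[idx] = (ranks + 0.5) / len(idx)
    # Common target distribution: average of the groups' quantile functions.
    def target(q):
        return np.mean([np.quantile(values[groups == label], q) for label in labels], axis=0)
    repaired = target(quantiles)
    return (1.0 - repair_level) * values + repair_level * repaired

rng = np.random.default_rng(2)
g = rng.binomial(1, 0.5, 5000)
v = rng.normal(loc=1.5 * g, scale=1.0, size=5000)    # strong group shift
for level in (0.3, 0.5, 1.0):
    v_rep = repair(v, g, level)
    print(f"repair {level}: group means ->", round(v_rep[g == 0].mean(), 2), round(v_rep[g == 1].mean(), 2))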
§.§ Discussion
The Disparate Impact Remover (see Table <ref>) with a repair level of 1.0 achieved the lowest group skew on the three data samples by ensuring a complete overlap between the groups. A positive point of these techniques is that they preserve the ranking between groups, which ensures less distortion in the transformed versions of the data (see Figure <ref>). However, this technique also results in the largest difference in group skew between the transformed sample and the original, which suggests that the outcomes will not be statistically accurate (see Section <ref>), assuming that the original sample is representative.
For the equalised odds removers, we find that the FairBalance algorithm and its variant <cit.> achieved a low group skew on all three datasets. For this set of techniques we observe that they drastically change the shape of the resulting distribution (see Figure <ref>), except for dataset 3: the transformed D3 is less distorted than the transformed versions of D1 and D2. D3 contained an initial group skew, but the sample was perfectly balanced in the sense that the groups are of equal size and each group has the same probability of receiving a favourable outcome. This indicates that the equalised odds removers are sensitive to group imbalance: the more imbalanced the groups, the larger the distortion in the transformed dataset.
The Statistical Disparity Remover <cit.> performed worst among all techniques in decreasing the group skew, except on data sample D3, where it performed better than all the equalised odds removers.
The distortions observed in the transformed samples produced by the pre-processing techniques in Figures <ref>, <ref> and <ref> illustrate how far a fairness-guided learning process can depart from the original reality. Models trained and tested on those transformed samples may suffer from a lack of generalizability, and their test results are then not reliable.
§ CONCLUSION
First, we hope to have shown that the fair machine learning community represents and advocates for a single conception of fairness, defined as equity in automated decisions. We suspect that the unipolar conception of fairness in the fair machine learning literature is due to the fact that the traditional machine learning approach has the potential to satisfy a meritocratic conception but can never satisfy an equitable conception in a data setting that contains group disparities. Our concern is that by only defining fairness as equity in machine learning, the community may be leading policy-makers and regulators to believe that fairness is absent in automated decisions without the use of the equitable, fair machine learning approach. Note that, throughout this article, we have neither advocated for nor disparaged any particular conception. We hope for an expansion in how fairness is conceived in machine learning, so that the literature can capture the same kind of diversity in opinion that is present in the wider societal discourse.
Second, we hope to have shown that the rejection of the trade-off between statistically accurate outcomes and group similar outcomes as an independent, external constraint has resulted in fallacious reasoning, misleading assertions, and/or questionable practices. Researchers should confront the reality that equitable outcomes require the introduction of inaccuracies. Admitting that there exists a trade-off does not mean that outcomes should not be equitable. We argue, though, that the research community should be more straightforward about what is being sacrificed in the name of equity. Obfuscating the nature of that sacrifice, for instance by redefining the term “bias" in machine learning, could be misleading policy-makers and regulators. As was once wisely said, “There are no solutions. There are only trade-offs" <cit.>.
To those ends and in an effort to foster transparency, we introduced experimental results to aid designers of automated decision-making systems in understanding the relationship between statistically accurate outcomes and group similar outcomes. Future work could use the conceptual and experimental understanding provided throughout this article in a variety of relevant disciplines: (1) data scientists might build a toolkit that would allow researchers and designers of automated decision procedures to incorporate goals and compromises into the machine learning pipeline, where primary and secondary goals are represented by chosen fairness metrics and distributions; (2) legal scholars might realize that, perhaps, affirmative action and positive action are the applicable legal doctrines rather than disparate impact and indirect discrimination, because fair machine learning techniques alter (act on), the decision process itself (i.e., the implementation of fair machine learning metrics is positive discrimination and so it must be determined whether that positive discrimination constitutes lawful or unlawful discrimination in a given jurisdiction and decision context); and (3) data ethicists might offer an alternative proxy for (un)fairness in the machine learning pipeline, other than group similarity (skew as the quotient of between-group and in-group distances), that could allow for a conception of fairness that is not based on equity.
Acknowledgments
The research presented in this paper has received funding from the European Union's funded project LeADS under Grant Agreement no. 956562. We would like to give special thanks to Gabriele Lenzini, Jean-Michel Loubes, and Maciej Zuziak for their advice and feedback throughout the writing process.
|
http://arxiv.org/abs/2306.05113v1
|
20230608112710
|
Zero-sum stopper vs. singular-controller games with constrained control directions
|
[
"Andrea Bovo",
"Tiziano De Angelis",
"Jan Palczewski"
] |
math.OC
|
[
"math.OC",
"math.PR",
"q-fin.MF",
"91A05, 91A15, 60G40, 93E20, 49J40"
] |
We consider a class of zero-sum stopper vs. singular-controller games in which the controller can only act on a subset d_0<d of the d coordinates of a controlled diffusion. Due to the constraint on the control directions these games fall outside the framework of recently studied variational methods. In this paper we develop an approximation procedure, based on L^1-stability estimates for the controlled diffusion process and almost sure convergence of suitable stopping times. That allows us to prove existence of the game's value and to obtain an optimal strategy for the stopper, under continuity and growth conditions on the payoff functions.
This class of games is a natural extension of (single-agent) singular control problems, studied in the literature, with similar constraints on the admissible controls.
Zero-sum stopper vs. singular-controller games with constrained control directions
Andrea Bovo Tiziano De Angelis Jan Palczewski
July 31, 2023
==================================================================================
§ INTRODUCTION
A zero-sum stopper vs. singular-controller game can be formulated as follows. Given a time horizon T∈(0,∞), two players observe a stochastic dynamics X=(X_t)_t∈[0,T] in ^d described by a controlled stochastic differential equation (SDE). One player (the minimiser) may exert controls that impact additively on the dynamics and that may be singular with respect to the Lebesgue measure, as functions of time. The other player (the maximiser) decides when the game ends by selecting a stopping time in [0,T]. At the end of the game, the first player (controller) pays the second one (stopper) a payoff that depends on time, on the sample paths of X and on the amount of control exerted. A natural question is whether the game admits a value, i.e., if the same expected payoff is attained irrespective of the order in which the players choose their (optimal) actions.
In <cit.> we studied zero-sum stopper vs. singular-controller games in diffusive setups with controls that can be exerted in all d coordinates of the process X. The approach is based on a mix of probabilistic and analytic methods for the study of a class of variational inequalities with so-called obstacle and gradient constraints. It is shown that the value of the game is the maximal solution of such a variational inequality. More precisely, it is the maximal strong solution in the sense that it belongs to the Sobolev space of functions that admit two spatial derivatives and one time derivative, locally in L^p (i.e., in W^1,2,p_ℓ oc). The methods rely crucially on the assumption that all coordinates of the process can be controlled. Indeed, that determines a particular form of the gradient constraint that enables delicate PDE estimates for a-priori bounds on the solution. When only d_0<d coordinates are controlled, i.e., there is a constraint on the control directions, the results from <cit.> are not applicable and the existence of a value is an open question.
In this paper, we continue our study of zero-sum stopper vs. singular-controller games by showing that even in the case d_0<d the game admits a value. We also provide an optimal strategy for the stopper and we observe that it is of a slightly different form compared to the one obtained in <cit.> (see Remark <ref> below for details). The line of proof follows an approximation procedure, governed by a parameter γ∈[0,1], by which we relax the constraints on the class of admissible controls. For γ=1 we are in the same setting as in <cit.>, whereas γ=0 corresponds to the constrained case. It turns out that for γ∈(0,1) we have an intermediate situation for which a suitable adaptation of the arguments from <cit.> is possible. The idea is then to obtain the value of the constrained game in the limit as γ↓ 0.
When letting γ↓ 0 we need L^1-stability estimates for the controlled dynamics. These estimates involve local times and a-priori bounds on the candidate optimal controls and they are not standard in the literature. Optimality of the stopper's strategy is derived via an almost sure convergence for a suitable sequence of stopping times, based on path properties of the controlled dynamics and uniform convergence of the approximating value functions as γ↓ 0. We can no longer guarantee the solvability of the associated variational problem in the strong (Sobolev) sense but, of course, our value function satisfies both the appropriate gradient constraint and obstacle constraint. Moreover, we show that the value of our game is the uniform limit of solutions of approximating variational inequalities, paving the way to a notion of solution in the viscosity sense. Finally, we notice that our results hold under continuity and (sub)linear growth conditions on the payoff functions. These are much weaker conditions than those needed in <cit.>, where continuous differentiability in time and space and Hölder continuity of the derivatives is required.
The motivation for considering constrained control directions arises from the literature on (single-agent) irreversible or partially reversible investment problems. In the classical paper <cit.>, Soner and Shreve consider a d-dimensional Brownian motion whose d-th coordinate is singularly controlled. Various works by Zervos et al. (e.g., <cit.>), Guo and Tomecek <cit.>, Federico et al. (e.g., <cit.>), Ferrari (e.g., <cit.>), De Angelis et al. (e.g., <cit.>) consider 2- or 3-dimensional dynamics with only one controlled coordinate. We also notice that in those papers the controlled process X is fully degenerate in the controlled dynamics (i.e., there is no diffusion in the control direction). In all cases but <cit.> and <cit.> that assumption enables an explicit solution of the problem, because the resulting free boundary problems are cast as families of ODEs parametrised by the state variable associated to the control. A non-degenerate example arises instead in mathematical finance in the paper by Bandini et al. <cit.> who deal with a 2-dimensional diffusive dynamics with only one controlled coordinate. It seems therefore natural that game versions of similar problems should be studied in detail and we provide the first results in this direction.
The literature on controller vs. stopper games has been developing in various directions in the case of controls with bounded velocity (see, e.g., Bensoussan and Friedman <cit.>, Karatzas et al. <cit.>, Hamadene <cit.>, Bayraktar and Li <cit.>, among others). A more detailed review of the main results in that direction is provided in the introduction of <cit.>. Instead, the case of singular controls is widely unexplored. Prior to <cit.>, the only other contribution was by Hernandez-Hernandez et al. <cit.> (see also <cit.>), who studied the problem in a one-dimensional setting using free boundary problems in the form of ODEs with appropriate boundary conditions. The present paper contributes to the systematic study of zero-sum stopper vs. singular controller games while complementing and extending the classical framework with controls of bounded velocity.
Our paper is organised as follows. In Section <ref> we set up the problem, we explain the main technical difficulties that prevent the use of methods developed in <cit.> and we state the main result (Theorem <ref>). In Section <ref> we devise an approximation procedure and obtain stability estimates. Those are later used in Section <ref> to prove convergence of the value functions of the approximating problems to the original one. A technical appendix completes the paper.
§ SETTING AND MAIN RESULTS
Let (Ω,ℱ,ℙ) be a complete probability space, 𝔽 = (ℱ_s)_s∈[0,∞) a right-continuous filtration completed by the ℙ-null sets and (W_s)_s∈[0,∞) an 𝔽-adapted, d'-dimensional Brownian motion. Fix T∈(0,∞), the horizon of the game. Let d ≤ d' be the dimension of the controlled diffusion process (X_s)_s∈[0,∞). We decompose d into two sets of coordinates: d = d_0 + d_1 with d_0, d_1 > 0. The first d_0 coordinates in the controlled dynamics are affected directly by singular controls. The remaining d_1 coordinates, instead, are affected indirectly via drift and diffusion coefficients. This is made rigorous in (<ref>) after we introduce the class of admissible controls.
For t∈[0,T], we denote
𝒯_t:={τ|τ is a stopping time such that τ∈[0, T-t]}.
For a vector x ∈^d, |x|_d stands for the Euclidean norm of x and |x|_d_0 for the Euclidean norm of the first d_0 coordinates. We consider the following class of admissible controls
𝒜^d_0_t:={(n,ν)|
(n_s)_s∈[0,∞) is progressively measurable, ^d-valued,
with n_s=(n^1_s,… n_s^d_0,0,0,… 0), ∀ s∈[0,∞)
and |n_s|_d=|n_s|_d_0=1, ℙ-a.s. ∀ s∈[0,∞);
(ν_s)_s∈[0,∞) is 𝔽-adapted, real valued, non-decreasing and
right-continuous with ν_0-=0, ℙ-a.s., and 𝔼[|ν_T-t|^2]<∞
}.
Analogously, we define the class ^d_t with the same properties as the one above but with n_s=(n^1_s,…, n^d_s) such that |n_s|_d=1, -a.s. The class ^d_t is the one used by <cit.>, where the control may act in all d coordinates. Instead, the class ^d_0_t is the one which we use in the present paper, where the control directions are constrained to a subspace of ^d.
Notice that for -a.e. ω, the map s↦ n_s(ω) is Borel-measurable on [0,T] and s↦ν_s(ω) defines a measure on [0,T]; thus the Lebesgue-Stieltjes integral ∫_[0,s]n_u(ω)ν_u(ω) is well-defined for -a.e. ω. The latter will be used below.
Given a control pair (n,ν)∈^d_0_t and an initial condition x∈^d, we consider a d-dimensional controlled stochastic dynamics (X_s^[n,ν])_s∈[0,∞) described by
dX_s^[n,ν]= b(X_s^[n,ν]) ds+ σ(X_s^[n,ν]) dW_s+ n_s dν_s, X^[n,ν]_0-=x,
where b:^d→^d and σ:^d→^d× d' are continuous functions and X^[n,ν]_0- is the state of the dynamics before a possible jump at time zero.
We denote
ℙ_x( · )=ℙ( · |X^[n,ν]_0-=x) and 𝔼_x[ · ]=𝔼[ · |X^[n,ν]_0-=x].
It is important to remark that the control acts only in the first d_0 coordinates of the dynamics of X^[n,ν]. However, the effect of such control is also felt by the remaining d_1 coordinates via the drift and diffusion coefficients. Under Assumption <ref> on b and σ (stated below), there is a unique (strong) -adapted solution of (<ref>) by, e.g., <cit.>.
We study a class of 2-player zero-sum games (ZSGs) between a (singular) controller and a stopper. The stopper picks τ∈_t and the controller chooses a pair (n,ν)∈^d_0_t. At time τ the game ends and the controller pays to the stopper a random payoff depending on τ and on the path of X^[n,ν] up to time τ. We denote the state space of the game by
^d+1_0,T:=[0,T]×^d.
Let continuous functions g,h:^d+1_0,T→ [0,∞), f:[0,T]→(0,∞), and a fixed discount rate r≥ 0 be given. For (t,x)∈^d+1_0,T, τ∈_t and (n, ν) ∈^d_0_t, the game's expected payoff reads
_t,x(n,ν,τ)= _x[e^-rτg(t+τ,X_τ^[n,ν])+∫_0^τ e^-rsh(t+s,X_s^[n,ν]) ds +∫_[0,τ] e^-rsf(t+s) dν_s ].
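For intuition, this expected payoff can be approximated numerically. The sketch below simulates a one-dimensional controlled diffusion with an Euler–Maruyama scheme and estimates the payoff by Monte Carlo; the concrete choices of b, σ, g, h, f, the (absolutely continuous) control and the stopping time τ=T-t are our own illustrative assumptions and are not part of the model specification above:

# Monte Carlo sketch of the expected payoff for a 1-d controlled diffusion
# dX = b(X) ds + sigma(X) dW + n dnu. All model choices below (b, sigma, g, h, f,
# the control and the stopping time) are illustrative assumptions.
import numpy as np

T, r, n_steps, n_paths = 1.0, 0.05, 200, 20_000
dt = T / n_steps
rng = np.random.default_rng(3)

b = lambda x: -0.5 * x                     # drift
sigma = lambda x: 1.0 + 0.0 * x            # volatility
g = lambda t, x: np.maximum(x, 0.0)        # payoff collected when the game stops
h = lambda t, x: 0.1 * np.abs(x)           # running payoff
f = lambda t: 1.0                          # cost of control per unit of effort

x = np.full(n_paths, 1.0)                  # X_{0-} = 1
payoff = np.zeros(n_paths)
nu_increment = 0.02 * dt                   # simple absolutely continuous control, direction n_s = -1

for k in range(n_steps):
    t = k * dt
    disc = np.exp(-r * t)
    payoff += disc * (h(t, x) * dt + f(t) * nu_increment)
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)
    x = x + b(x) * dt + sigma(x) * dW - nu_increment   # control pushes X downwards

payoff += np.exp(-r * T) * g(T, x)         # the stopper waits until tau = T - t here
print("Monte Carlo estimate of the expected payoff:", round(payoff.mean(), 4))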
We define the lower and upper value of the game respectively by
v(t,x)sup_τ∈𝒯_tinf_(n,ν)∈𝒜^d_0_t_t,x(n,ν,τ) and v(t,x)inf_(n,ν)∈𝒜^d_0_tsup_τ∈𝒯_t_t,x(n,ν,τ).
Then v(t,x)≤v(t,x) and if the equality holds we say that the game admits a value:
v(t,x)v(t,x)=v(t,x).
Before assumptions of the paper are formulated, we introduce necessary notation. Given a matrix M∈^d× d', with entries M_ij, i=1,… d, j=1,… d', we define its norm by
|M|_d× d'(∑_i=1^d∑_j=1^d'M_ij^2)^1/2,
and, if d=d', we let tr(M)∑_i=1^d M_ii. For x∈^d we use the notation x=(x_[d_0], x_[d_1]) with x_[d_0]=(x_1,… x_d_0) and x_[d_1]=(x_d_0+1,… x_d). Given a smooth function φ:^d+1_0,T→ we denote its partial derivatives by
∂_t φ, ∂_x_iφ, ∂_x_ix_jφ, for i,j=1,… d. We write ∇φ=(∂_x_1φ,…∂_x_dφ) for the spatial gradient, and D^2 φ = (∂_x_i x_jφ)_i,j=1^d for the spatial Hessian matrix. The first d_0 coordinates of the gradient ∇φ are denoted by ∇^0 φ=(∂_x_1φ,…∂_x_d_0φ) and the remaining d_1 coordinates are denoted by ∇^1φ=(∂_x_d_0+1φ,…∂_x_dφ).
We now give assumptions under which we obtain our main result, Theorem <ref>.
[Controlled SDE]
The functions b and σ are such that:
(i) They are continuously differentiable on ^d with derivatives bounded by D_1>0;
(ii) For i=1,… d and σ^i=(σ_i1,…σ_id'), it holds σ^i(x)=σ^i(x_i);
(iii) For any bounded set B⊂^d there is θ_B>0 such that
⟨ζ,σσ^⊤ (x)ζ⟩≥θ_B|ζ|_d^2 for any ζ∈^d and all x∈B,
where ⟨·, ·⟩ denotes the scalar product in ^d and B the closure of B.
Notice that the Lipschitz continuity of b and σ implies that there exists D_2 such that
|b(x)|_d+|σ(x)|_d× d'≤ D_2(1+|x|_d), for all x∈^d.
[Functions f,g,h]
The functions f:[0,T]→ (0,∞), g,h:^d+1_0,T→ [0,∞) are continuous, and:
(i) The function f is non-increasing;
(ii) There exists constants K_1∈(0,∞) and β∈[0,1) such that
0≤ g(t,x)+h(t,x)≤ K_1(1+|x|^β_d) for all (t,x)∈^d+1_0,T;
(iii) The function g is Lipschitz in the first d_0 spatial coordinates with a constant bounded by f in the sense that for every t ∈ [0, T], |∇^0g(t,x)|_d_0≤ f(t) for a.e. x∈^d.
Our first lemma shows that there is no loss of generality in restricting the class of admissible controls to those with bounded expectation uniformly in x in compact sets. The fact that inf_t∈[0,T]f(t) = f(T) >0 from Assumption <ref> is necessary in the proof.
There is a constant K_2>0 such that for any (t,x)∈^d+1_0,T
v(t,x)=inf_(n,ν)∈_t,x^d_0,optsup_τ∈_t_t,x(n,ν,τ),
v(t,x)=sup_τ∈_tinf_(n,ν)∈_t,x^d_0,opt_t,x(n,ν,τ),
where _t,x^d_0,opt{(n,ν)∈_t^d_0 | _x[ν_T-t]≤ K_2(1+|x|_d)}.
Let (e_1,0)∈^d_0_t be the null control, where e_1=(1,0,… 0)∈^d, and denote X=X^[e_1,0]. We have
v(t,x)≤ sup_τ∈_t^_t,x(e_1,0,τ)
=sup_τ∈_t_x[e^-rτg(t+τ,X_τ)+∫_0^τ e^-rsh(t+s,X_s) s]
≤ K_1(1+T)(1+_x[sup_s∈[0,T]e^-rs|X_s|_d])≤ C_1(1+|x|_d),
where the second inequality uses the sublinear growth of g and h, the third inequality is by standard estimates for SDEs with linearly growing coefficients (<cit.>). The constant C_1>0 depends only on T, D_2 and K_1 from (<ref>) and Assumption <ref>(ii), respectively. Since 0 ≤v, there is no loss of generality in restricting admissible controls in v to the class
^d_0,sub_t,x{(n,ν)∈^d_0_t| sup_τ∈_t_t,x(n,ν,τ)≤ C_1(1+|x|_d)}.
A similar argument applies for the lower value v: as in (<ref>), for any fixed (t, x) and τ,
inf_(n,ν)∈_t,x^d_0_t,x(n,ν,τ)
= inf_(n,ν)∈_t,x^d_0,sub_t,x(n,ν,τ),
so one can also restrict controls to _t,x^d_0,sub in the definition of v.
It remains to show that ^d_0, sub_t,x⊆^d_0, opt_t,x. To this end, recall that f>0 and it is non-increasing in time. For (n,ν)∈^d_0,sub_t,x we have
_x[|ν_T-t|] ≤_x[(min_s∈[0,T-t]f(t+s))^-1∫_[0,T-t]f(t+s) ν_s]
= 1/f(T)_x[∫_[0,T-t]f(t+s) ν_s]
≤e^r(T-t)/f(T)_x[e^-r(T-t)g(T,X_T^[n,ν])+∫_0^T-te^-rsh(t+s,X_s^[n,ν]) s+∫_[0,T-t]e^-rsf(t+s)ν_s]
≤e^rT/f(T)_t,x(n,ν,T-t),
where the equality uses that f non-increasing in time and the second and third inequalities follow from f,g,h≥ 0. Using (<ref>), we have
_x[|ν_T-t|]≤e^rT/f(T)sup_τ∈_t_t,x(n,ν,τ) ≤e^rTC_1/f(T)(1+|x|_d) K_2(1+|x|_d).
This concludes the proof because ^d_0, sub_t,x⊆^d_0, opt_t,x⊆^d_0_t,x and in the first part of the proof we have shown that ^d_0_t,x can be replaced by ^d_0, sub_t,x in the definitions of v and v.
In order to avoid heavy notation we will identify ^d_0,opt_t,x=^d_0_t,x and recall this fact whenever necessary.
The next theorem is the main result of the paper. Its proof builds on an approximation procedure that allows us to invoke PDE results from <cit.>. By passing to the limit in the approximation scheme we recover the value function of our game. Details of the scheme and the convergence are presented in the next sections of the paper.
Under Assumptions <ref> and <ref>, the game described above admits a value v (i.e., (<ref>) holds) with the following properties:
(i) v is continuous on ^d+1_0,T;
(ii) |v(t,x)|≤ c(1+|x|_d^β) for some c>0 and for β∈(0,1) from Assumption <ref>(ii);
(iii) v is Lipschitz continuous in the first d_0 spatial variables with constant bounded by f in the sense that |∇^0 v(t,x)|_d_0≤ f(t) for a.e. (t,x)∈^d+1_0,T.
Moreover, for any given (t,x)∈^d+1_0,T and any admissible control (n,ν)∈^d_0_t, the stopping time θ_*∈_t
is optimal for the stopper, where θ_*=τ_*∧σ_* and _x-a.s.
τ_*:=inf{s≥ 0 | v(t+s,X_s^[n,ν])=g(t+s,X_s^[n,ν])},
σ_*:=inf{s≥ 0 | v(t+s,X_s-^[n,ν])=g(t+s,X_s-^[n,ν])}.
Notice that, since v(T, x) = g(T,x), the set {s≥0 | v(t+s,X_s^[n,ν])=g(t+s,X_s^[n,ν])} always contains T-t. So the stopping time θ_* is bounded from above by T-t.
The stopper's strategy θ_* is of a closed-loop type, i.e., the stopping time θ_* depends on the dynamics of the underlying process X^[n, ν]. Optimality of θ_*, asserted above, should be understood in the sense that for any admissible control (n,ν)∈^d_0_t, we have
v(t,x) ≤_t,x(n,ν, θ_*), (t,x) ∈^d+1_0,T.
The results in the theorem above continue to hold in the non-constrained case d_0=d. That proves existence of a value under less stringent regularity conditions on g,h than in <cit.> and when f is independent of the spatial coordinate. Notice that for d=d_0 the approximation via functions (u^γ)_γ>0 described in Section <ref> is not needed. The rest of the analysis follows the same steps as in Section <ref> taking γ=1 and ignoring the arguments about the limit as γ→ 0 (i.e., skipping Section <ref>).
The stopping time τ_* is shown to be optimal for the game studied in <cit.>. Theorem <ref> asserts the optimality of θ_* which is the minimum of τ_* and another stopping time σ_*. This construction comes at no disadvantage as θ_* is also optimal in the setting of <cit.> (see Lemma <ref>). It however enables us to prove convergence of optimal stopping times in the form of θ_* for games with value functions converging uniformly on compacts (see Lemma <ref> and the proof of Theorem <ref>). Note that one cannot expect such convergence to hold for τ_*.
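On a discretised path the two stopping times defined above can be read off directly: τ_* monitors the current state and σ_* the left limit, i.e., the state just before a possible jump of the control. The sketch below uses toy stand-ins for v, g and the controlled path (all assumed for illustration only) and is merely meant to make the definitions of τ_*, σ_* and θ_*=τ_*∧σ_* concrete:

# Sketch: computing tau_*, sigma_* and theta_* = tau_* ∧ sigma_* on a discretised
# path, given callables for v and g (toy stand-ins) and the pre-/post-jump states
# of the controlled process at each grid time.
import numpy as np

T, n_steps = 1.0, 100
times = np.linspace(0.0, T, n_steps + 1)

g = lambda t, x: np.maximum(x, 0.0)                        # toy obstacle
v = lambda t, x: g(t, x) + (T - t) * np.maximum(x, 0.0)    # toy "value": v = g iff x <= 0 or t = T

rng = np.random.default_rng(4)
base = 0.2 + np.cumsum(rng.normal(0.0, 0.02, n_steps + 1))
x_post = base.copy()
x_post[40:] -= 0.3      # a downward control jump at grid index 40 (right-continuous path)
x_pre = base.copy()
x_pre[41:] -= 0.3       # left limits feel the jump only strictly after it

def first_hit(states):
    gap = v(times, states) - g(times, states)
    hit = np.where(gap <= 1e-12)[0]
    return times[hit[0]]                 # T - t is always in the set since v(T,.) = g(T,.)

tau_star, sigma_star = first_hit(x_post), first_hit(x_pre)
theta_star = min(tau_star, sigma_star)
print("tau_* =", tau_star, " sigma_* =", sigma_star, " theta_* =", theta_star)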
§.§ Challenges in the constrained setup
The theory developed in <cit.> does not cover the game we are considering here for two essential reasons. The first one is that the functions f,g,h are only assumed to be continuous, whereas <cit.> requires continuous differentiability once in time and twice in space (and Hölder continuity of all derivatives). The second one, and more important, is that the constraints on the directions of the admissible control imply that estimates obtained in <cit.> via analytical arguments can no longer be obtained. In the next paragraphs we briefly elaborate on this fine technical issue.
The variational problem in <cit.> features a gradient constraint on the value function v of the form |∇ v|_d≤ f. In the penalisation procedure adopted in <cit.> we therefore consider a semi-linear PDE with a non-linear term of the form ^d∋ p↦ψ_ε(|p|_d^2-f^2) (see Eq. (5.14) in <cit.>), where ε>0 is a parameter that must tend to zero in the limit of the penalisation step. In our current setup, given that the control only acts in the first d_0 coordinates, the gradient constraint must be of the form |∇^0 v|_d_0≤ f. That translates into a non-linear term of the form ^d∋ p↦ψ_ε(|p_[d_0]|_d_0^2-f^2) in the associated penalised problem.
One of the key estimates in <cit.> is obtained in <cit.> and it concerns a bound on the gradient of the solution of the penalised problem. The method of proof adopted in <cit.> is also used in other places, e.g., in <cit.>. We now show where those arguments fail.
Arguing as in the proof of <cit.>, we arrive at an equation which is the analogue of <cit.> and it reads:
-2⟨∇ w^n,∇ (|∇^0 u^n|^2_d_0- f_m^2)⟩≤ -2λ |∇^0 u^ε,δ|_d_0^2+R̃_n.
Above, it is enough to understand that f_m is an approximation of f, while w^n and u^n both approximate the solution u^ε,δ of the penalised problem.
The term R̃_n is a remainder which can be made arbitrarily small and it plays no substantial role in this discussion. Continuing with the argument that follows <cit.> we arrive at
λ|∇^0 u^ε,δ|^2_d_0≤α_1 |∇ u^ε,δ|^2_d+α_2,
where α_1,α_2>0 are given constants and λ>0 can be chosen arbitrarily. From this estimate we cannot conclude that |∇ u^ε,δ| is bounded. Instead of λ|∇^0 u^ε,δ|^2_d_0, in <cit.> we have λ|∇ u^ε,δ|^2_d, which leads to λ|∇ u^ε,δ|^2_d≤α_1 |∇ u^ε,δ|^2_d+α_2 and it allows to conclude |∇ u^ε,δ|^2_d≤ c for a fixed constant c>0, by arbitrariness of λ.
Other difficulties of a similar nature appear in, e.g., adapting the arguments of <cit.>, where in Eq. (5.34) we would not be able to obtain a bound on |D^2w^n|_d× d^2, because we cannot control the derivatives ∂_x_ix_jw^n for i,j=d_0+1,… d. We avoid going into further detail and refer the interested reader to the original paper for a careful comparison.
§.§ Notation
Before passing to the proof of Theorem <ref>, we introduce the remaining notation used in the paper.
For vectors u,v∈^d their scalar product is denoted by ⟨ u,v ⟩. The d-dimensional open ball centred in 0 with radius m is denoted by B_m. For an arbitrary subset D⊆^d+1_0,T we let C^∞_c, sp(D) be the space of functions on D with compact support in the spatial coordinates (not in time) and infinitely many continuous derivatives. For an open bounded set ⊂^d+1_0,T we let C^0() be the space of continuous functions φ:→ equipped with the supremum norm
φ_C^0()sup_(t,x)∈|φ(t,x)|.
Analogously, C^0(^d+1_0,T) is the space of bounded and continuous functions φ:^d+1_0,T→ equipped with the norm φ_∞φ_C^0(^d+1_0,T) as in (<ref>) but with replaced by ^d+1_0,T.
We denote by C^0,1,α() the space of α-Hölder continuous functions on with α-Hölder continuous spatial gradient, equipped with the supremum norm and the α-Hölder semi-norm. The semi-norm is evaluated with respect to the parabolic distance; for details see <cit.> (see also the notation section in <cit.>). The space of functions with bounded C^0,1,α-norm in any compact subset of ^d+1_0,T is denoted by C^0,1,α_ℓ oc(^d+1_0,T). For p∈[1,∞], we recall the definition of the usual Sobolev space (see <cit.>):
W^1,2,p_ℓ oc(^d+1_0,T) { f∈ L^p_ℓ oc(^d+1_0,T) | f∈ W^1,2,p(), ∀⊆^d+1_0,T, bounded}.
The infinitesimal generator of the uncontrolled process X^[e_1,0] (where e_1=(1,0,…,0)∈^d) is denoted by ℒ and it reads
(ℒφ)(x)=1/2tr(a(x)D^2φ(x))+⟨ b(x),∇φ(x)⟩, for φ∈ C^∞(^d),
with a(x):=(σσ^⊤)(x).
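For a concrete feel for the generator, the following sketch evaluates (ℒφ)(x) by central finite differences; the particular b, σ and φ are toy choices of ours and carry no special meaning:

# Sketch: numerical evaluation of (L phi)(x) = 0.5 tr(a D^2 phi) + <b, grad phi>,
# with a = sigma sigma^T, using central finite differences. b, sigma, phi are toy choices.
import numpy as np

d, eps = 2, 1e-4
b = lambda x: np.array([-x[0], 0.5 * x[1]])
sigma = lambda x: np.array([[1.0, 0.0], [0.3, 0.8]])
phi = lambda x: np.sin(x[0]) * np.exp(-x[1])

def generator(phi, x):
    x = np.asarray(x, dtype=float)
    grad = np.zeros(d)
    hess = np.zeros((d, d))
    for i in range(d):
        ei = np.eye(d)[i] * eps
        grad[i] = (phi(x + ei) - phi(x - ei)) / (2 * eps)
        for j in range(d):
            ej = np.eye(d)[j] * eps
            hess[i, j] = (phi(x + ei + ej) - phi(x + ei - ej)
                          - phi(x - ei + ej) + phi(x - ei - ej)) / (4 * eps ** 2)
    a = sigma(x) @ sigma(x).T
    return 0.5 * np.trace(a @ hess) + b(x) @ grad

print("(L phi)(0.3, 1.0) ≈", generator(phi, [0.3, 1.0]))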
§ FORMULATION OF THE APPROXIMATING PROBLEMS AND STABILITY ESTIMATES
In this section we assume stronger conditions than in Assumption <ref> for the sake of simplicity of exposition. These will be relaxed in Section <ref>. In particular, throughout this section we enforce
The functions f:[0,T]→(0,∞), g,h:^d+1_0,T→[0,∞) are such that:
(i) g ∈ C^∞_c, sp(^d+1_0,T) and h∈ C^∞_c, sp(^d+1_0,T);
(ii) f∈ C^∞([0,T]), non-increasing and strictly positive;
(iii) the condition
|∇^0 g(t,x)|_d_0≤ f(t),
holds for all (t,x)∈^d+1_0,T.
We notice that the assumptions of infinite continuous differentiability and compact support imply the existence of a constant K∈(0,∞) such that:
(iv) f, g and h are bounded and, for all 0≤ s< t≤ T and all x,y∈^d,
|g(t,x)-g(s,y)|+|h(t,x)-h(s,y)|≤ K(|x-y|_d+(t-s));
(v) For all (t,x)∈^d+1_0,T
(h+∂_tg+ℒg-rg)(t,x)≥ -K.
Therefore, our Assumption <ref> implies Assumption 3.2 in <cit.>. Moreover, our Assumption <ref> implies Assumption 3.1 in <cit.>. We cannot yet apply <cit.> because of the degeneracy of the control discussed above. In the next subsection we devise an approximation procedure that takes care of this issue.
§.§ Approximation procedure
Fix γ∈(0,1). Given (n,ν)∈^d_t, we consider the controlled stochastic differential equation (SDE)
dX_s^[n,ν],γ= b(X_s^[n,ν],γ) ds+ σ(X_s^[n,ν],γ) dW_s+ n^γ_s dν_s,
where n^γ_s:=(n^1_s,…,n^d_0_s,γ n^d_0+1_s,…,γ n^d_s) (i.e., the parameter γ acts as a weight on the last d_1 coordinates of n_s).
Given vectors p,q∈^d, recalling the notation p=(p_[d_0],p_[d_1])∈^d_0×^d_1 and the scalar product ⟨ p,q⟩ in ^d, we introduce the bilinear form ⟨·,·⟩_γ:^d×^d→ defined as
⟨ p,q⟩_γ⟨ p_[d_0],q_[d_0]⟩+γ⟨ p_[d_1],q_[d_1]⟩.
Notice that we are slightly abusing the notation because ⟨ p_[d_0],q_[d_0]⟩ and ⟨ p_[d_1],q_[d_1]⟩ are scalar products in ^d_0 and ^d_1, respectively.
Associated with ⟨·,·⟩_γ we have the norm
|p|_γ√(⟨ p,p⟩_γ) on ^d.
It is worth noticing that
∇ |p|^2_γ=2 (p_1,…, p_d_0,γ p_d_0+1,…, γ p_d)
and, for j=1,…, d, we clearly have (D^2|p|^2_γ)_ij=2δ_ij for i=1,… d_0 and (D^2|p|^2_γ)_ij=2γδ_ij for i=d_0+1,… d, where δ_ij is the Kronecker delta.
We introduce an approximation of f as
f^γ(t):=√(f^2(t)+γ K^2) for t∈[0,T],
where K is the same as in (<ref>). By construction f^γ→ f uniformly on [0,T] as γ→0 and we also notice that (<ref>), (<ref>) and (<ref>) imply
|∇ g(t,x)|_γ^2= |∇^0 g(t,x)|_d_0^2+γ |∇^1 g(t,x)|_d_1^2≤ f^2(t)+γ K^2=(f^γ(t))^2,
for all (t,x)∈^d+1_0,T.
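The mechanics of this approximation are easy to check numerically. The sketch below implements the γ-weighted norm and f^γ, and verifies |∇ g|_γ≤ f^γ on random points for a toy g satisfying |∇^0 g|≤ f and |∇^1 g|≤ K; the concrete g, f, K and γ are our own illustrative choices:

# Sketch: the gamma-weighted norm |p|_gamma and f^gamma(t) = sqrt(f(t)^2 + gamma K^2),
# checked on a toy example with d0 = d1 = 1. All concrete functions are illustrative.
import numpy as np

d0, K, gamma = 1, 2.0, 0.1
f = lambda t: 1.0 + 0.5 * (1.0 - t)          # positive, non-increasing on [0, 1]
f_gamma = lambda t: np.sqrt(f(t) ** 2 + gamma * K ** 2)

def norm_gamma(p):
    p = np.asarray(p, dtype=float)
    return np.sqrt((p[:d0] ** 2).sum() + gamma * (p[d0:] ** 2).sum())

# Toy g(t, x) = f(t) sin(x1) + K cos(x2), so |d_x1 g| <= f(t) and |d_x2 g| <= K.
grad_g = lambda t, x: np.array([f(t) * np.cos(x[0]), -K * np.sin(x[1])])

rng = np.random.default_rng(5)
ok = all(
    norm_gamma(grad_g(t, x)) <= f_gamma(t) + 1e-12
    for t, x in zip(rng.uniform(0, 1, 1000), rng.uniform(-3, 3, (1000, 2)))
)
print("|grad g|_gamma <= f^gamma on all sampled points:", ok)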
We consider a new payoff _t,x^γ similar to the payoff defined in (<ref>), with X_t^[n,ν] and f therein replaced by X_t^[n,ν],γ and f^γ, respectively, i.e.,
_t,x^γ(n,ν,τ)=_x[e^-rτg(t+τ,X_τ^[n,ν],γ)+∫_0^τ e^-rsh(t+s,X_s^[n,ν],γ) ds+∫_[0,τ] e^-rsf^γ(t+s) dν_s ].
We say that the game with expected payoff (<ref>) admits a value if
u^γ(t,x)=sup_τ∈_tinf_(n,ν)∈^d_t_t,x^γ(n,ν,τ)= inf_(n,ν)∈^d_tsup_τ∈_t_t,x^γ(n,ν,τ).
The variational inequality that identifies the value of the game is the following:
min{max{∂_t u+ℒu-ru+h,g-u},f^γ-|∇ u|_γ}=0, a.e. in ^d+1_0,T,
max{min{∂_t u+ℒu-ru+h,f^γ-|∇ u|_γ},g-u}=0, a.e. in ^d+1_0,T,
with terminal condition u(T,x)=g(T,x) and growth condition |u(t,x)|≤ c(1+|x|_d), for a suitable c>0. A simple adaptation of the results from <cit.> leads to the next theorem. Details of the changes to the original proof of <cit.> are given in Appendix for completeness.
The game described above admits a value (i.e., (<ref>) holds) and the value function u^γ is the maximal solution of (<ref>) in the class W^1,2,p_ℓ oc(^d+1_0,T) for all p∈[1,∞). Moreover, for any given (t,x)∈^d+1_0,T and any admissible control (n,ν)∈^d_t, the stopping time defined _x-a.s. as
τ_*^γinf{s≥0 | u^γ(t+s,X_s^[n,ν],γ)=g(t+s,X_s^[n,ν],γ)}
is optimal for the stopper.
Thanks to the boundedness and positivity of f,g,h, the value function of the game u^γ is bounded and non-negative. The upper bound is obtained by taking the sub-optimal control (n,ν)≡(e_1,0) with e_1=(1,0,…, 0). In turn, by the maximality of u^γ across the solutions of (<ref>), we have that any solution of (<ref>) in W^1,2,p_ℓ oc(^d+1_0,T) is bounded.
The next lemma is an analogue of Lemma <ref>. Its proof is identical, up to substituting f with f^γ and X^[n,ν] with X^[n,ν],γ, and it is therefore omitted. Thanks to the inequality f^γ≥ f, the constant K_2 may be taken the same as in ^d_0, opt_t,x.
There is a constant K_2>0 (independent of γ) such that, for any (t,x)∈^d+1_0,T,
u^γ(t,x)=inf_(n,ν)∈_t,x^d,optsup_τ∈_t^γ_t,x(n,ν,τ),
where _t,x^d,opt{(n,ν)∈_t^d | _x[ν_T-t]≤ K_2(1+|x|_d)}.
The family of stopping times (τ^γ_*)_γ>0 is optimal for the stopper in the corresponding family of games with values (u^γ)_γ>0. However, it turns out that studying the convergence of τ^γ_* for γ↓ 0 is not an easy task. For that reason we introduce another family of stopping times and we prove some of its useful properties.
For γ>0 and (n,ν)∈^d_t, let
σ_*^γinf{s≥ 0| u^γ(t+s,X_s-^[n,ν],γ)-g(t+s,X_s-^[n,ν],γ)=0},
and define
θ^γ_*:=τ^γ_*∧σ^γ_*.
Notice that given (t,x)∈^d+1_0,T and (n,ν)∈^d_t the stopping time depends on both (t,x) and (n,ν) via the controlled dynamics X^[n,ν],γ (Remark <ref>). Therefore we sometimes use the notation θ^γ_* = θ^γ_* (t,x; n, ν).
The next lemma shows that θ^γ_* is optimal for the stopper in the game with value u^γ.
Fix (t,x)∈^d+1_0,T. For any (n,ν)∈^d_t, we have
u^γ(t,x)≤^γ_t,x(n,ν,θ^γ_*).
Furthermore,
u^γ(t,x)=inf_(n,ν)∈^d_t^γ_t,x(n,ν,θ^γ_*),
hence θ^γ_* is optimal for the stopper in the game with value u^γ.
With no loss of generality we assume that the set
_γ={(t,x)∈^d+1_0,T:u^γ(t,x)> g(t,x)}
is not empty. If it were empty then θ^γ_*=0 and the lemma would trivially hold.
Next we adapt an argument from the verification result for singular control, <cit.>, to overcome the lack of smoothness of u^γ.
Let (ζ_k)_k∈ be a standard family of mollifiers and consider the sequence (w_k^γ)_k∈⊂ C^∞(^d+1_0,T) where w_k^γ u^γ*ζ_k. Since u^γ∈ W^1,2,p_ℓ oc(^d+1_0,T)↪ C^0,1,α_ℓ oc(^d+1_0,T) for p > d+2 and some α∈ (0,1), we have w_k^γ→ u^γ and ∇ w_k^γ→∇ u^γ uniformly on compact sets, as k→∞; moreover, ∂_t w_k^γ→∂_t u^γ and D^2 w_k^γ→ D^2 u^γ strongly in L^p_ℓ oc(^d+1_0,T) for all p∈[1,∞), as k→∞ (see, e.g., arguments in Thm. 5.3.1 and Appendix C.4 in <cit.>). For notational simplicity, denote the operator (∂_t+-r) by .
Standard calculations based on integration by parts yield
∂_t w_k^γ= (∂_t u^γ)*ζ_k, ∂_x_j w_k^γ= (∂_x_j u^γ)*ζ_k, ∂_x_ix_j w_k^γ =(∂_x_ix_j u^γ)*ζ_k,
and therefore
| (u^γ*ζ_k)(t,x)-(w_k^γ)(t,x)|
=|∫_^d+1_0,T(∑_i,j=1^d(a_ij(y)-a_ij(x))∂_x_ix_ju^γ(s,y)+∑_i=1^d(b_i(y)-b_i(x))∂_x_iu^γ(s,y))ζ_k(t-s,x-y) s y|.
Since first and second order derivatives of u^γ belong to L^p_ℓ oc(^d+1_0,T) for any p∈[1,∞), then Hölder's inequality and continuity of a and b yield for any compact Σ⊂^d+1_0,T
lim_k→∞sup_(t,x)∈Σ|(u^γ*ζ_k)(t,x)-(w_k^γ)(t,x)|=:lim_k→∞Q_k^Σ =0.
Since u^γ is solution of (<ref>), we have ( u^γ+h)(t,x)≥ 0 for almost every (t,x)∈_γ and therefore
χ^γ_k(t,x):=(( u^γ+h)*ζ_k)(t,x)≥ 0,
for all (t,x)∈_γ. Finally, denoting h_k = h * ζ_k, we have
lim_k→∞sup_(t,x)∈Σ|h_k(t,x)-h(t,x)|=:lim_k→∞M^Σ_k=0.
From (<ref>), (<ref>) and (<ref>) we have
lim inf_k→∞inf_(t,x)∈Σ∩ _γ((w_k^γ)(t,x)+h(t,x))
≥lim inf_k→∞(inf_(t,x)∈Σ∩ _γχ^γ_k(t,x)-Q^Σ_k-M^Σ_k) ≥ 0.
Let ρ_m=inf{s≥ 0 | X^[n,ν],γ_s∉ B_m}∧(T-t). By an application of Dynkin's formula we obtain
w_k^γ(t,x)= _x[ e^-r(θ^γ_*∧ρ_m)w_k^γ(t+θ_*^γ∧ρ_m,X_θ_*^γ∧ρ_m^[n,ν],γ)-∫_0^θ_*^γ∧ρ_me^-rsw_k^γ(t+s,X_s^[n,ν],γ) s
-∫_0^θ_*^γ∧ρ_me^-rs⟨∇ w_k^γ(t+s,X_s-^[n,ν],γ), n_s⟩_γ ν_s^c
-∑_s≤θ_*^γ∧ρ_me^-rs∫_0^Δν_s⟨∇ w_k^γ(t+s,X_s-^[n,ν],γ+λ n_s),n_s⟩_γ λ]
= _x[ e^-r(θ^γ_*∧ρ_m)w_k^γ(t+θ_*^γ∧ρ_m,X_θ_*^γ∧ρ_m-^[n,ν],γ)-∫_0^θ_*^γ∧ρ_me^-rsw_k^γ(t+s,X_s^[n,ν],γ) s
-∫_0^θ_*^γ∧ρ_me^-rs⟨∇ w_k^γ(t+s,X_s-^[n,ν],γ), n_s⟩_γ ν_s^c
- ∑_s<θ_*^γ∧ρ_me^-rs∫_0^Δν_s⟨∇ w_k^γ(t+s,X_s-^[n,ν],γ+λ n_s),n_s⟩_γ λ],
where the second equality, which removes the contribution to w_k^γ of the final jump of X^[n,ν],γ at θ_*^γ∧ρ_m, follows from
w_k^γ(t+θ_*^γ∧ρ_m,X_θ_*^γ∧ρ_m^[n,ν],γ)
=
w_k^γ(t+θ_*^γ∧ρ_m,X_θ_*^γ∧ρ_m-^[n,ν],γ)
+ ∫_0^Δν_θ_*^γ∧ρ_m⟨∇ w_k^γ(t+s,X_θ_*^γ∧ρ_m-^[n,ν],γ+λ n_θ_*^γ∧ρ_m),n_θ_*^γ∧ρ_m⟩_γ λ.
We expand w_k^γ(t+s,X_s^[n,ν],γ) as (w_k^γ + h)(t+s,X_s^[n,ν],γ) - h(t+s,X_s^[n,ν],γ) and let k→∞. We apply the inequality (<ref>)) to the term (w_k^γ + h) and the dominated convergence theorem for the remaining terms, justified by the uniform convergence of (w_k^γ,∇ w_k^γ) to (u^γ,∇ u^γ) on compacts:
u^γ(t,x)≤_x[ e^-r(θ^γ_*∧ρ_m)u^γ(t+θ_*^γ∧ρ_m,X_θ_*^γ∧ρ_m-^[n,ν],γ)+∫_0^θ_*^γ∧ρ_me^-rsh(t+s,X_s^[n,ν],γ) s
-∫_0^θ_*^γ∧ρ_me^-rs⟨∇ u^γ(t+s,X_s-^[n,ν],γ), n_s⟩_γ ν_s^c
- ∑_s<θ_*^γ∧ρ_me^-rs∫_0^Δν_s⟨∇ u^γ(t+s,X_s-^[n,ν],γ+λ n_s),n_s⟩_γ λ].
Notice that _x(ρ_m<θ^γ_*)↓ 0 as m→∞. Then, in the limit as m→∞ the dominated convergence theorem yields (recall u^γ and h are bounded, |∇ u^γ|_γ≤ f^γ and _x [ν_T-t] < ∞)
u^γ(t,x)≤_x[ e^-rθ^γ_*u^γ(t+θ_*^γ,X_θ_*^γ-^[n,ν],γ)+∫_0^θ_*^γe^-rsh(t+s,X_s^[n,ν],γ) s
-∫_0^θ_*^γe^-rs⟨∇ u^γ(t+s,X_s-^[n,ν],γ), n_s⟩_γ ν_s^c
-∑_s<θ_*^γe^-rs∫_0^Δν_s⟨∇ u^γ(t+s,X_s-^[n,ν],γ+λ n_s),n_s⟩_γ λ].
On the event {τ^γ_*≤σ^γ_*} we have
u^γ(t+θ_*^γ,X_θ_*^γ-^[n,ν],γ)
=u^γ(t+τ_*^γ,X_τ_*^γ-^[n,ν],γ)
=u^γ(t+τ_*^γ,X_τ_*^γ^[n,ν],γ) - ∫_0^Δν_τ_*^γ⟨∇ u^γ(t+τ_*^γ,X_τ_*^γ-^[n,ν],γ+λ n_τ_*^γ),n_τ_*^γ⟩_γ λ
=g(t+τ_*^γ,X_τ_*^γ^[n,ν],γ) - ∫_0^Δν_τ_*^γ⟨∇ u^γ(t+τ_*^γ,X_τ_*^γ-^[n,ν],γ+λ n_τ_*^γ),n_τ_*^γ⟩_γ λ
≤ g(t+τ_*^γ,X_τ_*^γ^[n,ν],γ) + f^γ(t + τ_*^γ) Δν_τ_*^γ,
where the third equality is by the definition of τ^γ_*, the continuity of u^γ and g, and the right-continuity of t↦ X^[n,ν],γ_t; the inequality follows from |∇ u^γ|_γ≤ f^γ. We insert the estimate (<ref>) into the expression under the expectation in (<ref>) and apply the bound |∇ u^γ|_γ≤ f^γ again to obtain
e^-rθ^γ_*u^γ(t+θ_*^γ,X_θ_*^γ-^[n,ν],γ) +∫_0^θ_*^γe^-rsh(t+s,X_s^[n,ν],γ) s
-∫_0^θ_*^γe^-rs⟨∇ u^γ(t+s,X_s-^[n,ν],γ), n_s⟩_γ ν_s^c
- ∑_s<θ_*^γe^-rs∫_0^Δν_s⟨∇ u^γ(t+s,X_s-^[n,ν],γ+λ n_s),n_s⟩_γ λ
≤ e^-rθ^γ_*g(t+θ_*^γ,X_θ^γ_*^[n,ν],γ) + ∫_0^θ_*^γe^-rsh(t+s,X_s^[n,ν],γ) s + ∫_[0,θ^γ_*]e^-rsf^γ(t+s)ν_s.
On the event {σ^γ_*<τ^γ_*}, the arguments are more involved. We start from showing that
u^γ(t+σ_*^γ,X_σ_*^γ-^[n,ν],γ)=g(t+σ_*^γ,X_σ_*^γ-^[n,ν],γ).
Since σ^γ_* <τ^γ_* ≤ T-t, we have (u^γ-g)(t+σ_*^γ,X_σ_*^γ^[n,ν],γ) > 0. The process
s ↦ (u^γ - g)(t+s, X_s^[n,ν],γ)
is right-continuous due to continuity of u^γ and g and right-continuity of t↦ X_t^[n,ν],γ. Using this fact we deduce that for _x-almost every ω there is ε(ω), δ(ω) > 0 such that
(u^γ - g)(t+s, X_s^[n,ν],γ) > ε(ω), ∀ s ∈ [σ^γ_*, σ^γ_* + δ(ω)].
This means that (σ^γ_*, σ^γ_* + δ(ω)] ∩{ s ≥ 0 | (u^γ - g)(t+s, X_s-^[n,ν],γ) = 0 } = ∅. Hence, by the definition of σ_*^γ, we conclude that (u^γ - g)(t+σ_*^γ, X_σ_*^γ-^[n,ν],γ) = 0.
We now rewrite
u^γ(t+θ_*^γ,X_θ_*^γ-^[n,ν],γ) = u^γ(t+σ_*^γ,X_σ_*^γ-^[n,ν],γ)
= g(t+σ_*^γ,X_σ_*^γ-^[n,ν],γ)
= g(t+σ_*^γ,X_σ_*^γ^[n,ν],γ)
- ∫_0^Δν_σ_*^γ⟨∇ g(t+σ_*^γ,X_σ_*^γ-^[n,ν],γ+λ n_σ_*^γ),n_σ_*^γ⟩_γ λ
≤ g(t+σ_*^γ,X_σ_*^γ^[n,ν],γ) + f^γ(t + σ_*^γ) Δν_σ_*^γ,
where in the first line we use the identity u^γ(t+σ_*^γ,X_σ_*^γ-^[n,ν],γ)=g(t+σ_*^γ,X_σ_*^γ-^[n,ν],γ) proved above and in the last line the bound |∇ g|_γ≤ f^γ. We insert the estimate (<ref>) into the expression under the expectation on the right-hand side of (<ref>) and apply the bound |∇ u^γ|_γ≤ f^γ to obtain (<ref>).
Now, substituting (<ref>) inside the expectation on the right-hand side of (<ref>) yields
u^γ(t,x) ≤_x[e^-rθ_*^γg(t+θ_*^γ,X_θ_*^γ^[n,ν],γ)+∫_0^θ_*^γe^-rsh(t+s,X_s^[n,ν],γ) s+∫_[0,θ_*^γ]e^-rs f^γ(t+s) ν_s ]
=_t,x^γ(n,ν,θ^γ_*),
which proves (<ref>).
By arbitrariness of the pair (n,ν)∈^d_t we conclude
u^γ(t,x)≤inf_(n,ν)∈^d_t_t,x^γ(n,ν,θ^γ_*)≤ u^γ(t,x),
hence proving the second statement of the lemma.
When the sets {u^γ=g} and {|∇ u^γ|_γ=f} are disjoint, heuristic arguments based on classical verification theorems suggest that the controller and the stopper do not act simultaneously. In particular, this means that with no loss of generality we should be able to restrict the class of admissible pairs (n,ν) to those for which Δν_θ^γ_*=0 so that τ^γ_*(n,ν)=σ^γ_*(n,ν). This type of analysis is left for future work on more concrete examples.
§.§ Some stability estimates
We next provide a stability estimate in L^1 for the approximating process. The proof uses a generalisation of <cit.> which is given as Lemma <ref> in Appendix for completeness.
Fix (t,x)∈^d+1_0,T and a treble [(n,ν),τ]∈^d_t×_t. Then, there exists (n̅,ν̅)∈𝒜^d_0_t such that
_x[|X_τ^[n,ν],γ-X_τ^[n̅,ν̅]|_d]≤γ K_3_x[ν_T-t],
where K_3>0 is a constant depending only on d, D_1 and T.
For each pair (n,ν)∈^d_t, setting n_s=n(s)=(n_[d_0](s),n_[d_1](s))∈^d_0×^d_1, we can define a pair (n̅,ν̅)∈^d_0_t as follows: for i=1,… d_0, we set
n̅^i_s=n^i_s/|n_[d_0](s)|_d_0, if |n_[d_0](s)|_d_0≠0,
n̅_[d_0](s)=(1,0,…, 0), if |n_[d_0](s)|_d_0=0;
ν̅_s=∫_0^s|n_[d_0](r)|_d_0 dν_r,
and n̅^i_s = 0, i=d_0+1, …, d. By construction the process (n̅_s)_s∈[0,∞)∈^d is progressively measurable, hence ν̅ is adapted, right-continuous and non-decreasing with
∫_0^s n̅^i_r dν̅_r=∫_0^s n^i_r dν_r
for all s∈[0,T-t] and i=1,…, d_0.
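On a time grid with piecewise-constant controls, this construction amounts to rescaling the first d_0 coordinates of n to unit length and integrating |n_[d_0]| against the increments of ν; the sketch below (our own illustration with random control data) checks that the cumulative control in the first d_0 coordinates is unchanged:

# Sketch: building (n_bar, nu_bar) from (n, nu) on a time grid, as in the proof:
# n_bar rescales the first d0 coordinates of n to unit length and nu_bar integrates
# |n_[d0]| against dnu. Piecewise-constant controls on a grid; illustrative data.
import numpy as np

d, d0, n_steps = 4, 2, 50
rng = np.random.default_rng(6)

n = rng.normal(size=(n_steps, d))
n /= np.linalg.norm(n, axis=1, keepdims=True)       # |n_s|_d = 1
dnu = rng.exponential(0.1, size=n_steps)            # nonnegative increments of nu

norm_d0 = np.linalg.norm(n[:, :d0], axis=1)
n_bar = np.zeros_like(n)
nonzero = norm_d0 > 0
n_bar[nonzero, :d0] = n[nonzero, :d0] / norm_d0[nonzero, None]
n_bar[~nonzero, 0] = 1.0                            # default direction e_1 when n_[d0] = 0
dnu_bar = norm_d0 * dnu                             # dnu_bar = |n_[d0]| dnu

# The controlled displacement in the first d0 coordinates is unchanged:
lhs = (n_bar[:, :d0] * dnu_bar[:, None]).sum(axis=0)
rhs = (n[:, :d0] * dnu[:, None]).sum(axis=0)
print("first d0 coordinates preserved:", np.allclose(lhs, rhs))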
Fix an arbitrary τ∈_t and (n,ν)∈^d_t. Let
τ_Rinf{s≥ 0 | |X_s^[n,ν],γ|_d∨|X^[n̅,ν̅]_s|_d≥ R},
and denote the stopped processes (X^[n,ν],γ_s∧τ∧τ_R)_s≥ 0 and (X^[n̅,ν̅]_s∧τ∧τ_R)_s≥ 0 by (X^γ,R_s)_s ≥ 0 and (X^R_s)_s ≥ 0, respectively. Let J^ γ,R X^γ,R-X^R and notice that J^ γ,R is a càdlàg semimartingale. To further simplify notation we set J=J^ γ,R for as long as γ and R are fixed. For each i=1,…,d we denote the i-th coordinate of J by J^i. By Meyer-Itô formula for semimartingales (see[In <cit.>, the author considers a càdlàg semi-martingale X starting from X_0=x, whereas here we have X_0-=x. Thus, we must account for a possible jump at time zero when using <cit.>.] <cit.>), noting that J_0-=0 and that the jump part of the process J is of bounded variation, we have for s>0 and for i=1,… d,
|J^i_s|= ∫_[0,s∧τ∧τ_R]sgn(J^i_λ-) dJ^i,c_λ+L_s∧τ∧τ_R^0(J^i)+∑_0≤λ≤ s∧τ∧τ_R(|J^i_λ|-|J^i_λ-|)
where J^i,c is the continuous part of the process J^i, sgn(y)=-1 for y<0, sgn(y)=1 for y>0 and sgn(0)=0. The process (L_t^0(J^i))_t≥ 0 is the semi-martingale local time at zero of (J^i_t)_t≥ 0.
Notice that J^i_λ=J^i_λ- for i=1,… d_0 and all λ≥ 0 because of (<ref>). Thus, fixing i ∈{1, …, d_0} and using the form of the dynamics of X^γ,R and X^R, we have
|J^i_s|= ∫_0^s∧τ∧τ_Rsgn(J^i_λ)(b^i(X_λ^γ,R)-b^i(X_λ^R)) dλ
+∫_0^s∧τ∧τ_Rsgn(J^i_λ)(σ^i(X_λ^γ,R;i)-σ^i(X_λ^R;i)) dW_λ+L_s∧τ∧τ_R^0(J^i),
where we notice that in the diffusion coefficient of |J^i|, the functions σ^i depend only on the i-th coordinate X^γ,R;i and X^R;i, as per (ii) in Assumption <ref>. Taking expectation in the equation above and removing the martingale term (σ^i has a linear growth so it is bounded on compacts) we get
_x[|J^i_s|]= _x[∫_0^s∧τ∧τ_R(J^i_λ)(b^i(X_λ^γ,R)-b^i(X_λ^R)) λ+L_s∧τ∧τ_R^0(J^i)]
≤ _x[∫_0^s|b^i(X_λ^γ,R)-b^i(X_λ^R)| λ+L_s^0(J^i)]
≤ _x[D_1∫_0^s|J_λ|_d λ+L_s^0(J^i)],
where in the first inequality we extend the integrals up to time s and for the second one we use Lipschitz continuity of b^i with the constant D_1 from Assumption <ref>. In order to estimate the local time, we follow <cit.>, which we can apply because J^i is a continuous semimartingale: for arbitrary ε∈(0,1)
_x[L_s^0(J^i)]≤ 4ε-2_x[∫_0^s(1_{J^i_λ∈[0,ε)}+1_{J^i_λ≥ε}e^{1-J^i_λ/ε})(b^i(X_λ^γ,R)-b^i(X_λ^R)) λ]
+1/ε_x[∫_0^s1_{J^i_λ>ε}e^{1-J^i_λ/ε}(σ^i(X_λ^γ,R;i)-σ^i(X_λ^R;i))^2 λ].
In order to estimate the final term above we are going to use that
(σ^i(X_λ^γ,R;i)-σ^i(X_λ^R;i))^2 ≤ D_1^2 |J^i_λ|^2 ≤ 2R D_1^2|J^i_λ|,
because σ is Lipschitz by (i) in Assumption <ref> and |J^i|≤|J^γ,R|_d≤ 2R. Denote by I_ε the last integral on the right-hand side of (<ref>) and pick ζ∈(1/2,1). We have
I_ε= 1/ε_x[∫_0^s1_{J^i_λ∈(ε,ε^ζ)}e^{1-J^i_λ/ε}(σ^i(X_λ^γ,R;i)-σ^i(X_λ^R;i))^2 λ]
+1/ε_x[∫_0^s1_{J^i_λ≥ε^ζ}e^{1-J^i_λ/ε}(σ^i(X_λ^γ,R;i)-σ^i(X_λ^R;i))^2 λ]
≤ 1/ε_x[D_1^2∫_0^s1_{J^i_λ∈(ε, ε^ζ)} |J^i_λ|^2 λ+e^{1-ε^{ζ-1}}∫_0^s1_{J^i_λ≥ε^ζ}(σ^i(X_λ^γ,R;i)-σ^i(X_λ^R;i))^2 λ]
≤ D_1^2ε^{2ζ-1}T+(2RD_1^2/ε)e^{1-ε^{ζ-1}}_x[∫_0^s|J^i_λ|_d λ],
where we use (<ref>) and the bounds
e^{1-J^i_λ/ε}1_{J^i_λ∈(ε,ε^ζ)}≤ 1 and e^{1-J^i_λ/ε}1_{J^i_λ≥ε^ζ}≤ e^{1-ε^{ζ-1}}.
Thanks to the Lipschitz continuity of b, we bound the first expectation on the right-hand side of (<ref>) by
4D_1_x[∫_0^s |J_λ|_dλ].
Combining those upper bounds we obtain
_x[L_s^0(J^i)]≤ 4ε+(4D_1+(2RD_1^2/ε)e^{1-ε^{ζ-1}})_x[∫_0^s|J_λ|_d λ]+D_1^2ε^{2ζ-1}T.
We insert this bound into (<ref>) and obtain the following estimate:
_x[|J^i_s|]≤ 4ε+(5D_1+(2RD_1^2/ε)e^{1-ε^{ζ-1}})_x[∫_0^s|J_λ|_d λ]+D_1^2ε^{2ζ-1}T,
for i=1,… d_0.
The coordinates J^i for i=d_0+1,… d are estimated slightly differently. From (<ref>)
|J^i_s|= ∫_0^s(J^i_λ)(b^i(X_λ^γ,R)-b^i(X_λ^R)) λ+∫_0^s(J^i_λ)(σ^i(X_λ^γ,R;i)-σ^i(X_λ^R;i)) W_λ
+γ∫_0^s(J^i_λ-)n^i_λ- ν^c_λ+L_s^0(J^i_λ)+∑_0≤λ≤ s(|J^i_λ|-|J^i_λ-|),
where ν^c is the continuous part of the process ν.
Notice that
|J^i_λ|= |J^i_λ-+γ n^i_λΔν_λ|≤ |J^i_λ-|+γΔν_λ,
which implies
γ∫_0^s(J^i_λ)n^i_λ ν_λ^c+∑_0≤λ≤ s(|J^i_λ|-|J^i_λ-|)≤γν_s.
Thus, we get from (<ref>) the inequality:
|J^i_s|≤ ∫_0^s(J^i_λ)(b^i(X_λ^γ,R)-b^i(X_λ^R)) λ+∫_0^s(J^i_λ)(σ^i(X_λ^γ,R;i)-σ^i(X_λ^R;i)) W_λ
+γν_s+L_s^0(J^i_λ).
Since J^i may have jumps, the upper bound on the local time <cit.> does not apply. Additional terms appear as detailed in Lemma <ref> in Appendix. Thus, we obtain
_x[L_s^0(J^i)]≤ 4ε-2_x[∫_0^s(1_{J^i_λ∈[0,ε)}+1_{J^i_λ≥ε}e^{1-J^i_λ/ε})(b^i(X_λ^γ,R)-b^i(X_λ^R)) λ]
-2_x[∫_0^s(1_{J^i_λ∈[0,ε)}+1_{J^i_λ≥ε}e^{1-J^i_λ/ε})γ n^i_λ ν_λ^c]
+_x[1/ε∫_0^s1_{J^i_λ>ε}e^{1-J^i_λ/ε}(σ^i(X_λ^γ,R;i)-σ^i(X_λ^R;i))^2 λ+2γ∑_0≤λ≤ sΔν_λ].
Repeating the same arguments as those we used to obtain (<ref>) and additionally noticing that
|_x[∫_0^s(1_{J^i_λ∈[0,ε)}+1_{J^i_λ≥ε}e^{1-J^i_λ/ε})γ n^i_λ ν_λ^c]|+_x[γ∑_0≤λ≤ sΔν_λ]≤γ_x[ν_s]
yields
_x[|J^i_s|]≤ 4ε+(5D_1+(2RD_1^2/ε)e^{1-ε^{ζ-1}})_x[∫_0^s|J_λ|_d λ]+D_1^2ε^{2ζ-1}T +3γ_x[ν_s].
Now, combining (<ref>) and (<ref>) we have
_x[|J_s|_d]≤ ∑_i=1^d_x[|J^i_s|]
≤ 4dε+d(5D_1+(2RD_1^2/ε)e^{1-ε^{ζ-1}})_x[∫_0^s|J_λ|_d λ]+d D_1^2ε^{2ζ-1}T +3dγ_x[ν_s].
Sending ε↓0 and recalling now that J=J^γ,R and ζ∈(1/2,1), we get
_x[|J^ γ,R_s|_d]≤ 5dD_1_x[∫_0^s|J^ γ,R_λ|_d λ] + 3dγ_x[ν_s].
By Gronwall's lemma, there is a constant K_3>0, depending only on d, D_1 and T, such that
_x[|J^ γ,R_s|_d]≤ γ K_3 _x[ν_T-t], for any s∈[0,T-t].
Passing to the limit as R→∞ and using Fatou's lemma, we get
_x[|X_s∧τ^[n,ν],γ-X_s∧τ^[n̅,ν̅]|_d]≤lim inf_R→∞_x[|J^ γ,R_s|_d]≤γ K_3 _x[ν_T-t], for any s∈[0,T-t].
Hence, the proof is completed by setting s=T-t and recalling that τ≤ T-t.
Another lemma of a similar nature allows us to compare the dynamics induced by a generic control (n,ν)∈^d_0_t to its uncontrolled counterpart.
Fix (t,x)∈^d+1_0,T. Let (n,ν)∈^d_0_t and τ∈_t. Then
_x[|X^[n,ν]_τ-X^[e_1,0]_τ|_d]≤ K_3_x[ν_T-t],
with the same constant K_3>0 as in Proposition <ref>.
Similarly as in the proof of Proposition <ref>, we denote X=X^[n,ν] and X^0=X^[e_1,0], and define
τ_R:=inf{s≥ 0 | |X_s|_d∨|X^0_s|_d≥ R}.
We denote the two processes (X_t∧τ∧τ_R)_t≥ 0 and (X^0_t∧τ∧τ_R)_t≥ 0 by X^R and X^0,R, respectively. Let J^R X^R-X^0,R and notice that J^R is a càdlàg semimartingale. To further simplify notation we set J=J^R for as long as R is fixed. For each i=1,…,d we denote the i-th coordinate of J by J^i. Now, for i=1,… d_0 repeating verbatim, for γ = 1, the same arguments as in the proof of (<ref>) we obtain
_x[|J^i_s|]≤ 4ε+(5D_1+(2RD_1^2/ε)e^{1-ε^{ζ-1}})_x[∫_0^s|J_λ|_d λ]+D_1^2ε^{2ζ-1}T +3_x[ν_s].
Instead, for i=d_0+1,…, d, the same arguments that yield (<ref>) now give us
_x[|J^i_s|]≤ 4ε+(5D_1+(2RD_1^2/ε)e^{1-ε^{ζ-1}})_x[∫_0^s|J_λ|_d λ]+D_1^2ε^{2ζ-1}T.
Therefore the same conclusions as in Proposition <ref> hold but with γ=1.
Combining the above results with Lemma <ref> we obtain the following corollary.
Fix (t,x)∈^d+1_0,T.
Let (n,ν)∈^d_0,opt_t,x and τ∈_t. Then there is a constant K_4>0 such that
_x[|X_τ^[n,ν]|_d]≤ K_4(1+|x|_d).
It is enough to observe that
_x[|X^[n,ν]_τ|_d]≤_x[|X^[n,ν]_τ-X^[e_1,0]_τ|_d]+_x[|X^[e_1,0]_τ|_d].
The first term above is bounded as in Lemma <ref> whereas standard SDE estimates give
_x[|X^[e_1,0]_τ|_d]≤_x[sup_s∈[0,T-t]|X^[e_1,0]_s|_d]≤ c(1+|x|_d),
for some c>0, thanks to Assumption <ref>. Then, using Lemma <ref> yields the desired result.
§ CONVERGENCE OF THE APPROXIMATING PROBLEMS
In this section we first study the limit as γ→ 0 and then we relax the smoothness assumptions made in Assumption <ref>. We observe that we could alternatively fix γ and relax Assumption <ref> before passing to the limit as γ→ 0. That approach motivates Remark <ref>.
§.§ Limits as γ→0
Throughout this subsection we enforce Assumptions <ref> and <ref>.
The pointwise limit u:=lim_γ→ 0u^γ exists on ^d+1_0,T. Moreover, u coincides with the value of the game with payoff (<ref>), i.e., u=v=v=v, and there exists C>0 such that
|u^γ(t,x)-v(t,x)|≤ C(1+|x|_d)γ^1/2, for all (t,x)∈^d+1_0,T.
Let u^γ be the value of the game described in Theorem <ref>.
We introduce ulim inf_γ→0 u^γ and ulim sup_γ→0 u^γ. We want to prove that
u(t,x)≤v(t,x) and u(t,x)≥v(t,x),
for all (t,x)∈^d+1_0,T, so that u=u=v=v=v as claimed.
Fix (t,x)∈^d+1_0,T. We first prove that u≥v. Let (n,ν)∈^d_t be an η-optimal control for u^γ(t,x), i.e.,
sup_σ∈^γ_t,x(n,ν,σ)≤ u^γ(t,x)+η.
With no loss of generality, thanks to Lemma <ref> we can assume (n,ν)∈^d,opt_t,x. Consider the associated (n̅,ν̅)∈𝒜^d_0_t constructed as in (<ref>). Recall the processes X^[n,ν],γ and X^[n̅,ν̅] as in (<ref>) and (<ref>), respectively. For notational simplicity, denote X^γ = X^[n,ν],γ and X = X^[n̅,ν̅]. Let τ∈_t be an η-optimal stopping time for sup_σ∈_t_t,x (n̅, ν̅, σ), which implies v(t,x) ≤_t,x (n̅, ν̅, τ) + η. We have
u^γ(t,x)-v(t,x)
≥_t,x^γ(n,ν,τ)-_t,x(n̅,ν̅,τ)-2η
=_x[e^-rτ(g(t+τ,X_τ^γ)-g(t+τ,X_τ))+∫_0^τe^-rs(h(t+s,X_s^γ)-h(t+s,X_s)) s
+∫_[0,τ]e^-rsf^γ(t+s) ν_s-∫_[0,τ]e^-rsf(t+s) ν̅_s]-2η
≥_x[e^-rτ(g(t+τ,X_τ^γ)-g(t+τ,X_τ))+∫_0^τe^-rs(h(t+s,X_s^γ)-h(t+s,X_s)) s]-2η
≥-K_x[|X_τ^γ-X_τ|_d]-K_x[∫_0^T-t|X_s^γ-X_s|_d s ]-2η
≥ -K_x[|X_τ^γ-X_τ|_d]-KTsup_s∈[0,T-t]_x[|X_s^γ-X_s|_d] -2η
where K>0 is the same as in (<ref>). The first inequality is by the choice of (n,ν) and τ. The second inequality holds because by the definition of ν̅ in (<ref>) we have ν̅_s(ω)ν_s(ω)=|n_[d_0](s,ω)|_d_0≤ 1 for all (s,ω)∈_+×Ω and f^γ≥ f by (<ref>). The third inequality is by the Lipschitz continuity of g and h, and the final one is by Fubini's theorem. Using Proposition <ref> combined with _x[|ν_T-t|]≤ K_2(1+|x|_d) from Lemma <ref> we have that
u^γ(t,x)-v(t,x)≥ -K(1+T)γ K_2K_3(1+|x|_d)-2η.
Taking the liminf as γ↓0 we get
u(t,x)-v(t,x)≥-2η.
By the arbitrariness of η, we obtain u(t,x)≥v(t,x) as claimed.
We prove now that u(t,x)≤v(t,x). Let τ∈_t be an η-optimal stopping time for u^γ, i.e.,
inf_(n,ν)∈^d_t^γ_t,x(n,ν,τ)≥ u^γ(t,x)-η.
Let (n,ν)∈𝒜^d_0_t be an η-optimal control for
inf_(n̂, ν̂) ∈𝒜^d_0_t_t,x(n̂, ν̂, τ),
which means that v(t,x) ≥_t,x(n, ν, τ) - η. Thanks to Lemma <ref> we can assume without loss of generality that (n,ν)∈^d_0,opt_t,x. Notice that (n,ν)∈𝒜^d_0,opt_t,x⊂^d_t is an admissible control in the game with value u^γ and, moreover,
X_s^[n,ν],γ=X_s^[n,ν] for all s∈[0,T-t], _x-a.s.
Thus, using the above indistinguishability and recalling f^γ≤ f+√(γ)K we easily obtain
u^γ(t,x)-v(t,x)≤ _t,x^γ(n,ν,τ)-_t,x(n,ν,τ)+2η
= _x[∫_[0,τ_γ]e^-rs(f^γ(t+s)-f(t+s)) ν_s]+2η
≤ √(γ) K_x[|ν_T-t|] +2η≤√(γ) K K_2(1+|x|_d)+2η,
where the final inequality is by Lemma <ref>. Taking limsup as γ↓ 0 and thanks to the arbitrariness of η we get u(t,x)≤v(t,x) as claimed.
As a result, the pointwise limit lim_γ→0 u^γ is well-defined and u:=lim_γ→0 u^γ=v=v=v. Combining (<ref>) and (<ref>) for γ∈(0,1) yields (<ref>).
Since |∇ u^γ|_γ≤ f^γ for every γ>0, the next corollary holds.
The value function v is Lipschitz in the first d_0 spatial coordinates with constant bounded by f, i.e., |∇^0 v(t,x)|_d_0≤ f(t) for a.e. (t,x)∈^d+1_0,T.
For (n,ν)∈^d_0_t, we recall stopping times
τ_*=inf{s≥ 0| v(t+s,X_s^[n,ν])-g(t+s,X_s^[n,ν])=0},
σ_*=inf{s≥ 0| v(t+s,X_s-^[n,ν])-g(t+s,X_s-^[n,ν])=0},
and
θ_*=τ_*∧σ_*.
Fix (t,x)∈^d+1_0,T. For any pair (n,ν)∈^d_0_t we have
lim inf_γ↓ 0θ^γ_*≥θ_*, _x-a.s.,
where θ^γ_* is defined in (<ref>).
Fix (t,x)∈^d+1_0,T and take (n,ν)∈^d_0_t. Let
Z_s=(v-g)(t+s,X^[n,ν]_s) and Z^γ_s=(u^γ-g)(t+s,X^[n,ν]_s).
Since (n,ν)∈^d_0_t, we have X^[n, ν]≡ X^[n, ν], γ, so θ^γ_* = inf{s ≥ 0: min(Z^γ_s, Z^γ_s-) = 0 }. Similarly, θ_* = inf{s ≥ 0: min(Z_s, Z_s-) = 0 }.
For ω∈Ω such that θ_*(ω)=0 the claim in the lemma is trivial. Let ω∈Ω be such that θ_*(ω)>0. Take arbitrary δ<θ_*(ω). Then, by the definition of θ_* we have
min(Z_s(ω),Z_s-(ω))>0 for all s∈[0,δ].
Furthermore,
inf_0≤ s≤δmin(Z_s(ω),Z_s-(ω))=:λ_δ,ω > 0,
as the mapping s↦min(Z_s(ω),Z_s-(ω)) is lower semi-continuous so it attains its infimum on [0, δ].
Since (n,ν) is fixed and _x[ν_T-t^2]<∞ by definition of ^d_0_t, for almost every ω there is a compact K_δ,ω⊂^d+1_0,T that contains the trajectories
s↦ (t+s,X^[n,ν]_s(ω)) and s↦ (t+s,X^[n,ν]_s-(ω))
for s∈[0,δ]. Then, uniform convergence of u^γ to v on K_δ,ω (see (<ref>)) yields
lim_γ→ 0sup_0≤ s≤δ(|Z^γ_s(ω)-Z_s(ω)|+|Z^γ_s-(ω)-Z_s-(ω)|)=0.
Hence, for all sufficiently small γ>0, (<ref>) yields
inf_0≤ s≤δmin(Z^γ_s(ω),Z^γ_s-(ω))≥λ_δ,ω/2,
which implies
lim inf_γ↓ 0θ_*^γ(ω)≥δ.
By arbitrariness of δ, we conclude
lim inf_γ↓ 0θ_*^γ(ω)≥θ_*(ω).
The result holds for a.e. ω. That completes the proof of the lemma.
We will extract from the uniform convergence of u^γ to v and from Lemma <ref> the optimality of the stopping time θ_*. This notion of optimality is discussed in detail in Remark <ref>.
Fix (t,x)∈^d+1_0,T. For any (n,ν)∈^d_0_t we have
v(t,x) ≤_t,x(n,ν,θ_*),
where we recall that θ_*=θ_*(n,ν) depends on the control pair. Furthermore,
v(t,x)=inf_(n,ν)∈^d_0_t_t,x(n,ν,θ_*(n,ν)),
hence θ_* is optimal for the stopper in the game with value v.
We follow an approach inspired by <cit.> in optimal stopping.
Notice that (n,ν)∈_t^d_0⊂_t^d and X^[n,ν],γ=X^[n,ν], i.e., the processes are indistinguishable. Since θ_*^γ∧θ_*≤θ^γ_* it is not difficult to verify that (<ref>) continues to hold when we replace the pair (t+θ_*^γ,X_θ_*^γ^[n,ν],γ) therein by (t+θ_*^γ∧θ_*,X_θ_*^γ∧θ_*^[n,ν]). That is, we have
u^γ(t,x)≤_x[ e^-r(θ^γ_*∧θ_*)u^γ(t+θ_*^γ∧θ_*,X_θ_*^γ∧θ_*-^[n,ν])+∫_0^θ_*^γ∧θ_*e^-rsh(t+s,X_s^[n,ν]) s
-∫_0^θ_*^γ∧θ_*e^-rs⟨∇ u^γ(t+s,X_s-^[n,ν]), n_s⟩_γ ν_s^c
-∑_s<θ_*^γ∧θ_*e^-rs∫_0^Δν_s⟨∇ u^γ(t+s,X_s-^[n,ν]+λ n_s),n_s⟩_γ λ].
Further using that |∇ u^γ|_γ≤ f^γ leads to
u^γ(t,x)≤ _x[e^-r(θ^γ_*∧θ_*)u^γ(t+θ_*^γ∧θ_*,X_θ_*^γ∧θ_*-^[n,ν])+∫_0^θ_*^γ∧θ_*e^-rsh(t+s,X_s^[n,ν]) s
+∫_[0,θ_*^γ∧θ_*)e^-rsf^γ(t+s) ν_s]
≤ _x[e^-r(θ^γ_*∧θ_*)v(t+θ_*^γ∧θ_*,X_θ_*^γ∧θ_*-^[n,ν])+∫_0^θ_*^γ∧θ_*e^-rsh(t+s,X_s^[n,ν]) s
+∫_[0,θ_*^γ∧θ_*)e^-rsf^γ(t+s) ν_s]+Cγ^1/2(1+_x[|X^[n,ν]_θ^γ_*∧θ_*|_d]),
where in the second inequality we used (<ref>).
We now let γ↓ 0 and notice that θ^γ_*∧θ_*→θ_* by Lemma <ref>. Since the mappings
s↦ X^[n,ν]_s- and s↦∫_[0,s)e^-ruf(t+u)ν_u
are left-continuous _x-a.s. and θ_*^γ∧θ_* converges to θ_* from below (although not strictly from below), we can conclude that for a.e. ω∈Ω
lim_γ→ 0X^[n,ν]_θ_*^γ∧θ_*-=X^[n,ν]_θ_*- and lim_γ→ 0∫_[0,θ_*^γ∧θ_*)e^-rsf^γ(t+s)ν_s=∫_[0,θ_*)e^-rsf(t+s)ν_s.
Moreover, we can use dominated convergence thanks to, e.g., Corollary <ref> and the identification ^d_0_t,x=^d_0,opt_t,x following Lemma <ref>. That yields
v(t,x)≤_x[e^-rθ_*v(t+θ_*,X_θ_*-^[n,ν])+∫_0^θ_*e^-rsh(t+s,X_s^[n,ν]) s +∫_[0,θ_*)e^-rsf(t+s) ν_s].
We now follow similar arguments as in the proof of Lemma <ref> to show that on the event {σ_*<τ_*} we have
v(t+σ_*,X_σ_*-^[n,ν])=g(t+σ_*,X_σ_*-^[n,ν]), so
v(t+θ_*,X_θ_*-^[n,ν])= v(t+σ_*,X_σ_*-^[n,ν])=g(t+σ_*,X_σ_*-^[n,ν])
= g(t+σ_*,X_σ_*^[n,ν])-∫_0^Δν_σ_*⟨∇^0 g(t+σ_*,X^[n,ν]_σ_*-+λ n_σ_*),n_σ_*⟩λ
≤ g(t+σ_*,X_σ_*^[n,ν])+f(t+σ_*)Δν_σ_*,
where the inequality uses the bound on the gradient ∇^0 g imposed by Assumption <ref>(iii).
On the event {σ_*≥τ_*} we have
v(t+θ_*,X_θ_*-^[n,ν]) = v(t+τ_*,X_τ_*-^[n,ν])
=v(t+τ_*,X_τ_*^[n,ν])-∫_0^Δν_τ_*⟨∇^0 v(t+τ_*,X^[n,ν]_τ_*-+λ n_τ_*),n_τ_*⟩λ
≤ v(t+τ_*,X_τ_*^[n,ν])+f(t+τ_*)Δν_τ_*,
= g(t+τ_*,X_τ_*^[n,ν])+f(t+τ_*)Δν_τ_*,
where the inequality uses the bound on the gradient ∇^0 v from Corollary <ref> and the last equality follows from the definition of τ_* and the right-continuity of the process X^[n,ν].
Combining the upper bounds above with (<ref>) yields
v(t,x)≤_t,x(n,ν,θ_*),
where we emphasise that θ_* depends on the control (n,ν). By arbitrariness of (n,ν)∈^d_0_t we can conclude the proof of the theorem because also v(t,x)≥inf_(n,ν)∈^d_0_t_t,x(n,ν,θ_*) from v = v.
§.§ Relaxing Assumption <ref> into Assumption <ref>
In this section we prove Theorem <ref> via a localisation and mollification procedure, and using the results from the section above.
For technical reasons, we assume first that the functions g and h are uniformly bounded, i.e., g_∞+h_∞<∞, and then we relax this condition in the second part of the proof.
Fix a compact set Σ̂⊂^d and denote Σ = [0, T] ×Σ̂. There is a family (ζ_j)_j∈ = (ζ_j^Σ)_j∈ of mollifiers in ^d+1_0,T and a sequence (c_j)_j∈ of positive numbers converging to 0 such that, denoting g^j:=g*ζ_j, h^j:=h*ζ_j and f^j:=(f+c_j)*ζ_j, we have
g^j-g_C^0(Σ)+h^j-h_C^0(Σ)≤ K_Σ/j for any j∈ and a constant K_Σ > 0,
and
0≤ f^j-f≤ c_0/j, for any j∈ and a constant c_0>0.
Note that the definition of f^j is with an abuse of notation as f depends only on t: for this mollification we extend f into the spatial dimension as a constant function.
Recall that B_k⊂^d denotes the ball of radius k centred in the origin. Let (ξ_k)_k∈⊂ C^∞_c(^d) be a sequence of cut-off functions such that ξ_k(x)=1 for x∈ B_k and ξ_k(x)=0 for x∉ B_2k. We find it convenient to construct the sequence as follows: let
ξ(z):={[ 1 z≤ 0,; 0 z≥ 1,; exp(1/(z-1))/[exp(1/(z-1))+exp(-1/z)], z∈(0,1), ].
so that ξ'_∞=2 and let us define ξ_k(x):=ξ((|x|_d-k)/k) for x∈^d. Then
|∇^0ξ_k(x)|^2_d_0≤ |∇ξ_k(x)|_d^2=(1/k^2)|ξ'((|x|_d-k)/k)|^2≤ 4/k^2.
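Indeed, for |x|_d≤ k we have (|x|_d-k)/k≤ 0 and hence ξ_k(x)=1, while for |x|_d≥ 2k we have (|x|_d-k)/k≥ 1 and hence ξ_k(x)=0, so the family (ξ_k)_k∈ has the required cut-off property.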
Now we set g^j,k:=g^jξ_k, h^j,k:=h^jξ_k, and
f^j,k(t):= f^j(t)+2g_∞/k,
where we construct g^j, h^j and f^j using the above mollification procedure with Σ̂=B_k.
With such choice of f^j,k, recalling that |∇^0g^j|_d_0≤ f^j by (iii) in Assumption <ref> and using the bound in (<ref>), we have
|∇^0g^j,k(t,x)|_d_0^2
=(ξ_k(x))^2|∇^0g^j(t,x)|_d_0^2+2ξ_k(x)g^j(t,x)⟨∇^0 g^j(t,x),∇^0ξ_k(x)⟩+(g^j(t,x))^2|∇^0ξ_k(x)|_d_0^2
≤ (f^j(t))^2+4g_∞f^j(t)/k+4g^2_∞/k^2
=(f^j(t)+2g_∞/k)^2=(f^j,k(t))^2.
Fix (t,x)∈^d+1_0,T. For an arbitrary treble [(n,ν),τ]∈^d_0_t×_t we consider the game with expected payoff
^j,k_t,x(n,ν,τ)
= _x[e^-rτg^j,k(t+τ,X_τ^[n,ν])+∫_0^τ e^-rsh^j,k(t+s,X_s^[n,ν]) s +∫_[0,τ] e^-rsf^j,k(t+s) ν_s ].
Since f^j∈ C^∞([0,T]) and g^j,k,h^j,k∈ C^∞_c,sp(^d+1_0,T), Theorems <ref> and <ref> yield that there exists a value v^j,k of the game and an optimal stopping time θ_*^j,k=τ^j,k_*∧σ^j,k_*, with
τ_*^j,kinf{s≥ 0 | v^j,k(t+s,X_s^[n,ν])=g^j,k(t+s,X_s^[n,ν])},
σ_*^j,kinf{s≥ 0 | v^j,k(t+s,X_s-^[n,ν])=g^j,k(t+s,X_s-^[n,ν])}.
Finally, we set
v^∞:=lim sup_k→∞lim sup_j→∞v^j,k
and proceed to show that v^∞≥v and v^∞≤v.
Let Assumptions <ref> and <ref> hold and assume in addition that g_∞+h_∞<∞. For any (t,x)∈^d+1_0,T we have
v^∞(t,x)=v(t,x)=v(t,x),
hence the value v of the game (<ref>) exists.
We start by proving v^∞≤v. Take θ^j,k_* defined above. Then
v(t,x)≥inf_(n,ν)∈^d_0_t_t,x(n,ν,θ^j,k_*),
as θ^j,k_* is suboptimal for v. For any
η>0 there is a pair (n^j,k,η,ν^j,k,η) such that
inf_(n,ν)∈^d_0_t_t,x(n,ν,θ^j,k_*)≥_t,x(n^j,k,η,ν^j,k,η,θ^j,k_*)-η.
Moreover, from the optimality of θ_*^j,k for v^j,k in the sense of Theorem <ref>, we have
v^j,k(t,x)≤^j,k_t,x(n^j,k,η,ν^j,k,η,θ^j,k_*).
For the ease of notation we simply denote (θ^j,k_*,n^j,k,η,ν^j,k,η)=(θ,n,ν) in what follows. Combining the two bounds above and recalling that g^j,k=g^j and h^j,k=h^j in [0,T]× B_k we obtain
v^j,k(t,x)-v(t,x)
≤^j,k_t,x(n,ν,θ)-_t,x(n,ν,θ)+η
≤_x[|g^j(t+θ,X_θ^[n,ν])-g(t+θ,X_θ^[n,ν])|1_{X_θ^[n,ν]∈ B_k}+2e^-rθg_∞1_{X_θ^[n,ν]∉ B_k}]
+_x[∫_0^θ|h^j,k(t+s,X_s^[n,ν])-h(t+s,X_s^[n,ν])|1_{X_s^[n,ν]∈ B_k} s]
+_x[2h_∞∫_0^θe^-rs1_{X_s^[n,ν]∉ B_k} s+ν_T-t(c_0/j+2g_∞/k)]+η
≤(1+T)K_B_kj+(c_0j+2kg_∞)_x[ν_T-t]+2g_∞_x(X_θ^[n,ν]∉ B_k)
+2h_∞∫_0^T-t_x(X_s^[n,ν]∉ B_k) s+η,
where for the second inequality we used f^j,k-f≤ c_0/j+2g_∞/k, whereas for the final inequality we used (<ref>).
From the proof of Lemma <ref>, we can restrict our attention to processes (n,ν) ∈^d_0_t,x for which [ν_T-t]≤ c(1+|x|_d), where the constant c can be chosen independently of (j,k). From Corollary <ref> and Markov's inequality we deduce that
_x(X_θ^[n,ν]∉ B_k)≤1/k_x[|X_θ^[n,ν]|_d] ≤K̃_4(1+|x|_d)/k
for some K̃_4 > 0 and the same upper bound also holds for _x(X_s^[n,ν]∉ B_k). Now, letting j→∞ first and then letting k→∞ in (<ref>) we obtain v^∞≤v + η. Finally we send η→ 0.
Next we are going to show that v^∞≥v. Fix an arbitrary η>0. Take (n^j,k,η,ν^j,k,η)∈^d_0_t such that
v^j,k(t,x) ≥sup_τ∈_t^j,k_t,x(n^j,k,η,ν^j,k,η,τ) - η.
Then there is τ^j,k,η such that
v(t,x) ≤sup_τ∈_t_t,x(n^j,k,η,ν^j,k,η, τ) ≤_t,x(n^j,k,η,ν^j,k,η, τ^j,k,η) + η,
where the first inequality follows from suboptimality of (n^j,k,η,ν^j,k,η) for v(t,x). Relabelling
[(n^j,k,η,ν^j,k,η),τ^j,k,η]=[(n,ν),τ],
the above inequalities give the bound
v(t,x)-v^j,k(t,x)≤_t,x(n,ν,τ)-^j,k_t,x(n,ν,τ)+2η.
Similar estimates as in (<ref>) continue to hold, with a simplification that the inequality f^j,k≥ f allows us to drop the second term in the final expression therein. Then, passing to the limit in j and, then, in k we arrive at the desired conclusion.
It is worth noticing that if we introduce
v̂^∞:=lim inf_k→∞lim inf_j→∞ v^j,k,
then we can repeat the same arguments of proof as in Lemma <ref> to show that v̂^∞=v=v=v. Hence,
v=v=v=lim_k→∞lim_j→∞v^j,k.
Moreover, it is clear from the proof (see in particular (<ref>)) that the convergence is uniform on any compact subset of ^d+1_0,T. This fact will be used later to prove convergence of optimal stopping times.
We now want to extend the result above to the case of unbounded g and h. Recalling that g,h≥ 0, we can approximate them with bounded ones by setting g_m=g∧ m and h_m=h∧ m for m∈. Let us denote by v_m the value of the game associated with the functions g_m and h_m, which exists by Lemma <ref>. By construction v_m≤ v_m+1 and we denote the limit
v_∞:=lim_m→∞v_m .
Let Assumptions <ref> and <ref> hold. For any (t,x)∈^d+1_0,T we have
v_∞(t,x)=v(t,x)=v(t,x),
hence the value v of the game (<ref>) exists.
Since h_m≤ h and g_m≤ g, it is immediate to verify that v_∞≤v. It remains to verify that v_∞≥v.
Thanks to (sub)linear growth of g and h there is a sequence (R(m))_m∈ such that R(m)↑∞ as m→∞ and g_m=g and h_m=h on [0,T]× B_R(m). Let us denote by ^m_t,x the expected payoff of the game with the payoff functions g_m and h_m.
For fixed η>0, we can find a pair (n,ν)=(n^m,η,ν^m,η)∈^d_0_t and a stopping time τ=τ^m,η∈_t such that
v(t,x)-v_m(t,x)
≤_t,x(n,ν,τ)-^m_t,x(n,ν,τ)+2η
≤_x[g(t+τ,X_τ^[n,ν])1_{X^[n,ν]_τ∉ B_R(m)}+∫_0^T-te^-rs1_{X^[n,ν]_s∉ B_R(m)}h(t+s,X^[n,ν]_s) s]+2η,
where we obtained the inequality simply by dropping the positive terms h_m and g_m on the events when the process is outside the ball B_R(m).
Thanks to (ii) in Assumption <ref> (i.e., strict sublinear growth of g and h) we can use Hölder inequality to obtain
v(t,x)-v_m(t,x)-2η
≤_x[K_1(1+|X_τ^[n,ν]|^β_d)1_{X^[n,ν]_τ∉ B_R(m)}+∫_0^T-tK_1(1+|X^[n,ν]_s|^β_d)1_{X^[n,ν]_s∉ B_R(m)} s]
≤ K_1{(_x[|X_τ^[n,ν]|_d])^β(_x (X^[n,ν]_τ∉ B_R(m)))^1-β
+∫_0^T-t(_x[|X_s^[n,ν]|_d])^β(_x(X^[n,ν]_s∉ B_R(m)))^1-β s}.
+K_1_x(X^[n,ν]_τ∉ B_R(m))+K_1∫_0^T-t_x(X^[n,ν]_s∉ B_R(m)) s.
For any stopping time σ∈_t, using Markov's inequality and an estimate for _x[|X_σ^[n,ν]|_d] from Corollary <ref> gives
_x(X^[n,ν]_σ∉ B_R(m))≤_x[|X_σ^[n,ν]|_d]/R(m)≤K_4(1+|x|_d)/R(m).
Therefore, letting m→∞ in (<ref>) we find v≤ v_∞+2η and, by arbitrariness of η>0, we conclude the proof.
As in Lemma <ref>, also in the lemma above the convergence of v_m to v is uniform on compact subsets of ^d+1_0,T. This is immediately deduced from (<ref>) and the concluding estimates in the proof.
Since |∇^0 v^j,k|_d_0≤ f^j,k a.e. for every j,k>0 (Corollary <ref>), then |∇^0 v^∞|_d_0≤ f a.e. By the same rationale, also v_∞ satisfies the same bound. That is stated formally in the next corollary.
Under Assumptions <ref> and <ref>, the value function v is Lipschitz in the first d_0 spatial coordinates with constant bounded by f in the sense that |∇^0 v(t,x)|_d_0≤ f(t) for a.e. (t,x)∈^d+1_0,T.
The last result in this section concerns an optimal stopping time for the value v=v_∞. Given (n,ν)∈^d_0_t, set θ_*= τ_*∧σ_* as in (<ref>).
Let Assumptions <ref> and <ref> hold. For any (t,x)∈^d+1_0,T and (n,ν)∈^d_0_t we have
v(t,x) ≤_t,x(n,ν,θ_*),
where we recall that θ_*=θ_*(n,ν) depends on the control pair. Hence
v(t,x)=inf_(n,ν)∈^d_0_t_t,x(n,ν,θ_*(n,ν))
and θ_* is optimal for the stopper in the game with value v.
The proof follows arguments similar to those used in the proof of Theorem <ref>, so we provide only a sketch.
Let g and h be functions that satisfy Assumption <ref>. For m∈, consider their truncations g_m(t,x)= g(t,x)∧ m and h_m(t,x)= h(t,x)∧ m.
Let us further mollify and localise those functions to fit into the setting of Theorem <ref> as in the beginning of Section <ref>, i.e., let us denote
f^j,k_m(t):= ((f+c^m,k_j)*ζ^m,k_j)(t)+2m/k,
g^j,k_m(t,x):= (g_m*ζ^m,k_j)(t,x)ξ_k(x),
h^j,k_m(t,x):= (h_m*ζ^m,k_j)(t,x)ξ_k(x),
where (ζ^m,k_j)_j∈ is a sequence of standard mollifiers and (c^m,k_j)_j∈ is a sequence of positive numbers so that estimates (<ref>)-(<ref>) hold, and the cut-off functions (ξ_k)_k∈ are obtained with the construction in (<ref>).
Denote by v^j,k_m the value function of the game with the payoff functions f^j,k_m, g^j,k_m and h^j,k_m (cf. (<ref>)). An optimal stopping time for this game is θ_*^j,k,m:=τ_*^j,k,m∧σ_*^j,k,m with
τ_*^j,k,minf{s≥ 0 | v^j,k_m(t+s,X_s^[n,ν])=g^j,k_m(t+s,X_s^[n,ν])},
σ_*^j,k,minf{s≥ 0 | v^j,k_m(t+s,X_s-^[n,ν])=g^j,k_m(t+s,X_s-^[n,ν])},
for an arbitrary pair (n,ν)∈^d_0_t.
The same arguments as in the proof of Lemma <ref> and the uniform convergence of v^j,k_m to v on compact subsets of ^d+1_0,T (Lemmas <ref> and <ref>, and Remarks <ref> and <ref>) yield
lim inf_m→∞lim inf_k→∞lim inf_j→∞θ^j,k,m_*≥θ_*, _x-a.s.
Hence,
lim_m→∞lim_k→∞lim_j→∞θ_*^j,k,m∧θ_*= θ_*, _x-a.s.
The functions f^j,k_m, g^j,k_m, h^j,k_m satisfy the assumptions of Theorem <ref>. Then, by the same arguments as in the proof of that theorem, replacing u^γ, θ^γ_*, f^γ, g and h by v^j,k_m, θ_*^j,k,m, f^j,k_m, g^j,k_m and h^j,k_m, respectively, we obtain
v^j,k_m(t,x)≤_x[ e^-r(θ_*^j,k,m∧θ_*)v^j,k_m(t+θ_*^j,k,m∧θ_*,X_θ_*^j,k,m∧θ_*-^[n,ν])
+∫_0^θ_*^j,k,m∧θ_*e^-rsh^j,k_m(t+s,X_s^[n,ν]) s+∫_[0,θ_*^j,k,m∧θ_*)e^-rsf^j,k_m(t+s) ν_s].
We pass to the limit as j→∞, k→∞ and m→∞ (with the limits taken in the stated order). Using (<ref>) and similar arguments as in (<ref>), we have that _x-a.s.
lim_j,k,m→∞X^[n,ν]_θ_*^j,k,m∧θ_*-=X^[n,ν]_θ_*- and lim_j,k,m→∞∫_[0,θ_*^γ∧θ_*)e^-rsf^j,k,m(t+s)ν_s=∫_[0,θ_*)e^-rsf(t+s)ν_s.
We apply dominated convergence theorem to (<ref>) justified by the linear growth of all functions involved and the fact that one can restrict the attention to controls (n,ν)∈^d_0_t such that _x[|ν_T-t|] ≤ c (1 + |x|_d) / inf_m,j,k f^j,k_m(T) ≤c̃ (1 + |x|_d) for some c̃ < ∞ (c.f. Lemma <ref> and arguments leading to (<ref>)). In the limit we obtain
v(t,x)≤ _x[e^-rθ_*v(t+θ_*,X_θ_*-^[n,ν])+∫_0^θ_*e^-rsh(t+s,X_s^[n,ν]) s +∫_[0,θ_*)e^-rsf(t+s) ν_s].
Corollary <ref> and the same ideas as in (<ref>) and (<ref>) yield the estimate
v(t,x)≤_t,x(n,ν,θ_*(n,ν)),
which concludes the proof.
We combine results from this section to prove Theorem <ref>.
Lemma <ref> shows that the game with expected payoff (<ref>) admits a value function v. The optimality of the stopping time θ_* is asserted in Lemma <ref>. The continuity of the value function v follows from the continuity of v^j,k_m in the proof of Lemma <ref> (from Theorem <ref>) and the uniform convergence of v^j,k_m to v on compact sets (see the arguments in the proof of the aforementioned lemma, or directly Remarks <ref> and <ref>). Corollary <ref> implies the Lipschitz continuity of v in the first d_0 spatial coordinates in the sense required in the statement of the theorem. Finally, the growth condition is easily deduced from (ii) in Assumption <ref> and the uniform bound from Corollary <ref>.
§
§.§ Proof of Theorem <ref>
This proof repeats almost verbatim the one from <cit.>. Here we only summarise the main steps and highlight a few minor changes.
The game considered here satisfies <cit.> with the only exception that the dynamics of X^[n,ν],γ in (<ref>) has a weight γ in the last d_1 coordinates of the control process (n_t)_t≥ 0. Such weight does not appear in the controlled state-dynamics in <cit.> but we will show that it only induces minor changes to the proof.
Following the methodology in <cit.> we study the penalised problem
∂_t u+ u-ru=-h-(1/δ)(g-u)^++ψ_ε(|∇ u|_γ^2-(f^γ)^2), on [0,T)×^d,
with terminal condition u(T,x)=g(T,x) and growth condition |u(t,x)|≤ c(1+|x|_d) (actually it is enough to consider bounded u; see Remark <ref>). In <cit.> the penalised problem is first solved on bounded domains, via a localisation procedure, and then on unbounded domain by passing to the limit in the size of the bounded domains. Notice that the parameter γ leads to a different penalisation term in (<ref>) compared to <cit.>. Indeed, in <cit.> the penalisation reads ψ_ε(|∇ u|^2_d-(f^γ)^2), whereas here we use the γ-norm |·|_γ of the gradient ∇ u.
By Assumption <ref>(i), for all sufficiently large m∈ we have g,h≡ 0 on [0,T]×(^d∖ B_m) and therefore the localisation performed in <cit.> is superfluous. In particular, we do not need here to introduce functions f_m, g_m, h_m from <cit.>. The penalised problem on a bounded domain [0, T] × B_m is given by
{[ ∂_t u+ u-ru=-h-(1/δ)(g-u)^++ψ_ε(|∇ u|_γ^2-(f^γ)^2), on [0,T)× B_m,; u(t,x)=0, for (t,x)∈[0,T)×∂ B_m,; u(T,x)=g(T,x), for x∈ B_m. ].
For the existence of a solution u^ε,δ;γ_m to (<ref>) the key gradient bounds in <cit.> can be recovered in our set-up with minor adjustments. In particular, <cit.> holds by replacing <cit.> with
-2⟨∇ w^n,∇(|∇ u^n|^2_γ-(f^γ)^2)⟩≤ 2λ |∇ u^n|^2_γ+R̃_n,
where w^n and u^n are two proxies of u^ε,δ;γ_m defined in <cit.> (with w^n,u^n→ u^ε,δ;γ_m for n→∞), λ is an arbitrary constant and R̃_n is a remainder that vanishes when n→∞.
Since |∇ u^n|^2_γ≥γ|∇ u^n|_d^2 then, using (<ref>), <cit.> becomes
0≤ (C_1-λγ)|∇ u|^2_d +C_2 +λ r M_1+R_n+R̃_n,
where C_1, C_2, M_1 are the same constants as in the original paper and R_n is another vanishing remainder.
The rest of the proof is the same, up to choosing λ=γ^-1(C_1+1).
The proof of Proposition <cit.>, which gives another gradient bound for u^ε,δ;γ_m (uniformly in m), requires analogous changes. In particular, <cit.> becomes
ξ⟨∇ w^n,∇(|∇ u^n|^2_γ-(f^γ)^2)⟩≥ λ|∇ u|^2_γ-|∇ u|^3_d|∇ξ|_d-ξR̂_n≥λγ|∇ u|_d^2-|∇ u|^3_d|∇ξ|_d-ξR̂_n,
where ξ is a cut-off function (<ref>) supported on B_m_0 for fixed m_0<m, and R̂_n is a vanishing remainder. The rest of the proof continues as in the original paper. Care is only needed to replace λ̅ in <cit.> with γ^-1λ̅.
Thanks to the gradient bounds, and following the arguments from <cit.>, we show the existence and uniqueness of a solution to (<ref>). Then, letting m→∞, we obtain the unique solution u^ε,δ;γ of (<ref>); for this convergence we need a bound on the penalisation term ψ_ε(|∇ u^ε,δ;γ|^2_γ-(f^γ)^2). For that we argue as in <cit.>. Its proof still holds upon noticing that the last term of <cit.> now reads
1/2ξψ'_ε(ζ̅_n)∑_i,j=1^da_i,j(2⟨∇^0 ∂_x_iw^n,∇^0 ∂_x_jw^n⟩+2γ⟨∇^1 ∂_x_iw^n,∇^1 ∂_x_jw^n⟩)
where ζ̅_n is the same as in <cit.> and w^n is a proxy for u^ε,δ;γ.
The sum can be bounded from below by
∑_i,j=1^da_ij(2⟨∇^0 ∂_x_iw^n,∇^0∂_x_j w^n⟩+2γ⟨∇^1 ∂_x_iw^n,∇^1 ∂_x_jw^n⟩)
=2∑_k=1^d_0∑_i,j=1^da_ij (∂_x_ix_kw^n)( ∂_x_jx_kw^n)+2γ∑_k=d_0+1^d∑_i,j=1^da_ij (∂_x_ix_kw^n)( ∂_x_jx_kw^n)
≥ 2∑_k=1^d_0|∇ (∂_x_kw^n)|_d^2+2γ∑_k=d_0+1^d|∇ (∂_x_kw^n)|_d^2≥ 2γ |D^2 w^n|_d× d^2.
Moreover, the first equation in <cit.> becomes
⟨ a∇ξ, ∇ (^γ(∇ w^n)-(f^γ)^2) ⟩≥ -ξθγ/4|D^2w^n|_d× d^2-16/θγa̅_m^2 d^4 C_0|∇ w^n|_d^2
and the first equation in <cit.> becomes
∑_k=1^d ∂_x_k w^n ℒ_x_k w^n≤(θγ/8)|D^2 w^n|^2_d× d+C_1/γ(N_1+1)^2,
where ℒ_x_k is the second order differential operator defined as
(ℒ_x_kφ)(x)=1/2tr(∂_x_ka(x)D^2φ(x))+⟨∂_x_kb(x),∇φ(x)⟩, for φ∈ C^∞(^d).
The constants C_0,C_1,N_1,a̅_m are independent of and δ and they are defined in <cit.>.
The rest of the proof continues as in the original paper.
From a probabilistic perspective, the solutions of the penalised problems admit representations in terms of 2-player, zero-sum stochastic differential games. Those games depend on a Hamiltonian function <cit.>, which in our setting reads
H^ε,γ(t,x,y):=sup_p∈^d{⟨ y,p⟩_γ-ψ_ε(|p|^2_γ-(f^γ(t))^2)}.
For the problems on bounded domains the results in <cit.> continue to hold. The only observation we need is that the first-order condition for H^ε,γ is the same as the one for H^ε_m used in the proof of <cit.>, i.e.,
y_i =ψ_ε'(|p|^2_γ- (f^γ(t))^2)2p_i for i=1,…, d_0,
γ y_i =ψ_ε'(|p|^2_γ- (f^γ(t))^2)2γ p_i for i=d_0+1,… d.
In vector notation, we have y=ψ_ε'(|p|^2_γ- (f^γ(t))^2)2p, which is precisely the same as in the paragraph above <cit.>. Similarly, <cit.> continue to hold by the same argument.
Another very small tweak affects <cit.> when H^ε is replaced by H^ε,γ. Taking p=(ε/2)y in (<ref>) and using ψ_ε(z) ≤ z/ε yields
H^ε,γ(t,x,y)≥ (ε/2)|y|_γ^2-ψ_ε(|(ε/2)y|^2_γ-(f^γ(t))^2)≥(ε/2)|y|_γ^2-ψ_ε(|(ε/2)y|^2_γ)
≥(εγ/4)|y|_d^2.
Thus <cit.> holds with ε replaced by εγ.
All the remaining arguments from <cit.> remain unaltered. □
§
We give below an extension of the result in <cit.>.
Let X be a real valued càdlàg semimartingale with jumps of bounded variation and let L_t^0(X) be its local time at 0 in the time-interval [0,t]. Then, for any ε∈(0,1) we have
[L_t^0(X)] ≤ 4ε -2[∫_0^t(1_{X_s∈[0,ε)}+1_{X_s≥ε}e^{1-X_s/ε}) X_s^c]
+1/ε[∫_0^t1_{X_s>ε}e^{1-X_s/ε} ⟨ X⟩_s^c]
+ 2 [∑_0≤ s≤ t|Δ X_s|],
where X_s^c and ⟨ X⟩_s^c are the continuous parts of X and of the quadratic variation of X, respectively, and Δ X_s:= X_s-X_s-.
For ε∈(0,1), let
g_ε(y)=0·1_{y<0}+y1_{0≤ y<ε}+(2ε-ε e^{1-y/ε})1_{y≥ε}, for y∈.
Following arguments from <cit.> we have that g_ε∈ C^1(∖{0}), it is semi-concave, i.e., y↦ g_ε(y)-y^2 is concave. Moreover, g_ε is such that
0≤ g_ε(y)≤ 2ε, for y∈;
g'_ε(y)=1_{0≤ y≤ε}+e^{1-y/ε}1_{y≥ε}, for y∈;
g”_ε(y)=0, for y∈(-∞,0)∪(0,ε);
g”_ε(y)=-ε^-1e^{1-y/ε}, for y>ε.
Applying <cit.> to g_ε(X_t) we get
g_ε(X_t)-g_ε(X_0)= ∫_0^t g_ε'(X_s-) X_s^c+1/2∫_0^t g_ε”(X_s)1_{X_s≠0}∩{X_s≠ε} ⟨ X⟩_s^c
+1/2 L_t^0(X)+∑_0≤ s≤ t(g_ε(X_s)-g_ε(X_s-)).
Rearranging terms and multiplying by 2, using |g_ε(X_s)-g_ε(X_s-)|≤ |X_s-X_s-|=|Δ X_s| by Lipschitz property of g_ε and that X has jumps of finite variation, we get
L_t^0(X)≤ 4ε-2∫_0^t g_ε'(X_s-) X_s^c-∫_0^t g_ε”(X_s)1_{X_s≠0}∩{X_s≠ε} ⟨ X⟩_s^c+2∑_0≤ s≤ t|Δ X_s|.
Using the properties of g_ε listed above and applying expectation we obtain the desired result.
Identification of Energy Management Configuration Concepts from a Set of Pareto-optimal Solutions
Felix Lanfermann^1 (corresponding author, [email protected]), Qiqi Liu^2 ([email protected]), Yaochu Jin^3 ([email protected]), Sebastian Schmitt^1 ([email protected])
^1 Honda Research Institute Europe GmbH, Carl-Legien-Strasse 30, D-63073 Offenbach/Main, Germany
^2 Department of Artificial Intelligence, Hebei University of Technology, Beichen, Tianjin, China
^3 Faculty of Technology, Bielefeld University, Inspiration 1, D-33619 Bielefeld, Germany
Optimizing building configurations for an efficient use of energy is increasingly receiving attention in current research, and several methods have been developed to address this task.
Selecting a suitable configuration based on multiple conflicting objectives, such as initial investment cost, recurring cost, and robustness with respect to uncertainty of grid operation, is, however, a difficult multi-criteria decision making problem.
Concept identification can support a decision maker by sorting configuration options into semantically meaningful groups (concepts), further introducing constraints to meet trade-off expectations for a selection of objectives.
In this study, for a set of 20000 Pareto-optimal building energy management configurations, resulting from a many-objective evolutionary optimization, multiple concept identification iterations are conducted to provide a basis for making an informed investment decision.
In a series of subsequent analysis steps, it is shown how the choice of description spaces, i.e., the partitioning of the features into sets for which consistent and non-overlapping concepts are required, impacts the type of information that can be extracted, and that different setups of description spaces illuminate several different aspects of the configuration data, an aspect that has not been addressed in previous work.
Energy Management, Configuration Concepts, Concept Identification, Clustering, Multi-criteria Decision Making
§ INTRODUCTION
Using fossil resources in an efficient way and reducing the consumption of energy to combat global warming have become ever more important.
This mandates an effective management of energy consumers and producers in building facilities, especially for larger industrial facilities, and has drawn attention to a thorough investigation of possible configuration options.
This includes not only site selection <cit.>, but also modelling and optimizing building configurations <cit.>, and developing strategies for optimal operation <cit.>.
Configuration options for buildings and industrial facilities include, for example, investing into renewable energy production systems like photo voltaic (PV) systems to reduce energy cost and greenhouse gas emissions.
Also, a well-managed battery energy storage system can be used to counteract high power demand and mitigate additional stress on the electricity grid and corresponding fees from the energy supplier.
Further, a combined heat and power plant serves the thermal and electric needs and increases independence from the electricity grid, which is in particular important in situations with unstable infrastructure.
Plenty of other options and combinations of appliances and infrastructure systems can be considered.
To choose a reasonable configuration, a multitude of factors, such as investment and regularly recurring cost, emissions, lifetime and profitability of equipment, resilience towards unexpected contingencies, have to be considered.
Unfortunately, in this multi-criteria decision making problem <cit.> many of these configuration objectives are generally conflicting and often hard to balance.
Given all these different options and possible choices to make, finding an appropriate configuration for an industrial facility which respects some explicitly or implicitly given preferences poses a serious problem to the decision maker <cit.>.
Technically, it can be formulated as a search problem where many different configurations are tested and evaluated iteratively.
Such a multi-criteria decision making problem <cit.> can be approached by incorporating user preferences in an a priori, interactive, or posterior way, depending on whether the decision makers have clear expectations about the solution to be obtained <cit.>.
For example, one can predefine a reference point or reference vector to represent the preference before the search process in order to guide the search in the specified direction.
The interactive way usually relies on frequent interactions with decision makers in order to adjust the search direction during the search process.
In many cases, however, the decision makers have no knowledge about the trade-offs between the different configurations, i.e., the true Pareto fronts, so it is difficult to clearly express preferences.
Thus, traditional multi-objective evolutionary algorithms such as <cit.> are usually designed based on the assumption that the search should be conducted in the whole objective space, and then decision makers are encouraged to select desired solutions from the obtained solution set in a posterior way.
The solution set produced by the optimization process can be easily visualized for two- or three-objective optimization problems for helping decision makers make informed decisions, while for problems with more than three objectives, i.e., many-objective problems, the visualization is very challenging <cit.>.
The identification of concepts has many advantages.
Regarding the posterior approach, structuring the configuration process requires analyzing and evaluating multiple different candidate solutions.
To reveal general proximity relations among solutions and to create valuable insight for the decision making process, the candidate solutions can, for example, be allocated to groups, and for each group a knee point <cit.> or knee-region <cit.> based on the objective values may be selected, either for the original high-dimensional Pareto front, or a lower dimensional projection thereof.
Moreover, considering the fact that the number of non-dominated solutions for many-objective problems is usually very large and it is very challenging for decision makers to make decisions based on the obtained candidate solutions, it is beneficial to group similar designs or configurations into concepts <cit.>.
A concept incorporates different candidate solutions that share similar characteristics, typically in terms of their specification (design parameters), but also in terms of other features, such as operation mode and performance criteria (objective values).
A definition of the key terms used in this work can be obtained from Table <ref>.
For the identification of reasonable concepts, it is necessary that identified groups of designs are similar with respect to all describing features.
In the current setting it is also important that this similarity is preserved when considering only subsets of features.
For example, a reasonable requirement is that solutions from one concept are highly similar when considering only some performance criteria (e.g., resilience, emissions and lifetime) but are also similar with regard to cost features (e.g., annual cost and investment cost).
In that way, the concept identification process not only allows us to gain technical insight into the engineering task but also maps the available trade-offs in the configuration for the decision maker.
It also allows for the selection of representative instances in the form of archetypal solutions, which can be further utilized in, e.g., additional refinement steps or subsequent optimization studies under changed boundary conditions <cit.>.
The concept identification method employed in this work <cit.> may be viewed as a special form of clustering technique, that differs from existing methods as it aims at uncovering clusters of samples which are non-overlapping and consistent with respect to multiple description spaces of the joint feature space <cit.>.
Such a description space comprises of a subset of features that characterizes the instances in one aspect <cit.>.
Consistency is given for a cluster, if the instances that are associated with the cluster are assigned to the same cluster in arbitrary, a priori defined description spaces.
Concepts are then defined as the non-overlapping, consistent clusters of solutions in the set of all solutions.
The developed methodology for concept identification in multiple description spaces <cit.> is able to steer the identification process towards a consistent and meaningful distribution of concepts.
However, the a priori determination of suitable description spaces has not yet been addressed in previous work, although it has a significant influence on the outcome.
Previous studies <cit.> show that allocating features into different description spaces introduces a correlation of those features for the identifiable concepts, while having features in the same space allows for arbitrary combinations of those.
Depending on the requirements of a given identification task, both those properties can be either beneficial or unwanted.
In any case, only a thorough selection and partitioning of features into description spaces can lead to plausible and useful concepts with respect to the requirements and provide helpful insights for a decision maker.
In previous work, concept identification techniques have been successfully applied in various engineering design domains <cit.>.
The present work applies the concept identification approach to a complex data set from the energy management domain.
The approach provides valuable insights and guides the decision maker towards finding suitable options for efficient energy management configurations which can be aligned with the decision maker's expectations and constraints.
In that context, the impact of the choice of description spaces on the identified concepts is thoroughly investigated and the effect of allocating the features into separate description spaces or the same is discussed for various options.
Structure of the paper.
The remainder of this paper is structured as follows:
Section <ref> provides an overview of the building energy management configuration problem, a description of the applied concept identification method, as well as a discussion on the importance of the choice of description spaces.
Section <ref> illustrates the results of the set of experiments where concept identification is applied to a large dataset containing several thousand solutions for the described energy management configuration problem.
A first experiment demonstrates the impact that the choice of description spaces has on the potentially identifiable concepts.
In a second experiment, a sensible combination of description spaces based on the given features is chosen to identify concepts of technically feasible and economically reasonable configurations.
A third experiment finalizes the selection process by identifying concepts within the remaining set of solutions from a previous concept and analyzing and evaluating the groups.
Section <ref> discusses the findings and improvement potential before Section <ref> concludes the work and offers an outlook on future work.
§ MATERIALS AND METHODS
§.§ Description of the energy management configuration problem
The data set under investigation was created in the context of many-objective evolutionary optimization algorithms <cit.>.
There, the target was to find a large set of Pareto-optimal configurations by changing the parameters of power supply components, such as PV system, battery storage, and heat storage.
Each configuration is defined by nine different parameters which are listed in the configuration Table <ref>.
For each configuration a set of performance values, i.e., objectives, were evaluated using a Digital Twin simulation model of an existing research campus building.
The simulations were performed with the commercial tool SimulationX[https://www.esi-group.com/products/system-simulation], which is based on the Modelica simulation language[https://modelica.org/modelicalanguage.html].
From the simulation results, a set of ten different partially related and generally conflicting objectives are calculated, which are relevant to produce an informed investment decision.
These objectives are listed in Table <ref>.
The objective Resilience is defined as the duration the company would be able to operate in case no grid power is available, i.e., if energy is provided only by the local PV system, the combined heat and power plant (CHP), and the battery storage.
Medium SOC time share refers to the inverted time ratio in which the battery state of charge (SOC) resides between 30 and 70, which serves as an estimate of battery aging.
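To make the latter definition concrete, the following minimal sketch computes the Medium SOC time share from a sampled SOC trace. It reads the "inverted time ratio" as one minus the in-band share of samples, which is an assumption on our part; the function and the toy trace are illustrative and not part of the SimulationX/Digital Twin evaluation.

```python
import numpy as np

def medium_soc_time_share(soc, lower=30.0, upper=70.0):
    """Objective value for a sampled SOC trace (equidistant time steps).

    Interprets the 'inverted time ratio' as one minus the share of samples
    whose SOC lies inside the medium band [lower, upper] (an assumption).
    """
    soc = np.asarray(soc, dtype=float)
    in_band = (soc >= lower) & (soc <= upper)
    return 1.0 - in_band.mean()

# toy usage: a synthetic daily SOC profile sampled every 15 minutes
t = np.arange(24 * 4)
soc_trace = 50 + 30 * np.sin(2 * np.pi * t / (24 * 4))
print(medium_soc_time_share(soc_trace))
```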
In <cit.>, state-of-the-art surrogate-assisted evolutionary algorithms such as
RVMM <cit.>, K-RVEA <cit.>, and REMO <cit.>
are utilized to produce a large, high-quality data set for the many-objective energy management problem.
From the entirety of all feasible solutions that these algorithms produce, all dominated solutions are discarded and only
20000 non-dominated Pareto-optimal configurations are used for further analysis in this work.
More details on the parameters, objectives and the multi-objective optimization approaches can be obtained from <cit.> while the simulation approach itself is discussed in <cit.>.
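For illustration, the dominance filtering described above can be sketched as follows. This is a generic brute-force Pareto filter, written under the assumption that all objectives are expressed in minimization form (maximized quantities such as resilience negated beforehand); it is not the tooling used in <cit.>.

```python
import numpy as np

def non_dominated_mask(F):
    """Boolean mask of the non-dominated (Pareto-optimal) rows of F.

    F has shape (n_solutions, n_objectives), all objectives to be minimized.
    A row is dropped if some other row is at least as good in every objective
    and strictly better in at least one. Brute force, O(n^2 * m).
    """
    F = np.asarray(F, dtype=float)
    keep = np.ones(F.shape[0], dtype=bool)
    for i in range(F.shape[0]):
        dominates_i = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        if dominates_i.any():
            keep[i] = False
    return keep

# usage sketch: objectives = np.loadtxt("objectives.csv", delimiter=",")
# pareto_set = objectives[non_dominated_mask(objectives)]
```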
§.§ Introduction to concept identification
Concept identification is an unsupervised method which provides sets of samples that are similar with respect to multiple sets of features <cit.>.
It differs from clustering algorithms,
such as subspace clustering <cit.>, multi-view clustering <cit.>, co-clustering <cit.>, biclustering <cit.>, two-mode clustering <cit.>, direct clustering <cit.>, and block clustering <cit.>, by preserving the similarity between samples of the same concept in each description space, i.e., when observing the a priori defined subsets of features in isolation <cit.>.
§.§ The importance of the choice of description spaces
The choice of description spaces impacts the concepts which are identified.
The following examples show that each description space introduces a constraint into the process and influences what type of concepts can be identified.
It will become clear that the more description spaces are used, the more restricted the potential concepts are.
Allocating features into different description spaces introduces a correlation requirement of those features for the identifiable concepts.
In an example data set, different description spaces lead to different (potential) concepts (Fig. <ref>).
In order to show the influence of the choice of description spaces, we consider a data set containing data samples that each can be described by three features (investment cost, yearly total cost and resilience) and we illustrate the four possible choices in the following, as shown in Fig. <ref>.
* If only one feature is considered as the only description space, the potential concepts will be discriminated based on the one single feature alone.
If, for example, two concepts were to be identified, the three-dimensional space would be divided into two boxes that each could host one concept (Fig. <ref> (a)).
Samples can then be assigned to one of the two different concepts based on the value of the one chosen feature alone.
* If, however, two features are chosen as separate description spaces, the potential concepts would be divided based on both of these features separately.
A concept identification process that aims at identifying two concepts would therefore divide the three-dimensional space into two non-neighboring boxes along the chosen features (purple and yellow boxes in Fig. <ref> (b), or analogously the empty space).
The two concepts would then be located in those boxes.
The requirement that the identified concepts must not overlap even when only one description space is viewed, i.e., when the data is projected onto one of the chosen feature axis in this example, leads to this restriction to non-adjacent regions.
* If a combination of two features were chosen to span one description space, the potential concepts will be non-overlapping regions in the two-dimensional plane spanned by those two features.
The concept regions defined in the two-dimensional description space are then extruded in the dimension of the remaining feature.
Choosing the shape of the concept regions as ellipses in the description space, the two regions available to the concepts are given as two cylindrical volumes (Fig. <ref> (c)). (Other arbitrary shapes can be applicable for the concept regions, depending on the preference of the decision maker.)
Comparing this choice of one description space containing both features to the previous setup (as in Fig. <ref> (b)), where both features are put into separate description spaces, elucidates one core aspect of this concept identification approach: concepts are non-overlapping even if several features are projected onto a single description space.
Fig. <ref> (b) implies that each feature value can be separately used to uniquely identify a concept.
In contrast, in Fig. <ref> (c), both feature values are necessary to uniquely identify a concept.
In general, the concepts will overlap if they are projected onto only one feature (resilience in the example).
* If two separate description spaces were chosen, one containing two features and the other one feature, the potential concept regions for two concepts partition the full three dimensional feature space into four regions.
Assuming the same ellipsoidal concept regions in the two-dimensional description space as before, the extruded cylindrical volumes would be separated into two disjoint parts along the one-dimensional description space.
The two concepts could then be identified in two non-neighboring volumes (yellow and purple in Fig. <ref> (d)).
Other locations of the concepts are forbidden due to the requirement of non-overlapping concepts in the projections into each description space.
It should be noted that the above described regions only characterize the possible locations for concepts, and data samples located in one of these regions are not automatically associated with a concept.
For complex data sets, there are typically many data samples inside these regions that are not associated with any concept.
Also, as indicated in the examples above, the description spaces used for identifying the concepts do not need to include all features.
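The non-overlap requirement sketched in these examples can be checked mechanically for a given assignment of samples to concepts. The snippet below does so with axis-aligned bounding boxes per concept and description space, which is a simplified, box-shaped reading of the criterion rather than the formulation of <cit.>; all names are illustrative.

```python
import numpy as np

def concepts_consistent(X, labels, description_spaces):
    """Check the non-overlap requirement for a given concept assignment.

    X: (n_samples, n_features) array; labels: concept id per sample, with -1
    for samples not assigned to any concept; description_spaces: list of lists
    of feature indices. The check passes if, in every description space, the
    axis-aligned bounding boxes of the concepts are pairwise disjoint.
    """
    X = np.asarray(X, dtype=float)
    labels = np.asarray(labels)
    concept_ids = [c for c in np.unique(labels) if c != -1]
    for ds in description_spaces:
        proj = X[:, ds]
        boxes = {c: (proj[labels == c].min(axis=0), proj[labels == c].max(axis=0))
                 for c in concept_ids}
        for i, a in enumerate(concept_ids):
            for b in concept_ids[i + 1:]:
                lo = np.maximum(boxes[a][0], boxes[b][0])
                hi = np.minimum(boxes[a][1], boxes[b][1])
                if np.all(lo <= hi):  # boxes intersect in every dimension of this space
                    return False
    return True

# e.g. description_spaces = [[0], [1, 2]]   # investment cost | yearly cost + resilience
```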
These illustrative example sketches should make it clear that the choice of description spaces has a large influence on the identifiable concepts.
However, a reasonable selection of description spaces is often not intuitive and a difficult task on its own. It is believed that the choice of description spaces should align with the preferences of the decision maker in order to provide the most useful concepts.
The decision maker has to decide whether each feature should be considered in the concept identification approach, and if so, in which description space it should be included.
Two essential insights from the previous discussion can be gained:
(i) Features in the same description space can be arbitrarily combined in one concept and every possible combination of feature values can also be represented by a separate concept.
(ii) Features in different description spaces lead to stronger correlations in the feature values for concepts.
Because the feature values in one description space condition the feature values in another description space, only a subset of all possible combinations of feature values can be represented in concepts.
So, for example, if a set of samples needs to be uniquely identifiable on the basis of one feature value, this feature needs to be considered as a separate one-dimensional description space.
Similarly, if a set of samples needs to be uniquely identifiable based on the combination of multiple features, all these features need to be considered in one description space.
On the other hand, if features are assigned to different description spaces, the feature values in one description space impose a condition on the feature values in the other description spaces.
Of course, how strong this conditioning and the induced correlations will be, strongly depends on the structure of the data set.
But in any case, the resulting concepts can only represent subsets of combinations of these feature values.
§ RESULTS
We need to identify concepts in the data.
The analysis of the trade-off relations between objectives becomes more difficult with the increase of the number of objectives. Since the optimization of the building energy management involves ten objectives, it is very challenging to assess the trade-off relations between all those objectives and arrive at an informed decision for selecting the most appropriate configurations given some specific design goals.
Therefore, it is highly desirable to provide a reasonable selection approach to a decision maker to allow for an educated investment decision for the ten-objective building energy management problem.
Possible solution candidates need to fulfill certain economic requirements, and span the trade-off options along relevant criteria which are predefined by the decision maker.
Based on the 20000 Pareto-optimal configuration options, a series of experiments is conducted to illustrate how energy management configuration concepts can be identified from the present data set in a meaningful way.
In the first experiment, only three major objectives related to cost and resilience are considered in the identification process to highlight the universal importance of the choice of description spaces.
In the second experiment, two parameters and six objectives are selected and split into description spaces to identify technically feasible and useful concepts.
In the third experiment, the samples from one of the concepts in the second experiment are re-analyzed for a further in-depth assessment of the concept options based on the three description spaces obtained in the first experiment.
The concept identification process is conducted by first specifying the number of concepts to be identified.
Each potential concept region is given by a parameterized geometric shape, such as the ellipses used in the illustrations above.
The parameters of the concept regions are obtained by optimizing a metric characterizing the quality of the resulting concepts using an evolutionary optimization approach (we use CMA-ES <cit.> in this work).
The optimization is run for 1000 generations with a population of λ=20 candidate solutions.
The details of the concept identification approach are described in <cit.>.
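The following sketch illustrates how such an optimization loop could be set up with the publicly available pycma implementation of CMA-ES. The ellipsoidal parameterization of the concept regions and the simple quality metric (rewarding uniquely covered samples and penalizing overlap) are placeholders for the actual formulation in <cit.>; the population size and iteration budget follow the settings stated above.

```python
import numpy as np
import cma  # pycma package, pip install cma

rng = np.random.default_rng(0)
X = rng.random((2000, 3))        # placeholder for the normalized feature matrix
DS = [[0], [1, 2]]               # description spaces, e.g. invest cost | yearly cost + resilience
N_CONCEPTS = 3
DIM = N_CONCEPTS * sum(2 * len(ds) for ds in DS)   # centre + radii per concept and space

def membership(theta):
    """Mark, per concept, the samples lying inside its ellipsoid in every description space."""
    inside = np.ones((X.shape[0], N_CONCEPTS), dtype=bool)
    p = np.abs(np.asarray(theta)).reshape(N_CONCEPTS, -1)  # abs keeps radii positive
    for c in range(N_CONCEPTS):
        offset = 0
        for ds in DS:
            k = len(ds)
            centre = p[c, offset:offset + k]
            radii = p[c, offset + k:offset + 2 * k] + 1e-6
            offset += 2 * k
            inside[:, c] &= (((X[:, ds] - centre) / radii) ** 2).sum(axis=1) <= 1.0
    return inside

def quality(theta):
    """Placeholder concept-quality metric: reward uniquely covered samples, punish overlap."""
    inside = membership(theta)
    unique = (inside.sum(axis=1) == 1).sum()
    overlap = (inside.sum(axis=1) > 1).sum()
    return -(unique - 5 * overlap)   # CMA-ES minimizes

es = cma.CMAEvolutionStrategy(0.5 * np.ones(DIM), 0.2, {"popsize": 20, "maxiter": 1000})
while not es.stop():
    thetas = es.ask()
    es.tell(thetas, [quality(t) for t in thetas])
membership_best = membership(es.result.xbest)   # concept membership for the best region layout
```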
§.§ Experiment 1: Concept identification based on three main objectives
An investment decision for an energy management building configuration has to be made, given multiple major external requirements and constraints.
Key aspects are initial investment cost, the annually recurring cost for maintenance, and resilience as measures for the effectiveness of the installations (see Section <ref>).
A reasonable request to the engineer would be to deliver configuration concepts that are distinguishable in terms of investment cost, while also providing trade-off options between maintenance cost and resilience.
That means the identified concepts should be ranked based on their investment cost alone.
This is achieved by splitting the corresponding objectives into two separate description spaces, the first consisting of the investment cost only, and the second comprising yearly cost and resilience together.
The target for the concept identification process is to identify three concepts in these two description spaces.
As required, the three identified concepts represent high, medium, and low investment cost solutions, as can be seen by the purple, yellow, and green concepts in the left panel of Fig. <ref> (a).
In the second description space (right panel of Fig. <ref> (a)), the three concepts encompass trade-off options between yearly cost and resilience values.
The three concepts fully meet the requirement for the decision process.
The concept that includes the configurations with the highest investment cost (purple) indeed contains the configurations with the best trade-offs between high resilience and low annual cost.
On the other hand, the concept encompassing the lowest investment cost configurations (green) leads to the overall largest annual cost with rather low resilience.
A typical situation of a complex real-world data set can be observed as well, since many samples are not associated with any concept at all (grey samples).
This is prominently visible in the second description space (right panel) as many samples with larger resilience, which are located above the low investment cost concept (green samples), are not associated with any concept.
This is a direct consequence of choosing the investment cost as a separate description space, as these solutions have too large investment cost and would be overlapping with the yellow concept in this description space.
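This behavior follows directly from the membership rule: a configuration is associated with a concept only if it lies inside that concept's region in every description space simultaneously. A minimal sketch of this rule, assuming axis-aligned boxes and illustrative column indices for the two description spaces of this experiment, is given below.

import numpy as np

# Illustrative columns: 0 = investment cost, 1 = annual cost, 2 = resilience.
rng = np.random.default_rng(1)
data = rng.random((20000, 3))
description_spaces = [[0], [1, 2]]   # DS1: investment cost; DS2: annual cost and resilience

# One (lower, upper) box per description space for a single, hypothetical concept.
concept_boxes = [
    (np.array([0.0]), np.array([0.3])),           # box in DS1
    (np.array([0.4, 0.6]), np.array([0.9, 1.0])), # box in DS2
]

def is_member(sample, boxes):
    # The sample must fall inside the concept's box in *every* description space.
    for cols, (lower, upper) in zip(description_spaces, boxes):
        values = sample[cols]
        if not (np.all(values >= lower) and np.all(values <= upper)):
            return False
    return True

# Samples that fit in DS2 but violate the DS1 box remain unassigned (grey).
unassigned = [i for i, s in enumerate(data) if not is_member(s, concept_boxes)]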
To illustrate how the choice of the description spaces affects the concepts, a different setup is considered in the following.
The first description space is given by the annual cost alone, while the second space is spanned by investment cost and resilience.
The economic rationale for such a choice is different: it puts most emphasis on the annual cost, along which all concepts should be ranked, while the trade-off between the other two objectives is considered jointly.
In Fig. <ref> (b), the concepts identified for the previous choice of description spaces are plotted in the novel setup.
It can be observed that in the new projection, the previously identified concepts do not meet the requirements of the decision maker.
The large visible overlap between all three groups in the first description space prohibits a unique association of configurations to concepts based on annual cost alone.
Also, the second description space clearly shows the separation along the investment cost (as imposed by the previous concept identification process), but does not provide trade-off options regarding investment cost and resilience to the decision maker.
However, conducting the concept identification process within this new set of description spaces gives the desired results (Fig. <ref> (c)).
The process delivers three unique concepts, separable with respect to the isolated feature of yearly total cost, while at the same time providing trade-off options for the joint space of investment cost and resilience.
The first experiment demonstrates that the developed concept identification approach can be used to define meaningful concepts in complex data sets and provides valuable insights as basis for an informed decision.
The freedom to choose the partitioning of the full feature space into description spaces makes it possible to meet the requirements of the decision maker, since the potentially identifiable concepts are significantly impacted by the definition of the description spaces.
§.§ Experiment 2: Concept identification based on nine parameters and six objectives
The previous example only considers the three major objectives related to cost and resilience, without taking the other features into account.
To make a well-informed decision, more aspects of the configurations, and in particular some technical aspects need to be considered as well.
To ensure the technical feasibility and economic sensibility of the selected samples, two parameters and six objectives are included in the concept identification and split into four distinct description spaces, as presented in Table <ref>.
The two parameters, i.e., C_b and P_PV, are chosen to be represented in the same description space (Description space 1 in Table <ref>), as any combination of these values can make sense and can create a valid configuration.
The six objectives are separated into three description spaces (Description space 2 to 4 in Table <ref>), each of which considers only two objectives.
This choice is motivated by the following insights.
The cost variables C_invest and C_annual are placed into two separate description spaces to allow for distinction between concepts based on either of these two values.
This helps to avoid uninformative concepts where, for example, the investment cost is high but the annual cost ranges across all possible values.
The objectives b̅ (mean battery state of charge) and E_f (energy fed into the grid) are represented in the same description space, as combinations of these two values naturally map to operating modes such as cost reduction via peak-shaving (low E_f and large b̅) or cost reduction via PV utilization (large E_f).
Similar to the cost objectives, P_p and E_d are separated to avoid unwanted concepts with high grid supply power peaks and large amounts of discharged energy.
This should make sure that if an expensive battery energy storage system is implemented, it will be used for power peak shaving.
E_d, b̅, and C_invest are also separated to avoid unwanted concepts that represent configuration with large energy capacity (high C_invest), that is not used efficiently (high b̅ and low E_d).
It should be noted that these choices of description spaces are not mandatory to achieve the stated goals.
They should rather serve as guidelines, and especially for complex situations where many features need to be included, some choices might need to be reconsidered after inspecting various outcomes of the concept identification process.
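Written down as a small configuration structure, the partition used in this experiment could look as follows; since Table <ref> is not reproduced here, the exact pairing of the six objectives into description spaces 2–4 is reconstructed from the constraints stated above and should be read as an assumption, as should the feature names and comments.

# Description spaces for experiment 2 (feature names and pairing are illustrative).
description_spaces = {
    "DS1 (parameters)": ["C_b", "P_PV"],       # battery size and PV system size
    "DS2 (objectives)": ["C_invest", "P_p"],   # investment cost and maximum grid power peak
    "DS3 (objectives)": ["C_annual", "E_d"],   # annual cost and discharged battery energy
    "DS4 (objectives)": ["b_mean", "E_f"],     # mean battery SOC and energy fed into the grid
}

# Sanity checks mirroring the design rules stated in the text.
def same_space(a, b):
    return any(a in feats and b in feats for feats in description_spaces.values())

assert not same_space("C_invest", "C_annual")  # keep the two cost measures apart
assert same_space("b_mean", "E_f")             # SOC and fed-in energy map to operating modes
assert not same_space("P_p", "E_d")            # avoid high peaks combined with high discharge
assert not same_space("E_d", "b_mean") and not same_space("E_d", "C_invest")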
For this example, the concept identification approach additionally includes 30 solutions of particular interest as user-preference samples (Fig. <ref>).
These are specified by the decision maker a priori and the concept identification process then generates concepts which should include these samples.
Thus, these user-preference samples are a means to anchor the concepts in various regions of interest.
For the current example these are the following (a possible selection of such samples is sketched after the list):
* Ten samples with very low investment cost (denoted by green dots).
* Ten samples with good trade-offs between low investment cost and a low power peak (denoted by orange dots).
* Ten samples with low annual cost (denoted by blue dots).
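A possible way of extracting such preference samples from the Pareto-optimal set is sketched below; the column names and the simple trade-off score used for the second group are assumptions for illustration, not the selection procedure actually applied by the decision maker.

import numpy as np
import pandas as pd

# Illustrative stand-in for the 20000 Pareto-optimal configurations.
rng = np.random.default_rng(2)
pareto = pd.DataFrame({
    "C_invest": rng.random(20000),   # investment cost (normalized)
    "C_annual": rng.random(20000),   # annual cost (normalized)
    "P_p": rng.random(20000),        # maximum grid power peak (normalized)
})

low_invest = pareto.nsmallest(10, "C_invest")    # group 1: very low investment cost
low_annual = pareto.nsmallest(10, "C_annual")    # group 3: low annual cost

# Group 2: trade-off between low investment cost and low power peak,
# here scored as an equally weighted sum of the two normalized objectives.
score = pareto["C_invest"] + pareto["P_p"]
good_tradeoff = pareto.loc[score.nsmallest(10).index]

preference_samples = pd.concat([low_invest, good_tradeoff, low_annual])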
In this example, the concept identification process is conducted by optimizing the concept quality metric for three concepts, again using CMA-ES, for 370 generations with a population size of 61.
The process identifies three concepts (Fig. <ref>)
which contain 435 (purple), 1845 (green), and 3054 (yellow) samples (see Exp. 2 in Fig. <ref>).
The remaining samples are not associated with any concept.
In the first description space of the objectives (top right of Fig. <ref>) a primary division based on investment cost is visible.
Concept 1 (purple) and concept 2 (green) are associated with low investment cost, while concept 3 (yellow) shows high investment cost.
An intuitive reciprocal trend is seen in objective space 2 (lower left): concept 3 is associated with low annual cost, while concepts 1 and 2 show high annual cost.
The size of the PV system and the battery are two factors that have a high impact on the investment cost.
Consequently, concept 3 has both large PV systems and large batteries, while concept 1 utilizes small batteries but larger PV systems (P_PV) and concept 2 includes small PV systems but large batteries (C_b).
The configurations of concept 3 lead to a high amount of PV produced electricity, which is fed back into the grid (E_f) to a significantly larger extent than for concept 1 and 2.
Accordingly, this assures overall lower annual cost.
A secondary division between concepts 1 and 2 is present along the objectives maximum power peak (P_p), yearly discharged battery energy (E_d), and mean battery SOC (b̅).
Concept 1 generally has lower power peak values, higher amounts of energy discharged from the battery and a higher average battery SOC than concept 2.
Concept 1 therefore represents configurations where power peak shaving is done, resulting in low P_p, which requires a battery with a relatively high mean SOC that can be readily used as soon as an imminent power peak is detected.
Concept 2 represents low investment cost solutions containing only small PV systems.
However, it includes configurations with batteries of all sizes, where the battery is generally not used effectively.
Many configurations utilize large batteries without specific benefit to the overall system.
Therefore, this concept does not represent one coherent set of configurations, and further analysis is necessary to distinguish useful configurations.
It would be sensible to refine concept 2 in a subsequent post-processing step, e.g., to find the best solutions with small PV systems and small batteries within the concept.
Further insights into the concepts can be gained by analyzing their parameter values (Fig. <ref>).
While the concepts are not different in some parameters like maximum battery SOC b_max, PV inclination and orientation angles α_PV and β_PV, others clearly reveal systematic differences.
For example, the battery controller parameters b_min (minimum battery SOC) and P_c (battery charging thresholds) have characteristically different ranges for the concepts.
In particular, they confirm the difference in the battery utilization between the two low-investment cost concepts 1 (purple) and 2 (green).
On average, the solutions in concept 1 have lower minimum battery SOCs (b_min) and higher battery charging thresholds (P_c) than concept 2.
This results in a behavior of solutions in concept 1, where the batteries are charged even when the building is consuming energy from the grid, and the batteries can be almost completely discharged when necessary.
Concept 3 represents the opposite to concept 1: generally higher investment, but low annual cost, with large PV systems and large batteries.
However, it covers a relatively large span of the objective values, for example in terms of investment cost.
For illustrative purposes, it is assumed that the high-investment cost segment is of interest to the decision maker, and consequently, concept 3 is chosen to be refined in a secondary concept identification step, which will be discussed in detail in the following subsection.
§.§ Experiment 3: Concept identification based on three objectives for high investment cost solutions
In this subsection, only the samples from concept 3 of the previous experiment 2 are selected for further refinement.
Investment cost is chosen as one description space, resilience and annual cost as another, which is the same setup as experiment 1 in Section <ref>.
This way, technically meaningful configurations with high investment cost of the previous analysis can be re-analyzed in terms of the trade-off between annual cost and resilience, while segmenting them according to investment cost again.
The concept identification process is again conducted by optimizing the concept quality metric using evolutionary optimization (CMA-ES, 400 generations, population with 22 candidates).
The algorithm is configured to identify three concepts, and no preference samples are specified.
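Expressed as code, this refinement step amounts to restricting the data to the samples of the selected concept and repeating the identification run; the sketch below is self-contained, uses random stand-ins for the data and labels of experiment 2, and collapses the per-description-space quality metric into a single simplified box criterion.

import numpy as np
import cma

rng = np.random.default_rng(3)
data = rng.random((20000, 3))             # columns: investment cost, annual cost, resilience
labels = rng.integers(-1, 3, size=20000)  # stand-in for the concept labels of experiment 2

subset = data[labels == 2]                # keep only the samples of concept 3 (label 2 here)
description_spaces = [[0], [1, 2]]        # same split as in experiment 1; listed for reference,
                                          # the simplified metric below uses one joint box instead

def quality(params, samples=subset, n_concepts=3):
    # Simplified quality: one joint box per concept, reward uniquely covered samples.
    boxes = np.asarray(params).reshape(n_concepts, 2, samples.shape[1])
    lower = np.minimum(boxes[:, 0], boxes[:, 1])[:, None]
    upper = np.maximum(boxes[:, 0], boxes[:, 1])[:, None]
    inside = ((samples[None] >= lower) & (samples[None] <= upper)).all(axis=2)
    return -float((inside.sum(axis=0) == 1).sum())

x0 = rng.random(3 * 2 * subset.shape[1])
es = cma.CMAEvolutionStrategy(x0, 0.3, {'popsize': 22, 'maxiter': 400})
es.optimize(quality)                      # 400 generations, 22 candidates, no preference samples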
The identified concepts are clearly separable with respect to the investment cost and demonstrate the achievable trade-off behavior for annual cost and resilience (Fig. <ref>).
Generally, it can be observed that higher investment allows for better resilience values at similar or even lower annual cost.
Thus, this further processing step enables the desired fine-grained analysis of the high investment cost configurations and allows for a well-informed decision.
§ DISCUSSION
The experiments demonstrate that concept identification produces meaningful and reasonable groups of energy management configuration options that are technically feasible and economically valid.
Technically, the approach maximizes a concept quality metric using a numerical optimization procedure.
Due to the complexity of the optimization problem, the result is sensitive to small variations in the setup, such as the initial conditions or the choice of the optimizer <cit.>, and thus most likely represents only a local optimum of the optimization problem.
In addition, the quality metric operates on technical aspects of the concept distribution, like size and overlap of the concepts, and incorporates the usefulness or desired trade-off relations only indirectly via the user-defined preference samples.
As a result of both these aspects, the concepts identified in a specific setup are not unique, and each concept as a whole does not necessarily make sense to the decision maker.
The latter is the case for concept 2 (green) in experiment 2, where, apart from representing low investment cost and large annual cost, multiple different types of configurations were subsumed in this concept.
Therefore, the best way to utilize the concept identification method is in an iterative workflow where multiple concept identification processes are chained together, illuminating different aspects of the decision making problem in each analysis step.
The experiments of this work show how the choice of description spaces impacts the energy management configuration concepts derived from the identification process.
While the influence of the partitioning of the feature set into description spaces is generally not straightforward and to some extent non-intuitive, some insights into the effect are available:
putting features into the same description space allows for each combination of feature values to be represented as a separate concept, regardless of other concepts (given that the concepts do not directly overlap).
On the other hand, putting features into separate description spaces results in a tendency for feature values to be correlated, as only certain subsets of feature combinations can be realized simultaneously.
However, which concepts will be finally identified depends on many factors such as the number of desired concepts, the number and dimension of the several description spaces, the user preferences specified by user-defined samples, the details of the optimization process, and of course also the structure of the data set itself.
In experiment 1, a simple division into investment cost (description space 1) and the combination of annual cost and resilience (description space 2) leads to concepts that are separable (and ranked) based on investment cost and—at the same time—provide trade-off options for the other two objectives.
The first experiment produces valid configuration concepts, though it neglects certain issues of technical feasibility and economic sensibility.
These aspects are integrated into the setup of experiment 2.
The allocation of two objectives into the same description space steers the identification process towards trade-offs for them.
A separation into different description spaces introduces a correlation requirement for these particular feature values.
This way, the process can, for example, identify a concept of low investment cost that effectively utilizes the stationary battery for peak power shaving (purple concept 1).
The third experiment illustrates a refinement process for the concept of high investment cost (yellow).
This concept covers a wide range of configurations and for a well-motivated decision in this region a more detailed analysis is desirable.
For the subsequent analysis, the resilience is determined to be an informative objective and therefore, the same setup of the description spaces as in experiment 1 is chosen.
Based on the separation into investment cost (description space 1), as well as annual cost and resilience (description space 2), the high investment cost concept from experiment 2 is divided into three sub-concepts that are themselves ranked with respect to the investment cost and provide trade-off options for annual cost and resilience.
As a result, the decision maker is now in a good position to arrive at a decision for a configuration.
Of course, visualizations of the finally identified concepts in other description spaces or post-processing a selected subset of data again is entirely possible and for complex decisions surely warranted.
But for the demonstrative purpose of this work, we refrain from showing more such in-depth analysis here.
§ CONCLUSIONS
The present work studies the identification of semantically meaningful concepts in a data set consisting of 20000 Pareto-optimal solutions from a ten-objective building energy management configuration problem.
The three experiments show that the recently proposed concept identification approach can provide valuable insights into complex data sets and support a decision maker in arriving at an informed decision.
The series of experiments conducted here first analyzes the complete data set based on the global objectives of investment cost, annual cost and resilience.
Using the total investment cost as a separate description space reveals that it is a valid indicator for partitioning the data set into low, medium, and high investment cost configurations.
Then, including many more parameters and objectives, technically meaningful and feasible groups are identified as concepts in a second processing step.
For example, one of the identified concepts incorporates configurations where the battery is used for peak shaving.
Another concept includes configurations with high investment cost, which still show a large variety in the other objectives and parameters.
Therefore, as a last step, these high investment cost configurations are further analyzed.
The final results allow the decision maker to select those configurations in the large investment cost regime that meet their criteria for resilience and annual cost.
Future work could focus on defining a procedure to automatically assign features of a data set to description spaces, thereby increasing the accessibility of the method, independent of the availability of domain knowledge.
In conclusion, the current work demonstrates the usefulness of identified configuration concepts by gaining insight into the engineering task and the economic reasoning behind the configuration problem of energy management systems.
Due to the non-deterministic nature of the concept identification process and the complexity of the data set, an iterative workflow in which multiple concept identification processes are applied one after another can generate very valuable insights, as shown in this work.
During the process, the decision maker has the chance to revise previously identified sets of data samples.
By choosing different setups of description spaces, several different aspects of the data are illuminated.